In its new series "Aric asks...", ARIC conducts short interviews with interesting personalities from the AI industry, starting with Robert Kilian, Managing Director of CertifAI, a certification and software provider for AI systems. His keynote on software testing and risk-based regulation as the basis for trust in AI will be streamed live on the ARIC LinkedIn channel.
ARIC: What is CertifAI? What exactly does CertifAI do?
Robert Kilian: CertifAI is a testing and inspection company for AI systems that was founded as a joint venture between PwC, DEKRA and the City of Hamburg. With our experts in AI technology and regulation, we are able to evaluate AI systems according to criteria such as robustness, security and fairness, as well as their compliance with regulatory requirements. Ultimately, we ensure quality assurance, particularly in the target sectors of automotive, banking, smart manufacturing and healthcare. To this end, we develop and operate software for testing AI systems and carry out audits to test and certify these systems.
In your opinion, when is an AI trustworthy?
We define trustworthiness of AI along seven dimensions:
Fairness, for example, is reflected primarily in the data used to train and test an AI system: that data must be balanced and representative in order to avoid discrimination once the AI is in use. Human autonomy & control concerns the division of tasks between humans and AI and the associated empowerment of AI users. In addition, our model for reliable AI development follows a four-stage approach, which also determines how we test AI systems: a reliable development process, a targeted risk analysis, edge-case tests based on the Operational Design Domain (ODD), and statistical evidence of error rates.
What opportunities does a certification system offer?
Firstly, the certification of AI systems, especially those that involve high risks, is crucial for strengthening trust in AI. After all, a certificate is nothing more than an independent seal of approval confirming a system's conformity with certain legal or voluntary quality standards. Secondly, a certification system ensures that AI systems function reliably: if AI systems have to undergo independent testing before they are placed on the market, the result is more mature systems that are actually ready for that market. Thirdly, a certification system provides legal certainty for the companies developing AI. They can be confident that their systems meet the legal requirements if they comply with the standards required for certification. Certification therefore increases the willingness to invest in AI development and promotes innovation. European legislators have also taken up these advantages and, with the AI Act, have made a conformity assessment of AI systems in high-risk application areas a prerequisite for placing such systems on the market. We are currently seeing similar developments in every major industrial region around the world.
Which AI topic should we talk more about?
The technical standards and harmonized norms currently being developed for the testing of AI systems. They show how we can achieve product safety, and thus trust in the technology. And if companies participate sufficiently in the standardization organizations, this can also become a real advantage for them.
Listen to the episode on Spotify
In this episode, Prof. Dr. Frauke Schleer-van Gellecom and Andreas Odenkirchen, the hosts from PwC Deutschland, talk to our CTO Jan Zawadzki about the challenges and opportunities of certifying and testing AI solutions.
Jan gives insights into the work of CertifAI and explains how the company supports the testing and certification of AI systems. He emphasizes the importance of reliable, trustworthy AI solutions and explains how structured testing and compliance with regulatory requirements help companies develop safe AI products.
Frauke and Andreas join Jan to discuss the practical challenges companies face when implementing AI solutions and share real-life examples. They address the importance of data quality and risk analysis and how these factors influence the development and testing of AI.
Finally, Jan gives an optimistic forecast for the future of AI, emphasizes the advantages of reliable AI products and encourages medium-sized companies to tackle the topic with courage and professional support.
Tune in to gain deeper insights into how we’re shaping the future of software-based AI testing at CertifAI.