
EU AI Act: What Are My Obligations?

We are delighted to share this recap of our recent event, “EU AI Act: What Are My Obligations?”. Last week, we gathered leaders from AI policy, legal, technical, and operational backgrounds for an evening of insights, discussion, and networking, capped off by a case study of EU AI Act preparation in action!

For those who missed the event, our primary focus was to provide actionable steps and insights for firms looking to comply with the Act ahead of the competition. As with GDPR, enterprises that waited until full enactment struggled heavily; our panelists reflected on this, discussing the lessons it holds for both companies and policymakers. To that end, we created a mini compliance “checklist” built around the tenets of the Act to support companies in their readiness; if you missed grabbing a copy, you can view it here!

We extend sincere appreciation to our speakers, whose impactful perspectives made the success of this event possible. Dan Nechita’s forward-looking keynote set the stage for the next steps in the Act (if you could not make it to the event, you can find a video of the keynote here). We then welcomed Emily Gillett, Anda Bologa, Tarek R. Besold, and Dr. Benedikt Flöter to a panel discussion moderated by Anastasia Movcharenko, whose diverse perspectives on and engagement with AI and its regulation made for a lively exchange. Finally, our case study, presented by Johan Wouters, showcased Calvin’s EU AI Act modules through a live demo of a high-risk model’s EU AI Act classification.

Our aforementioned EU AI Act Checklist dives deeper into the requisites for each of the Act’s requirement groups, ranging from model assessments to AI Risk and Quality Management Systems (RMS & QMS), depending on ownership status and risk severity. At our event, legal and general-purpose AI expert Emily Gillett noted that the EU AI Act is a regulatory stride across the entire AI value chain; its dual approach of regulating both providers and deployers of AI ensures that trustworthiness is carried all the way from creation into deployment. This was echoed in the European Commission’s webinar, where Dr. Tatjana Evas stated that “testing is really key to the risk management system”, with QMS and RMS being vital to fulfilling the Act’s compliance requirements. With that said, how do these standards translate into practice? Should firms fundamentally change their AI use cases, or even their business strategies, to avoid regulatory strain?

A focal conversation of our roundtable asked whether minimizing high-risk use cases in favor of minimal-risk AI (or no AI at all) would serve firms better than the compliance effort of aligning with the EU AI Act. The consensus across our legal, policy, and technical panel was that such an approach would leave an AI portfolio short of its economic and operational potential; ultimately, high-risk AI use cases yield high rewards, and while models can be adapted to employ human-in-the-loop strategies, doing so remains resource-draining in the long run. Companies may therefore opt to avoid high-risk AI, but that puts a strong competitive advantage at stake - especially in margin-oriented industries like finance, insurance, and telecommunications. In these cases, employing a set of technical tools, such as Calvin’s suite of EUAIA and assessment modules, enables full implementation and governance across all model classifications, avoiding the lengthy and resource-heavy compliance process that would otherwise follow.

Working with firms on the legal side, our panelist Dr. Benedikt Flöter has seen companies ask whether relocating to the US or UK is a reasonable way to avoid the regulatory effort associated with the Act. Drawing on Anda Bologa’s policy perspective on AI regulation developments in the US and China, such a move may offer short-term “relief” for a select profile of companies, but similar regulation is likely to arrive worldwide in the coming years. The true competitive advantage lies in a carefully tailored suite of technical tools. Tarek Besold offered a hands-on viewpoint from his experience on the technical side of the field, noting that “people want trustworthy tech”. From human oversight to internal processes, compliance is a point not to avoid but to embrace; the Act’s compliance process and adaptability are designed not to hinder progress but to enhance it, ensuring safety for the public and fewer incidents for firms to handle.

