Session Summary, June 2025

AI Compliance in a Global Landscape: Turning Regulatory Challenges into Business Opportunities

The session began with a discussion on the rapid changes and challenges in AI compliance, particularly in the context of the European AI Act.

Key panellists included:

  • Kai Zenner, Head of Office and Digital Policy Adviser for MEP Axel Voss, European Parliament
  • Matthias Holweg, American Standard Companies Professor in Operations Management, University of Oxford
  • Moderator: Uthman Ali, Global Responsible AI Officer, BP

Kai Zenner highlighted the complexities involved in implementing and enforcing the AI Act, noting pushback from various authorities, including the US government. He emphasised the importance of companies preparing for new governance structures and meeting deadlines to classify high-risk AI systems. Matthias Holweg added that the legal obligations and the establishment of governance structures are critical, as companies and public authorities face pressure to adapt quickly to these new requirements, often resulting in a chaotic and overstretched environment.

Moving to the global landscape, Matthias described the diverse regulatory approaches being adopted worldwide, such as the NIST proposals in the US, AIDA in Canada, and the AI Act in Europe. He noted that Executive Order 14110 in the US marked a significant shift in the regulatory landscape, making it more complex for multinational companies to navigate. Both speakers agreed that companies must invest in AI governance to mitigate risks and create long-term value, with Matthias stressing the importance of liability under directives such as the European Product Liability Directive. They also discussed the potential for different countries to lead in AI regulation, with Kai raising the possibility of a "Beijing effect" or "Washington effect", indicating that the future regulatory landscape remains uncertain.

The conversation then shifted to the future of AI governance and the potential impact of AGI. Matthias pointed out that AI systems have already achieved significant milestones across various forms of intelligence, suggesting that AGI is plausible within years, not decades. However, he emphasised that the greater risk lies in AI systems' agency and unintended consequences. Kai added that the European Parliament has focused on creating adaptable laws to manage these risks dynamically. Both speakers highlighted the need for a robust ecosystem of trust involving the public and private sectors, academia, and NGOs to ensure effective governance. They concluded with hopes for streamlined procedures and better transparency in incident reporting and audits, aiming for a balanced approach that supports innovation without stifling smaller market players.

Takeaways

Complexities in AI Compliance


The European AI Act presents significant challenges for companies and public authorities, requiring them to establish new governance structures and meet various deadlines for classifying high-risk AI systems. This has led to a chaotic and overstretched environment as stakeholders adapt to these new requirements.

Global Regulatory Landscape


There are diverse approaches to AI regulation worldwide, such as the NIST proposals in the US and AIDA in Canada. This creates a complex environment for multinational companies, emphasising the need for robust AI governance to mitigate risks and ensure compliance across different jurisdictions.

Future of AI Governance and AGI


AI systems have achieved significant milestones, making AGI plausible within years. However, the greater risk lies in AI systems' agency and unintended consequences. A robust ecosystem involving the public and private sectors, academia, and NGOs is essential for dynamic and effective AI governance.

Thank You to All Our Sponsors & Partners