Session Summary, June 2025
Mastering the Forces of Generative and Agentic AI to Redefine Business Innovation
The discussion centred on the complexities and practicalities of scaling AI within large organisations, particularly generative and agentic AI.
Key panellists include:
- Dr. Paul Dongha, Head of Responsible AI & AI Strategy, NatWest
- Moderator: Henrike Mueller, Manager, AI Strategy Team, Financial Conduct Authority
Dr. Paul Dongha highlighted the intricacies of risk management and governance, emphasising ethical considerations and bias mitigation in AI modelling. The panel explored AI's role in enhancing productivity through tools such as AI co-pilots, noting the efficiency gains they offer knowledge workers. The use of AI in fraud detection and customer service through large language models (LLMs) was also discussed, along with the challenges of data quality and the extensive manual work required for effective AI implementation.
Henrike Mueller and Dr. Dongha also delved into the future of agentic AI, considering its potential to automate complex tasks and improve operational activities. They discussed the need for cautious experimentation and for human-in-the-loop processes to maintain control and accountability. The conversation highlighted applications of agentic AI in areas such as fraud investigations and compliance, stressing the importance of well-defined business processes and robust data management. It also underscored the evolving nature of AI technologies and the need for organisations to adapt and refine their strategies continuously.
The panel addressed the broader challenges of AI adoption within the financial sector and the economy at large. Dr. Dongha noted the slow uptake of AI, attributing it to unrealistic expectations and the complexities involved in integrating AI into existing systems. He emphasised the need for skilled personnel and comprehensive governance frameworks to navigate AI's risks and unlock its potential. Henrike Mueller introduced the FCA's initiative to support firms in developing and deploying AI models responsibly, highlighting the importance of transparency and accountability. The session concluded with reflections on the ethical dimensions of AI, the role of organisational culture in fostering responsible AI practices, and the upcoming legislative requirements such as the EU AI Act.
Takeaways
Ethical considerations are paramount in AI implementation
Dr. Paul Dongha emphasised the importance of ethical management processes, including bias impact assessments and ethics panels. Organisations must invest time in preparing and balancing data, ensuring fairness and transparency. Comprehensive governance frameworks and continuous learning are essential to mitigate risks and foster responsible AI practices.
Agentic AI holds significant potential but requires cautious experimentation
The discussion highlighted various applications of agentic AI, from fraud detection to compliance activities. However, the need for human-in-the-loop processes and robust data management was stressed to maintain control and accountability. Organisations should start with simple tasks and gradually build capabilities, ensuring all processes are resilient and repeatable.
The slow uptake of AI is due to unrealistic expectations and practical challenges
Dr. Dongha noted that many organisations face difficulties integrating AI into their systems, often due to fragmented data and lack of skilled personnel. He underscored the importance of setting realistic goals and investing in infrastructure and talent. The FCA's initiative aims to support firms in developing and deploying AI models responsibly, ensuring safe and effective use.