VIP Interview, April 2026

Pioneering AI in National Security

Insights on Innovation, Ethics, and Future Challenges

The AI Summit London team recently caught up with David Henstock, Head of Data Science at BAE Systems Digital Intelligence and a leading expert in AI-driven solutions for defence and security, to discuss the transformative role of artificial intelligence in this critical sector.

During the interview, David shared his extensive experience and insights on the challenges, opportunities, and ethical considerations surrounding AI adoption. From operationalising AI for crisis decision-making to exploring generative AI applications, scalable infrastructure strategies, and advancing cybersecurity, the conversation provided a fascinating glimpse into the cutting-edge innovations shaping the future of national security.

Read the full interview below:

1. Operationalising AI for Decision-Making in Crisis Scenarios

In your experience, what are the key challenges in operationalising AI for decision-making during crisis scenarios, and how do you ensure the reliability and ethical use of AI in environments such as defence and security? 

Operationalising AI for crisis decision‑making in defence and security environments is fundamentally challenged by data availability, system integration, and crucially, user trust. In high‑tempo, high‑risk scenarios, there are opportunities for AI to contribute meaningfully, but only if it has access to timely, relevant information and is embedded seamlessly into operational workflows. Even then, its value depends on whether operators believe the system will behave predictably under pressure. That trust has to be earned through rigorous testing in realistic conditions and through demonstrating that human–machine teaming genuinely enhances, rather than complicates, the decision cycle.

Ensuring reliability and ethical use requires a strong assurance framework that includes robust validation, clear guardrails, and maintained human control. At the same time, we must avoid creating so much friction that innovation is stifled. The balance lies in enabling rapid development and experimentation while maintaining the oversight and safeguards necessary for responsible deployment in defence and security contexts.

2. Generative AI in Product Lifecycle Management

How have you approached experimenting with generative AI, and what potential do you see for generative AI in optimising complex systems within defence and security sectors? 

Generative AI has enormous potential to optimise complex systems across the defence and security landscape, particularly when you look at programmes end‑to‑end. We’re already seeing value in areas like product lifecycle management, where generative models can capture knowledge, support rapid data retrieval, and even automate elements of software and product development. When combined with simulation and wargaming, generative AI can help us explore how systems will behave under different conditions long before they reach the field. In manufacturing there’s also a growing opportunity to use AI‑driven design and optimisation (such as through the use of digital twins) to streamline processes and improve resilience.

But the real transformative potential lies in connecting these capabilities across the entire ecosystem. Generative AI can help integrate data flows, support decision‑making from collection through analysis, and enable more coherent control of autonomous systems across the battlespace, from cyber assets to fixed platforms to uncrewed systems. It becomes a tool for orchestrating complexity at scale. The opportunity is huge, but so are the challenges. Assurance remains essential: we can build powerful point solutions, but without robust safeguards we risk introducing new vulnerabilities. The task ahead is to harness generative AI’s potential while maintaining the rigour and reliability that defence and security operations demand.

3. AI Infrastructure Strategies for Scalable Enterprise Solutions

What are the critical considerations when developing AI infrastructure strategies for scalable enterprise solutions in defence, and how do you balance scalability with the need for robust security measures?

Developing AI infrastructure for defence at enterprise scale means making a series of strategic choices that balance performance, resilience, and security from the outset. Organisations need to think carefully about where workloads will run — cloud, on‑premise, or at the edge — and what level of compute is realistically available in each environment. Defence‑grade AI requires not just raw processing power but an infrastructure that can support the full AI lifecycle: data access and management, model training, deployment, monitoring, and continuous improvement. You also have to account for the distributed nature of defence systems. It’s not just a central data centre; it’s drones, sensors, vehicles, ships, and other edge components that all need to run performant models on constrained hardware.

Balancing scalability with robust security is ultimately about making deliberate architectural trade‑offs. On‑premise environments offer tighter control but require forward planning to ensure they can scale as AI demand grows. Cloud environments provide elasticity but introduce additional security considerations that must be addressed through strong guardrails, access controls, and data‑handling policies. At the edge, the challenge is even sharper: how do you deliver capable models that operate reliably on limited and potentially disconnected compute without compromising security or increasing operational risk? Across all of this, assurance remains the anchor. You can scale AI aggressively, but without rigorous safeguards and validation, you risk introducing vulnerabilities into mission‑critical systems. The goal is an infrastructure strategy that is scalable by design and secure by default, not one at the expense of the other.

4. Advancing AI-Driven Cybersecurity and National Security Solutions

Given your extensive experience in advancing AI-driven cybersecurity solutions, how do you address emerging threats such as adversarial AI, and what role does AI play in safeguarding national security? 

AI‑driven cybersecurity in defence increasingly resembles an arms race, with offensive and defensive models evolving in parallel. Addressing emerging threats like adversarial AI starts with rigorous testing: continuously challenging our models against adversarial techniques to understand how they could fail and how to strengthen them. Provenance also becomes critical: knowing exactly where your components, data, and models come from allows you to react quickly when vulnerabilities emerge. There’s always a tension between assurance and speed of deployment, and in national security you can’t ignore the risk. You need the agility to respond to fast‑moving threats, but never at the expense of introducing weaknesses into mission‑critical systems.

AI’s role in safeguarding national security is only going to grow. It gives organisations the opportunity to scale analysis across vast, complex data streams and make sense of information at a pace humans alone can’t match. It takes on the dull and dirty tasks – the repetitive monitoring, the high‑volume triage, the rapid sifting of intelligence – thereby freeing people to focus on higher‑order judgment. In that sense, AI becomes a genuine force multiplier. But its power also means we must deploy it with care, ensuring every capability is robust, assured, and aligned with the mission of protecting national security.

5. Ethical and Practical AI Adoption in Defence

As a thought leader in ethical AI adoption, what frameworks or principles do you advocate for ensuring the responsible development and deployment of AI technologies in defence and security applications?

As we have already said, in defence and security, responsible AI adoption starts with strong guardrails and a commitment to assurance throughout the entire lifecycle. There are already several well‑established frameworks, from MOD guidance to wider government and cyber security standards, and I advocate building on these rather than reinventing them. They require rigorous testing for model robustness, clear safety considerations for any AI destined for operational platforms, and an understanding of the provenance of every component in the system. Human control remains a core principle: operators must be properly trained, empowered to override AI outputs, and supported by systems designed so that if the AI makes a wrong decision, additional safeguards prevent it from escalating into something harmful.

At the same time, ethical adoption isn’t static; it has to evolve with the threat landscape and the tempo of operations. Defence programmes need frameworks that are flexible enough to adapt to different use cases while still maintaining a high bar for assurance. It’s about striking the right balance between innovation, operational effectiveness, and responsible use.

Conclusion

Our conversation with David offered a wealth of knowledge and thought-provoking insights into the evolving landscape of AI in defence and security. His expertise illuminated the complexities of operationalising AI, ensuring ethical adoption, and addressing emerging threats in national security. As we reflect on this discussion, it’s clear that balancing innovation with responsibility will be key to harnessing AI’s full potential.

Stay tuned for more interviews exploring the technologies shaping the future of business and the growing role AI plays in that transformation.


ABOUT THE AI SUMMIT LONDON

The AI Summit London is the UK and Europe’s leading event for applied artificial intelligence, bringing together forward-thinking technologists, business leaders and policymakers from around the world to explore how AI is being deployed at scale across enterprise.

Taking place at Tobacco Dock on 10–11 June 2026, the Summit marks its 10th anniversary, celebrating a decade of progress in commercial AI. Over two days, the event delivers an immersive experience combining strategic insight, practical use cases and live technology demonstrations, empowering organisations to move confidently from experimentation to real-world impact.

As the flagship AI event of London Tech Week, The AI Summit London provides unparalleled opportunities for AI adopters to connect with peers, partners and innovators, equipping them with the knowledge, tools and relationships needed to accelerate responsible, results-driven AI initiatives.

To register for the 2026 show running 10–11 June, visit www.london.theaisummit.com

Interested members of the press and analysts may register here to attend.

ABOUT THE AI SUMMIT SERIES

If you’re building, buying or backing AI, The AI Summit Series is where ideas become outcomes. We cut through the buzzwords to spotlight real use cases, live demos and candid playbooks that help you deploy faster, govern smarter and prove ROI with confidence. No hype, just AI that transforms business. 

Launched by Informa in 2016, at a time when artificial intelligence events were largely focused on research and academia, The AI Summit Series was the first conference and exhibition dedicated to what AI means in practice for business.

For a decade, the Series has convened senior executives, investors, technology providers and data scientists to share insight, showcase breakthrough solutions and shape the commercial AI ecosystem. Trusted long before the hype, The AI Summit has established itself at the centre of the global AI community.

Today, the Series delivers world-class events across London, New York, Singapore, and Melbourne, continuing to set the standard for enterprise-focused, responsible AI worldwide.

For more information, visit www.london.theaisummit.com.


Thank You to Our 2026 Sponsors & Partners