Interview
What inspired you to focus on training teams to use AI effectively and ethically?
I transitioned from building AI systems to teaching about them because I recognised a fundamental truth: impact isn't determined by where the state-of-the-art sits, but by how people use technology.
History proves this. The steam engine took over a century to become a locomotive. Electricity took roughly 50 years from the first power stations before it measurably affected manufacturing productivity. New technologies take time to change how we live because humans must reorganise processes and discover novel applications.
The same applies to AI. Even if development stopped today, we'd have enough capability to drive decades of productivity gains and societal transformation. The challenge now is integration—how people harness it, how job roles evolve to accommodate it. That's where the real work happens, and frankly, it's more exciting.
How do you design course materials to ensure they address both technical proficiency and ethical considerations?
General Purpose builds bespoke courses for each client because meeting people where they are is essential for embedding AI into workflows. Rather than presenting abstract potential, we examine the actual tasks people perform, identify where AI can help, and construct classes around those specific needs. This ensures participants leave ready to use AI productively in their day-to-day work straight away.
This approach drives positive ethical impact. We're a socially motivated business that understands technology's impact is shaped as much by usage as by the technology itself. We're committed to responsible, ethical AI adoption.
There's genuine urgency here. When people discuss AI's evolution over the next 5-10 years, I see a critical window. If we reach fully autonomous agents capable of substantial work whilst those systems remain biased and prone to hallucinations, we'll face serious societal problems.
Right now, human involvement is still required at every step. We have supervision, the ability to provide feedback, to say "this isn't acceptable going forwards." Keeping humans in the loop to train AI towards safety before widespread scaling is vital.
What are the key skills and knowledge areas you believe are essential for teams to succeed in leveraging AI responsibly?
Success with AI requires surprisingly human skills: the ability to articulate what you want clearly and to break down tasks for delegation. It's somewhat clichéd, but thinking of an AI assistant as a bright, enthusiastic intern is genuinely useful. If you can identify which tasks you'd break into intern-sized chunks and hand off, you'll work effectively with AI.
Every future job role will involve managing AI agents. We need to teach people to excel at this whilst maintaining the critical thinking and judgement necessary to review AI-generated work.
What gives me hope is that the most effective AI users in any field are the subject matter experts. Existing experience makes you a better AI user because you possess the knowledge to evaluate outputs and push back when necessary.
We often discuss a future of "responsible humans"—where AI might handle 80-90% of the work, but humans remain essential to sign off on outputs, confirming they're suitable for clients or customers. Humans aren't being written out of the script. New roles will emerge in this space.
At General Purpose, we specialise in instilling an AI-first mindset, which I've come to see as remarkably similar to an entrepreneurial mindset. Much of our advanced work involves one-to-one coaching to develop this entrepreneurial thinking—helping people look at problems and consider how AI might apply. Nearly every role will require this level of entrepreneurial thinking going forwards.
If you can do this, you'll remain valuable regardless of AI's capabilities. When AI handles another set of tasks, you can look at your role and identify new, high-value work you'd like to pursue.
What's your perspective on organisational AI adoption?
Many companies make the mistake of creating lists of high-value use cases for AI application. The technology isn't ready for that. Organisations usually aren't ready for that.
I regularly speak with organisations wanting to build agents to automate X and Y, yet when asked if their SharePoint is connected to their assistant, they admit it's not IT-secure. We have foundational work to complete first.
Most organisations with grand ambitions haven't got their data in shape to power even the simplest AI applications. This is where culture-change work becomes crucial: working with IT teams and others concerned about risks, helping them feel comfortable, and often drawing boundaries. These teams have legitimate concerns. Sometimes we'll say, "Let's not do this right now," then identify the 80% the organisation is comfortable providing access to.
I personally advocate that the best way to adopt AI is to give people access and let them experiment—a bottom-up process. The people doing the jobs are best qualified to know if AI will be useful. They're most likely to spot opportunities, not some product team brainstorming in isolation.
We developed this view after being brought in too many times to train people on internal AI tools nobody was using. When we ran classes, people told us nobody had ever asked what their job looked like; the tool that had been built didn't do what they needed or wanted.
We now advocate that firms roll out a general-purpose assistant—ChatGPT, Copilot, Claude, or Gemini—and let people who do the roles experiment, discover what works, and feed that back into the organisation. Then you know what custom tooling to build. This helps you avoid expensive missteps.
How do you teach teams to identify and mitigate bias in AI systems?
Risk and bias are subtly different, and each requires a slightly different approach—both are incredibly important. To get teams thinking about risk when working with AI, I use a framework that evaluates two factors: the severity of what can go wrong and the blast radius if it does go wrong.
You might have something incredibly severe that affects only one person—a very small blast radius. Conversely, you could have something less severe that affects every single one of your customers—a very large blast radius. These two factors are always in interplay when deciding where to apply AI.
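To make the interplay concrete, here is a toy sketch; the 1-to-5 scales and both scenarios are invented for illustration rather than a formal scoring tool:

```python
# Toy illustration of the two-factor risk framework. The 1-5 scales
# are invented; the point is that neither factor alone tells the story.

def risk_profile(use_case: str, severity: int, blast_radius: int) -> str:
    """Describe a use case in terms of both factors together."""
    exposure = severity * blast_radius  # crude combination for comparison
    return f"{use_case}: severity={severity}/5, blast_radius={blast_radius}/5, exposure={exposure}"

# Incredibly severe, but affecting one person at a time...
print(risk_profile("drafting a single patient letter", severity=5, blast_radius=1))
# ...versus less severe, but touching every customer.
print(risk_profile("auto-generated pricing on the public site", severity=2, blast_radius=5))
```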
Ideally, you want to reduce both the severity and the blast radius to a level your organisation is comfortable with. It’s often easier to reduce the blast radius than the severity, because severity tends to be fundamental to the technology.
A simple example I give for this: if you’re connecting your AI assistant to your internal systems, giving it read-only access immediately reduces the blast radius by about 99% because it can’t overwrite or destroy any data. Or you might decide to grant write access, but only to three specific functions you’ve identified as genuinely useful—rather than granting access to everything. There are many ways to put hard guardrails around AI usage to make it much safer.
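To make that concrete, here is a minimal sketch of the pattern in Python. The tool names are hypothetical; the shape (read-only by default, writes only through a short whitelist) is the guardrail in question:

```python
# Hypothetical sketch: scope an assistant's tool access rather than
# connecting it to everything. All function names are invented.

READ_ONLY_TOOLS = {
    "search_documents": lambda query: f"results for {query!r}",
    "get_customer_record": lambda cid: f"record {cid} (read-only)",
}

# Write access granted to three specific, vetted functions only.
ALLOWED_WRITE_TOOLS = {"create_draft_reply", "log_support_ticket", "schedule_callback"}

def call_tool(name, *args, write_tools=None, **kwargs):
    """Route a tool call, enforcing the read-only / whitelist boundary."""
    write_tools = write_tools or {}
    if name in READ_ONLY_TOOLS:
        return READ_ONLY_TOOLS[name](*args, **kwargs)
    if name in ALLOWED_WRITE_TOOLS and name in write_tools:
        return write_tools[name](*args, **kwargs)
    raise PermissionError(f"tool {name!r} is outside the agreed guardrails")

print(call_tool("search_documents", "Q3 renewals"))   # allowed: read-only
try:
    call_tool("delete_customer_record", "c-42")       # blocked by default
except PermissionError as err:
    print(err)
```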
On bias, the key thing to understand is that you tend to notice only the bias that affects you. For a long time, large language models defaulted to American English because they predict the next most likely word based on training data—and most English on the internet uses American spellings. If you wanted British English, you simply had to prompt for it. But you’ll only do that if you notice it’s a problem.
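If you do notice it, the fix is a one-line standing instruction. A minimal sketch using the OpenAI Python SDK as one example (the model name is a placeholder; any assistant that accepts a system prompt works the same way):

```python
# Minimal sketch: override the American-English default with a
# standing instruction. The model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Always write in British English spelling."},
        {"role": "user", "content": "Summarise our new colour palette for the website."},
    ],
)
print(response.choices[0].message.content)
```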
When we run classes in the US, people say, “Well, of course it writes in American English—that feels completely natural.” This is why it’s so important to get as many people as possible using AI at this juncture. The more people using it, the more who will spot inherent biases and errors, and feed them back before systems become too involved in making autonomous decisions—when the risks become much higher.
It’s one reason we’re keen for firms to roll out AI tools widely, rather than restricting them to a small group working on identified high-value use cases. Those groups may not spot the major error that could cause the business serious trouble in six months’ time, whereas someone outside that cohort might.
What strategies do you use to keep your instructors up to date with the latest advancements in AI?
Within our training, we always cover risk and bias, particularly how to ensure everyone’s voice is heard within the organisation. Much of this is culture change as well as individual training. It’s about building an organisation where issues such as AI bias are surfaced, and where people feel free to speak up and ask, ‘Is this a reasonable level of risk?’ or, ‘Can we reduce this risk by cutting the blast radius?’
On the risk side, we tend to encourage taking calculated risks. Most organisations we speak to are risk-averse, and that’s true of almost every large company. The very reason they’ve survived is that, through layers of process, they’ve learnt not to make mistakes. That mindset makes AI adoption difficult because this is a new technology that requires exploration. There will be risks, and things will go wrong.
What firms often fail to evaluate is the risk of doing nothing—the risk of waiting until someone brings them all the answers, by which point it may be too late. If competitors adopt faster, they gain a significant edge. What’s possible today versus six months ago, especially in AI coding, is extraordinary. If, six months ago, you decided not to get involved with coding agents because it felt uncomfortable, you’d be at a major—almost insurmountable—disadvantage today. You can provide training, of course, but it’s not a substitute for an organisation where everyone has been practising for six months already. This gap will only widen. There is a real downside risk to inaction.
How do you see the demand for AI training evolving in the next 5–10 years?
I see demand evolving towards developing entrepreneurial thinking. At the moment, we spend a lot of time making sure the foundations are in place. I recommend not relying on ‘top 10 prompts for my role’ lists you might find on LinkedIn. It’s worth understanding a little about how large language models work because each new version changes what constitutes a good prompt.
Those long, 17‑step prompts—‘act as this person…’—haven’t been relevant for at least 18 months. If you understand how these models work, you develop an intuitive sense for whether a prompt is likely to lead to hallucination or bias, or whether the model is likely to return a poor result because it simply can’t know the answer. There are fundamental characteristics of these systems that mean some tasks they’ll be good at and others they’ll be bad at—until a new architecture comes along. That intuition is far more valuable than memorising a specific set of prompts.
What emerging trends in AI do you believe will require new approaches to training and education?
The arrival of AI coding agents will change a lot of roles. Among our most forward‑thinking clients, almost every role now involves producing some level of code using AI coding agents, which is driving huge productivity gains. I suspect this is the end state for many roles: you may not be a software engineer, but you’ll use an AI assistant that writes code for you.
Just as it’s useful to understand core AI principles to know what a good prompt looks like, it will be necessary to understand software engineering fundamentals—how code runs, what a server is, what a database is—to use these tools safely. We see worrying examples of people unfamiliar with programming basics using visual coding tools to build internal apps that are riddled with security and privacy issues. I spend a lot of time putting processes in place to manage this and prevent significant risks. Every role will need some level of this understanding to work effectively with the next wave of AI agents.
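To show the sort of thing we catch, here is an illustrative sketch (not code from any client) of the single most common flaw: user input interpolated straight into SQL, next to the parameterised version a fundamentals-aware builder would write:

```python
import sqlite3

def find_customer_unsafe(conn, name: str):
    # The pattern we often see in AI-generated internal apps:
    # user input interpolated straight into SQL (injection risk).
    return conn.execute(f"SELECT * FROM customers WHERE name = '{name}'").fetchall()

def find_customer_safe(conn, name: str):
    # The fundamentals-aware version: a parameterised query, so input
    # is treated as data, never as executable SQL.
    return conn.execute("SELECT * FROM customers WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT)")
conn.execute("INSERT INTO customers VALUES ('Alice')")

# A malicious "name" turns the unsafe query into "match everything".
print(find_customer_unsafe(conn, "' OR '1'='1"))  # leaks every row
print(find_customer_safe(conn, "' OR '1'='1"))    # returns nothing
```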
What advice would you give to teams looking to master the art of prompt design for AI systems?
My core advice is to understand the fundamentals. These systems generate text by predicting the next most likely word based on training data. That alone tells you there are things they’ll be good at and others they’ll always struggle with. If you internalise this and build a solid intuition for where models are strong or weak, you’ll be far more productive than by trying to memorise a list of prompts.
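A toy sketch makes the mechanism visible. The word counts below are invented, but the core move (pick the statistically most likely continuation, with no regard for truth) is the real one:

```python
from collections import Counter

# Toy bigram model: invented counts of which word follows "the" in
# some training data. Real models do this over tokens with neural
# networks, but the core move is the same.
following_the = Counter({"cat": 120, "colour": 40, "color": 95, "answer": 10})

def predict_next(counts: Counter) -> str:
    """Return the single most likely next word."""
    return counts.most_common(1)[0][0]

print(predict_next(following_the))  # "cat": the frequent wins, not the true
# This is why a model can continue text fluently yet confidently assert
# something false: likelihood, not accuracy, drives the choice.
```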
How do you envision the future of AI education and its role in shaping the workforce of tomorrow?
It’s about understanding what an agent is and how to give it adequate instructions to achieve what you want—skills you can build over time. We focus on equipping people to progress towards that capability, keeping them in high‑value roles on an ongoing basis.
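Mechanically, an agent is just a loop in which a model chooses actions until the task is done. Here is a minimal sketch; the decision function is a stub standing in for a model call, and every name is invented:

```python
# Minimal sketch of an agent loop. model_decide is a stub standing in
# for an LLM call; in practice the model picks the next tool given
# your instructions and the history so far.

def model_decide(goal: str, history: list) -> dict:
    if not history:
        return {"tool": "search", "args": {"query": goal}}
    return {"tool": "finish", "args": {"summary": f"done: {goal}"}}

TOOLS = {"search": lambda query: f"3 results for {query!r}"}

def run_agent(goal: str, max_steps: int = 5):
    history = []
    for _ in range(max_steps):  # hard step cap: one more guardrail
        decision = model_decide(goal, history)
        if decision["tool"] == "finish":
            return decision["args"]["summary"]
        result = TOOLS[decision["tool"]](**decision["args"])
        history.append((decision, result))
    return "stopped: step budget exhausted"

print(run_agent("summarise this week's customer feedback"))
```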
Conclusion
AI success now hinges on judgement: set hard guardrails, broaden access to surface bias early, and build a culture that weighs risk openly, because the cost of inaction is rising fast. Invest in fundamentals and software literacy as coding agents scale across roles, so you move from experimentation to advantage and deliver measurable impact for your organisation.
You can connect with Tom this June at The AI Summit London! Join his sessions: ‘AI in Action: From Idea to Agent in Under 25 Minutes’, taking place on Thursday June 11 at 2:55pm on the Headliners stage, and ‘Humans in the Loop: Designing Organisations for the AI Native Workplace’ at 3:35pm on the Next Generation stage.