AIconics Awards Winner Richard Davis Unveils Ofcom's Cutting-Edge AI Initiatives

In an exclusive interview, Richard Davis, the AIconics Award-winning Solution Implementer of the Year from Ofcom, details the regulator's AI initiatives. He highlights the focus on traditional AI and machine learning, with examples ranging from automating the handling of broadcast media complaints to tackling illegal online sales of radio equipment.

Summary:

  • Ofcom's AI Initiatives: Explore how Ofcom, the UK's communications regulator, is leveraging AI to streamline processes, automate tasks, and enhance efficiency across various areas, from handling complaints in broadcast media to addressing illegal online sales of radio equipment.
  • Future AI Plans at Ofcom: Gain insights into Ofcom's extensive list of upcoming AI initiatives, spanning specific project-based applications, broader capacity-building efforts, and the exploration of advanced capabilities like code generation.
  • Evolution of AI at Ofcom: Learn about Ofcom's vision for the next 10 years, encompassing internal capacity building, AI literacy efforts, and regulatory considerations. Delve into the regulator's role in enhancing public understanding of AI and its collaborative approach with central government and other regulators in shaping AI regulation.

Thanks for joining me, Richard, and congratulations again on winning the Solution Implementer of the Year award at The AI Summit London 2023! Could you speak about the initiatives that led to this recognition?

There are a number of different areas, covering the entirety of Ofcom's remit. One of the big things is that when we talk about AI, I think people often jump straight to the generative AI side, whereas a lot of what we've got actually uses more traditional AI and machine learning, and that's been a very big focus of ours.

One example is complaints about broadcast media, where we need to understand the content of the broadcasts people complain about. That used to be a very manual process: we'd send recordings off to be transcribed, and people would watch them and write down the information. We can now use machine learning to automatically transcribe and translate recordings and present them back in a way we can look through, helping the complaints team very quickly identify whether complaints should be upheld or not.
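As an illustration of that kind of transcribe-and-translate step, here is a minimal sketch using the open-source Whisper model. The file name is invented, and this shows the general technique only, not Ofcom's actual pipeline:

```python
# Minimal sketch of an automatic transcribe-and-translate step using
# the open-source Whisper model (pip install openai-whisper).
# Illustrative only; not Ofcom's actual tooling.
import whisper

# Load a small pretrained speech-recognition model.
model = whisper.load_model("base")

# task="translate" asks Whisper to output English regardless of the
# source language; task="transcribe" would keep the original language.
result = model.transcribe("complaint_recording.mp3", task="translate")

print(result["text"])  # the full English transcript
for segment in result["segments"]:
    # Timestamped segments let a reviewer jump to the relevant moment.
    print(f"[{segment['start']:.1f}s - {segment['end']:.1f}s] {segment['text']}")
```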

So that's one example. Another is illegal radio equipment sold online: we use AI to find and detect where radio equipment is being sold illegally, and use that to issue takedown notices. It's not a fully automated process, but it supports the teams already doing this work and helps them identify opportunities.

Beyond that, we've just started some work using large language models to understand the information coming in from responses to consultations. We have a legal duty to read every consultation response that comes in, but one thing we're trying to do is automatically add metadata and tags around the consultations we receive, so the team can sort and search across them.
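For flavour, a hedged sketch of what LLM-assisted tagging like that could look like, using the OpenAI Python client. The model name, tag schema, and prompt are illustrative assumptions, not Ofcom's implementation:

```python
# Hedged sketch of LLM-assisted metadata tagging for consultation
# responses. Model name, tag schema, and prompt are assumptions made
# for illustration; not Ofcom's actual system.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def tag_response(text: str) -> dict:
    """Ask the model for sortable/searchable metadata as JSON."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Return JSON with keys: topics (list of strings), "
                        "sentiment (support/oppose/mixed), summary (one sentence)."},
            {"role": "user", "content": text},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(completion.choices[0].message.content)

metadata = tag_response("We broadly support the proposal, but ...")
print(metadata["topics"], metadata["sentiment"])
```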

That kind of technology can then extend into other policy areas where we have very large corpora of documents and are trying to understand, from Ofcom's point of view, what's out there. So those are a few examples.

 

So essentially, Ofcom’s use of AI is about lightening the load and automating processes?

Yes, a lot of it is about seeing what we can do to automate what is already done. But there are also areas where we're asking: where can we add capability that would not have been possible before?

For example, in online safety: understanding where the risks are, which platforms are in scope of the regime, and trying to identify common themes in the risks and where they're being presented. It helps us understand what the landscape looks like.

 

And as we head to the end of 2023, have you got any new AI-related initiatives that you're planning to bring in in the new year?

We do. Looking down, we've got a list of around 30 AI-specific initiatives here, and I think another list of about 19; some of those overlap. But what we're doing is a mixture of very specific project-based applications that will have an impact on our regulatory areas.

Beyond that, we're also asking how we increase our capacity for AI more broadly, across things like office tools that will enable people to write and search more capably in their everyday jobs.

And moving on from that, things like code generation are certainly something we're exploring and looking to pull in. Within all of this, Ofcom is obviously a government regulator. We're not a startup; we won't just pull something off the shelf and say, let's run with it. Everything we're looking to do goes through an ethical, risk-based assessment.

So we've done quite a lot of work on what the best policy is for using some of the more advanced generative AI capabilities: whether they're open-source or closed-source, whether they're contained within the organisation, what information they pass back to parent organisations, and where your data is held within your own estate.

Across that, we're also looking at what for us is public information and what is private information, and trying to get a sense of how best to use different tools for different purposes. That was really important for us when thinking about the risks: if something is going to be published and all of that information is already in the public domain, then obviously we have free rein to use the public tools. But if we're talking about internal data, information that we want to search or summarise, then we'd obviously want to use closed tools that we can keep within Ofcom's closed environments.
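The underlying routing rule is simple enough to sketch in a few lines. Everything here, the labels and tool names included, is hypothetical, but it captures the public-versus-closed logic described above:

```python
# Toy sketch of the data-sensitivity routing rule described above:
# published/public material may use public tools, internal material
# must stay on tools hosted in a closed environment.
# All labels and tool names are hypothetical.
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = "public"      # already published, or due to be published
    INTERNAL = "internal"  # internal-only data

def choose_tool(sensitivity: Sensitivity) -> str:
    if sensitivity is Sensitivity.PUBLIC:
        return "public-hosted LLM"          # free rein: data is already public
    return "closed, internally hosted LLM"  # data must not leave the environment

print(choose_tool(Sensitivity.INTERNAL))
```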

 

And is ChatGPT a feature of any of this in terms of internal processes, or any other generative AI models?

Yeah, I won't name a specific model, but there's a variety of models we're looking at; it's not one particular one. Some of them have already been released within existing capabilities. Bing, for example, has already got a version of ChatGPT in its search capability, so absolutely, where that's being used within a closed environment, that's something we use. But we're trying to look across a range of different tools, as I said, for different purposes.

There are some large language models, or foundation models, where we're asking what we can do to pull those in so that we can train or fine-tune them on our own corpus of information. That's one approach. A different one is to say: actually, we're going to search across our corpus of information using a public model. So we want to be really clear on what the information is, and what the risk of it being exposed outside the organisation would look like.
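As a toy illustration of the "search across our corpus" approach, as opposed to fine-tuning, here is a minimal retrieval sketch using TF-IDF from scikit-learn. The documents and query are invented; a production system might feed the top hits to an LLM rather than returning them directly:

```python
# Minimal sketch of searching across a document corpus (the second
# approach mentioned above), using TF-IDF retrieval from scikit-learn.
# Documents and query are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Consultation response on spectrum licensing for maritime radio.",
    "Complaint summary regarding a broadcast aired in March.",
    "Policy note on online safety risk assessments for platforms.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

query_vector = vectorizer.transform(["online safety risks"])
scores = cosine_similarity(query_vector, doc_vectors)[0]

# Rank documents by similarity to the query, best match first.
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {corpus[idx]}")
```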

 

And how do you envision the role of AI within the scope of Ofcom evolving over the next 10 years?

I think there are a few different areas we'd probably see evolving. The first is, as I said, building up our own internal capability to use and deploy artificial intelligence, which can make us a more streamlined, productive organisation, better able to respond to consumers. That's a huge positive for us. Alongside that, we want to upskill more colleagues from across Ofcom so they understand what AI is, where the harms and risks are, and also what the opportunities for using AI are. That's a bit like traditional data literacy, but it's an AI literacy and AI capability build that we want to pursue.

Beyond that, it's understanding the role of the regulator when it comes to AI, and how that interfaces with AI safety and the work being done in central government around AI regulation: what that means in light of the White Paper earlier in the year and the response to it that's going out. There have been a number of iterations of that, and the role for the regulators is quite clear, but we can't go and do something we're not empowered to do; we have a specific remit already defined by law. So anything AI-related within that has to stay within our legal remit.

 

On your AI literacy point, do you see Ofcom playing a role in enhancing public understanding of AI as well?

Legislation was passed, and so Ofcom has a role within that to help people understand the risks of online harms and things like that. Within that, you can start to ask: are there risks people might face from some of the AI tools being presented online, especially when those are merged with search tools, or potentially adult services, or the tools and services that come within the scope of online safety?

 

Following on from the AI Safety Summit that happened recently, there's obviously renewed interest in responsible AI. What can we expect in terms of regulatory changes, or rather the approach that regulators are taking?

I can't really comment too much on that, because it's all work in progress. As I say, we're considering what AI means for regulatory approaches within Ofcom, making sure that from our point of view we really look at where the legislation is for AI. That's work we've already done: understanding where the harms of AI might intersect with that legislation, and making sure our teams are built up with the capability to understand it. Those things are in action at the moment, but yeah, there's more work to be done.

We're working very closely with central government to understand what that can look like, and across regulators as well. We've worked through the Digital Regulation Cooperation Forum, which encompasses Ofcom, the ICO, the FCA, and the CMA, all of us working closely together on what AI regulation could look like.

Within that as well, where there is existing legislation, some teams are already starting to understand where AI can have an impact. So where AI is being used in, let's say, online safety, understanding where the harms are within recommender systems, or how effective age verification systems are, is something we're looking at that will then influence the codes of practice coming out through consultation. A big part of the team's role is understanding what AI is currently being used in the sectors we regulate, and how it works.

 

In Austin this year for Applied Intelligence Live!, the AIconics winner there was a company called Toxmod, which monitors chat rooms and cuts out inappropriate comments and the like. Does Ofcom do anything similar in regulating chat online?

No, so I think what you're talking about is called safety tech. And I don't think we're in a position where we want to be recommending a particular safety tech company over another. It's our job to make sure that the platforms have appropriate tools and guidance in place to keep their users safe. We're working very much on that side of the coin, but yeah, that's fascinating. I'll look them up because it's really interesting work.

I think the main thing for me is that I've been doing this for 20-odd years, and it's really exciting to suddenly see other people find an interest in it. The areas I've worked on span implementations for climate change, disease detection, financial services fraud, and cyber threat detection, and I did all of that up to 2018, maybe 2020, before ChatGPT was a thing and the news broke. Within all of that cycle, there's now a lot of work with a lot of the heavy hitters on how you do this safely, and on what it means for the ethics of an individual, or groups of individuals, to keep people safe within an AI system and the decisions it makes. It's really interesting that these are the conversations now coming to the fore.

 

There seems to have been a push in recent years towards regulatory and cybersecurity discussions, so it's become less about what AI can do and more about what it should not do. That's been quite a big shift; could you speak further to that?

I feel like there is a bit of that; don't get me wrong, I often look at some of the worst things it can do. But at the same time, I'm still fascinated by the subject and by the opportunities AI can bring, not just to make companies more efficient, but beyond that: what can we do to tackle some of the biggest issues and challenges society faces at the moment, whether that's climate change or hunger?

There are a number of areas where these are problems that AI and machine learning can help solve, and the more we can think outside the box, the more opportunities there are to make this a better world. I think AI is there, and I don't want us to focus only on the doom and gloom and the bad things; we need to be talking about the huge opportunities it presents.

 

Yes of course. I think it will be interesting to see how the space develops in the new year.

It's definitely going to be a fascinating future for us.

Will you be planning to attend the AI Summit London in June again?

Absolutely. It'll be really good to come back, see people, and build on the great discussions we've been having. My hand was cramping by the end of it from all the notes I took.

Brilliant. Well thank you very much for your time and congratulations again on winning the AIconics Award, Solution Implementer of the Year.
