The AI Summit London Conference and Expo


30 May 2023

Speaker Interview: Jonny Rankin, Head of Data, MyTutor


“The rise of ChatGPT and other large language model based products has been keeping me up at night with excitement and fear in equal measure”

Data is important to any industry that wants to measure the impact of the work it is doing. When your customers are children and you are responsible for educating them, it becomes even more important.

That’s what drives Jonny Rankin’s day-to-day as Head of Data for MyTutor, the UK's leading online platform for one-to-one learning. But, like many in data science, his path to data didn’t run so smoothly.

After studying computer science at uni, Jonny initially joined The Guardian as a Software Engineer - responsible for trying to convince its readers to pay for a free product.

“We were very data-driven - constantly running experiments and using data to identify opportunities. For the majority of the time, we were very well supported by the data team, but there was a period of about 6 months where, due to resourcing issues, it was very hard to get data resource. This was difficult at first, but it forced us to develop the skills on the team and I became the in-house data analyst/engineer, setting up data pipelines, and building data models and dashboards. It was a brilliant example of a team using self-serve analytics to power themselves towards a goal, and it’s an example I still refer back to now when explaining my vision of self-serve analytics to people!”

“Following on from this, I was desperately trying to get into product and wanted to be a product manager. I was asked if I could join a new team that was being set up to migrate our data stack from a Hadoop-based data lake to something more closely resembling the “modern data stack” - i.e. a cloud data warehouse, data transformation in SQL using dbt and so on. I said I would only join if I could be the product manager!

About six months into this job, the Head of Data quit, and I was asked to cover most of his responsibilities, which was a very challenging period but one that taught me a lot. It was an exciting, greenfield project and we got to have a go at implementing the modern data stack from scratch. We did a lot of stuff well; we made a lot of mistakes! I then brought what I’d learnt to MyTutor, where I am currently Head of Data.”

 

Your current role encompasses analytics engineering - could you give us some insight into what that entails?

“Analytics Engineering is a fairly new profession in data. Ultimately, analytics engineers are responsible for delivering easy-to-use, accurate, reliable and timely data to users of data within an organisation.

With the amount of data that companies collect these days, it is very easy for data to become unmanageable - I’m sure many people reading this will relate to what I describe as a “data swamp” - 100s, maybe 1000s of tables, no consistent table structure or naming standards, data quality issues all over the shop, the same metrics modelled differently in different places. Sadly, this is a very common sight! In this world, it becomes very time-consuming - and not very fun - for data analysts or data scientists to extract value from data.

This is where analytics engineering comes in. Through a combination of data modelling, modern data tooling and data governance, analytics engineers work towards building beautiful data that is super-simple to work with. A lot of it comes from tried-and-tested engineering principles - there’s a big focus on centralising business logic so that it is only ever defined once and in one place, reducing the chances of multiple sources of truth springing up.”

 

What's in your tech stack at the moment that you'd really recommend other people utilise?

“LightDash! I’m a huge fan of LightDash - it is an open-source data visualisation/data exploration/business intelligence tool that works seamlessly with the semantic layer.

When using LightDash, data teams spend less time building dashboards and more time building great data and defining metrics, letting data consumers worry about how they want to visualise the data. Having only ever worked in Tableau shops until LightDash came along, I can say it has solved all my frustrations with Tableau in one fell swoop.”

 

You’re obviously adept at moving fast and adapting - what breakthrough from the past year are you most excited about?


“Given that I’m speaking at an AI conference, I’m sure that no one will be surprised that it’s been the rise of ChatGPT and other large language model-based products that has been keeping me up at night with excitement and fear in equal measure.

One exciting development in the data space off the back of this has been the development of AI tools that act as an interface between data and the consumer.

But for AI to work well with your data, it needs to be able to understand your data! This has led to a recent focus within analytics engineering on developing what is known as a “semantic layer” on top of your tables - an unambiguous, codified grammar for talking about your key business metrics that can be understood by humans, tooling and AI.

It sounds complicated, but it’s not really. It’s just a .yml file that contains metric definitions. The key innovation is that this logic is stored centrally, rather than buried in multiple dashboards or models, and can be read by any consumer of your data. Semantic layers aren’t new, but the rise of LLMs makes them essential for any company that wants to make the most of the opportunities that LLM-based tooling will provide.”
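
To make that a little more concrete, below is a minimal sketch of what such a metric definition might look like. The structure, field names and the completed_lessons metric are hypothetical illustrations, not MyTutor’s actual semantic layer or any particular tool’s schema.

```yaml
# Hypothetical semantic-layer metric definition (illustrative only;
# field names and structure do not follow any specific tool's schema).
metrics:
  - name: completed_lessons        # assumed example metric
    label: "Completed lessons"
    description: >
      Count of lessons marked as completed. Defined once here so that
      dashboards, ad-hoc queries and AI tools all share the same logic.
    model: fct_lessons             # assumed underlying table or model
    calculation: count
    filters:
      - field: status
        operator: equals
        value: completed
    dimensions:                    # how consumers may slice the metric
      - subject
      - lesson_date
```

Because a definition like this lives centrally in one version-controlled file, a dashboard, an analyst’s ad-hoc query and an LLM-based assistant can all resolve the same metric to exactly the same logic.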

 

You studied in the field and have clearly developed quite a passion for the possibilities. To touch on the topic of your talk a little, how should we be preparing our children to work and study in an AI-powered future?

“That’s a big question and one that I’m sure is causing children and parents a lot of anxiety at the moment. I would focus on two things.

Firstly, getting really good at prompting AI. I’ve been amazed by what I’ve been able to do with AI behind me that I would never have previously been able to do - program in languages I’ve never used before using ChatGPT as an oracle, direct photo-realistic photoshoots using Midjourney. Being good at Googling has traditionally been a very valuable skill that has been propping up workers in all manner of workplaces, from GP surgeries to software development agencies. That skill is going to be replaced by prompting AI. Being better than your peers at asking AI the right questions will give you an edge.

Secondly, developing skills that AI won’t be able to replace. It’s really difficult to say what effect AI will have on the job roles we know today - the less cynical experts say that AI will create as many jobs as it destroys, and I am hopeful that this is true.

Whilst it’s difficult to predict which technical skills will still be valuable in three years' time, so-called “soft skills” like empathy, leadership and communication are still going to be valuable no matter what your job is. The days of the knowledge worker with zero communication skills who gets spoon-fed tasks to complete are numbered. The workers of tomorrow will need to differentiate themselves by being people who make things happen, who can work well with others, who can communicate effectively and resolve conflicts. Which isn’t really that different from today!”

 

If some of what Jonny has shared is keeping you up at night, make sure to head to the AI in Education fringe forum at The AI Summit London, where he’ll be joined by Geoff Stead, Chief Product Officer at MyTutor, to discuss these challenges (and opportunities) in detail.
