Resonance Blog

MCP Explained: The USB-C Moment for Generative AI

Written by Tom Fry | Mar 24, 2025 3:01:14 PM

Something I love about AI right now is how quickly it moves. Just when you think you’re getting a handle on the space, something new comes along. Already in 2025, we kicked off the year with DeepSeek’s R1 model, a model that redefines the economics of frontier LLMs, and more recently, OpenAI introduced Operator, an advanced model capable of performing tasks online.

In the past week, a new concept in generative AI has been gathering momentum – it’s called MCP, or Model Context Protocol, and it’s a big deal when it comes to integrating AI into systems.

First, a quick primer on APIs

APIs (Application Programming Interfaces) are the unsung heroes of modern technology. They enable different software systems to communicate seamlessly by standardising the exchange of data. Whether it's accessing cloud-based storage, pulling data from CRM systems, or automating workflows, APIs underpin nearly every digital interaction we rely upon.
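To make "standardising the exchange of data" concrete, here is a minimal sketch of what a typical API exchange looks like. The endpoint, parameters, and fields are hypothetical; the point is that every part of the request and response has a fixed, documented shape.

```python
import json

# Hypothetical CRM endpoint and parameters -- illustrative only.
request = {
    "method": "GET",
    "url": "https://api.example-crm.com/v1/contacts",
    "params": {"updated_since": "2025-01-01", "limit": 50},
}

# The server replies with structured JSON the caller can parse directly.
response_body = json.dumps({
    "contacts": [{"id": "c_123", "name": "Ada Lovelace"}],
    "next_page": None,
})

data = json.loads(response_body)
# Each field has a precise, agreed meaning, so software on either end
# can rely on it without guesswork.
print(len(data["contacts"]))
```

Both sides of the exchange are rigidly structured, which is exactly what makes APIs dependable – and, as we'll see, exactly what LLMs find hard to produce.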

But there’s a problem: LLMs struggle with APIs.

In a way, it’s a strange state of affairs given that APIs are the basis of machine-to-machine communication and LLMs are the latest-and-greatest machine technology, but there’s a good reason: it comes down to the difference between structured and unstructured data.

LLMs are great at working with unstructured data – both as input and output. That’s what astonished us when we first came across ChatGPT: the model understood us and responded to whatever we asked with coherent blocks of text. But the models struggle when they need to generate structured, precise instructions – the type that APIs require.

LLMs often misinterpret or inaccurately format requests, making reliable integration into systems challenging.
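A small sketch illustrates the failure mode. Suppose an API demands an exact JSON shape (the field names here are made up): model output that is even slightly off – wrapped in chatty prose, or with a number written as a word – fails validation and breaks the integration.

```python
import json

def validate_order(payload: str) -> bool:
    """Check that a model-generated request matches a hypothetical
    API contract: {"item_id": int, "quantity": int}."""
    try:
        data = json.loads(payload)
    except json.JSONDecodeError:
        return False  # model emitted prose or malformed JSON
    required = {"item_id": int, "quantity": int}
    return all(isinstance(data.get(k), t) for k, t in required.items())

# A correct, machine-ready request passes:
print(validate_order('{"item_id": 42, "quantity": 2}'))               # True

# Typical LLM slips fail: chatty framing, or a value in the wrong type.
print(validate_order('Sure! Here is the JSON: {"item_id": 42}'))      # False
print(validate_order('{"item_id": "forty-two", "quantity": 2}'))      # False
```

The validator is trivial, but it captures the core problem: APIs accept exactly one shape, and free-form text generation gives no guarantee of producing it.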

This shortcoming is where MCP steps in.

Meet MCP: the USB-C of AI integration

MCP is designed as a structured communication protocol that standardises interactions between LLMs and external APIs or applications. By explicitly defining request-and-response formats, MCP ensures LLMs communicate reliably with APIs, dramatically simplifying the integration process.
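To give a feel for what "explicitly defining request-and-response formats" means in practice, here is a simplified sketch of MCP's message framing, which is built on JSON-RPC 2.0. The envelope fields follow the open specification; the tool name and its arguments are hypothetical.

```python
# Simplified sketch of an MCP tool invocation and its response.
# The "get_weather" tool is invented for illustration.
call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "London"},
    },
}

result = {
    "jsonrpc": "2.0",
    "id": 1,  # matches the request, so the client can pair them up
    "result": {"content": [{"type": "text", "text": "14°C, light rain"}]},
}

# Because both sides agree on this envelope, responses can be matched
# to requests and parsed mechanically -- no free-form text to interpret.
assert result["id"] == call["id"]
print(result["result"]["content"][0]["text"])
```

The model no longer has to improvise an API call from scratch; it fills in a well-defined structure, and the protocol handles the rest.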

But MCP isn’t just about the protocol itself; what it enables is more important.

It finally allows LLMs to connect to applications, tools, and services without developers having to start from scratch on each integration. Developers have described MCP as a kind of “universal translator” that lets models talk to the web and to applications. Others describe it as the “USB-C for AI.”
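The "write once, connect anywhere" idea can be sketched in a few lines: a server advertises its tools in one standard form, so any MCP-aware client – whichever model it wraps – can discover and call them. This toy registry stands in for what an MCP server SDK would provide; the decorator and the tool itself are invented for illustration.

```python
# Toy tool registry illustrating the MCP pattern: advertise once,
# let any client discover and invoke. Names here are hypothetical.
TOOLS = {}

def tool(name, description):
    """Register a function as a callable, discoverable tool."""
    def register(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return register

@tool("search_invoices", "Find invoices matching a query string")
def search_invoices(query: str) -> list[str]:
    return [f"invoice matching {query!r}"]  # stand-in for real logic

# Any client can first discover, then call, without bespoke glue code:
print(sorted(TOOLS))                         # discovery (cf. tools/list)
print(TOOLS["search_invoices"]["fn"]("Q1"))  # invocation (cf. tools/call)
```

The key design point is that the integration lives on the server side, once, rather than being rebuilt inside every model-specific wrapper.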

What’s great is that the standard is open source. Similar to the ecosystem growing around open-source models such as Meta’s Llama and DeepSeek, MCP benefits from continuous improvement by a global community of developers and researchers. This open approach boosts adoption, as no single company holds exclusive control.

The future of AI is structured and connected

What’s fascinating is how quickly AI is gaining momentum – just look at the past few months. DeepSeek slashed the cost of running frontier models. Distilled models (smaller models trained to mimic the behaviour of a larger one) are much cheaper and faster. And with MCP we have a universal connector between models and services, which means we can make much greater use of LLMs.

In other words, the stage is set.

As we move further into 2025 and beyond, MCP promises to reshape what's possible with AI, giving businesses a tool to reliably integrate sophisticated AI capabilities directly into the fabric of their software stack.

The future of AI isn’t just conversational - it's fundamentally connected, structured, data-led, and ready for business.