Resonance Blog

How DeepSeek won the PR game and what PR pros can learn from its success

Written by Tom Fry | Feb 4, 2025 11:47:17 AM

It’s not too often that PR alone causes stock markets to crash and world leaders to question their technological superiority, but January 2025 was such a moment. How could the status quo be upended in a matter of days by a company very few people had heard of before? Look into the circumstances and it was one sentence, hidden within a technical document, that caused this AI house of cards to tumble.

Up until last week, ChatGPT held the record for the fastest-growing consumer application in history, reaching 100 million monthly active users within just two months of its launch. It seems DeepSeek may have just beaten that record, coming out of seemingly nowhere to top the app charts in a matter of days.

It’s not clear how many users it has – the Chinese company doesn’t share usage statistics – but according to SimilarWeb, daily visits to DeepSeek’s website reached 12.4 million on January 26th, and the app has racked up millions of downloads on the Apple App Store and Google Play Store. Given it’s free to use, it’s reasonable to assume that at least 100 million people have tried the platform.

Just like ChatGPT before it, DeepSeek managed this astronomical growth with zero marketing budget. All of its success came from word of mouth and significant traction in traditional and social media. That’s why it should be of interest to PR professionals – is there any part of this magic formula we can copy for our own clients? Or was it just luck?

DeepSeek’s Slow Burn

To give tech commentators due credit, DeepSeek had been on the radar for a few months. In November, VentureBeat wrote about it (https://venturebeat.com/ai/deepseeks-first-reasoning-model-r1-lite-preview-turns-heads-beating-openai-o1-performance/), noting its excellent performance and open-source nature. It was of interest but didn’t particularly stand out amongst the host of other LLMs at the time.

It was the release of DeepSeek-V3 on December 26th, 2024 that laid the groundwork for success. Within the project’s GitHub page (https://github.com/deepseek-ai/DeepSeek-V3) was an innocuous-looking sentence:

“Despite its excellent performance, DeepSeek-V3 requires only 2.788M H800 GPU hours for its full training,”

This single sentence broke the internet – but it took a month before the world noticed. To put the number in context: at the rental price of around $2 per H800 GPU hour assumed in DeepSeek’s own report, 2.788 million GPU hours works out to roughly $5.6 million – pocket change compared with the hundreds of millions that frontier models are widely reported to cost.

TechCrunch covered the V3 release (https://techcrunch.com/2024/12/26/deepseeks-new-ai-model-appears-to-be-one-of-the-best-open-challengers-yet/) and even quoted Andrej Karpathy, a prominent computer scientist, who remarked:

“DeepSeek (Chinese AI co) making it look easy today with an open weights release of a frontier-grade LLM trained on a joke of a budget (2048 GPUs for 2 months, $6M).” But the news remained limited to the tech press.

DeepSeek’s Hockey Stick Growth

On 20th January, DeepSeek released R1, and this is when the AI community took notice. The reasoning model was as good as, and in many cases better than, the previous state of the art, OpenAI’s o1. Not only that, it was open source and free for everyone to try.

(Benchmark chart comparing R1 with OpenAI’s o1 – source: https://github.com/deepseek-ai/DeepSeek-R1)

Within a matter of days, national journalists had caught wind of the story – and rightly so, they homed in on the most newsworthy angle: this model had cost peanuts to train, around 1/50th of the cost of OpenAI’s equivalent, and it had done so despite US chip export restrictions designed to prevent exactly that.

The story blew up.

It wasn’t just the innovation, it was the fact this innovation came out of China and challenged the notion that Silicon Valley was the leader of the AI revolution. It came just days after the US administration’s Stargate announcement, a $500 billion investment in AI infrastructure that now seemed ridiculous in the face of an equivalent model that reportedly cost under $6m to train. It even sent shockwaves through the energy industry, as the vast energy requirements of future AI investments were called into question.

It was remarkable to watch as the story kept evolving. Marc Andreessen notably warned it was “AI's Sputnik moment.”

And the story keeps rolling on even today.

What can PR pros learn from the DeepSeek story?

Clearly this type of story, combining innovation with geopolitics, comes along very infrequently, but it’s interesting to take a step back and think about what it teaches us as PR professionals.

  1. The power of influencers. The buzz within the AI community is what first alerted the media to this innovation. DeepSeek didn’t put out a press release (to my knowledge) and although press releases are a foundation of our work, it goes to show that they aren’t the only way to announce news.
  2. Paint the big picture. Build a story with your news and ensure there’s a narrative threading through your press release. Don’t just share the facts but talk about how they connect to larger themes – whether it’s geopolitical implications, societal impact, or industry disruption. This is what transforms a so-so announcement into something that has legs.
  3. Make it accessible. What gave DeepSeek the edge was that it was an app that anyone could try – for free. It meant that everyone could be part of the story, most importantly turning users into evangelists. It’s important to make stories meaningful for the reader.
  4. Timing is important. DeepSeek hit the market at the perfect moment, just after the U.S. announced its $500 billion AI infrastructure plan. It made the cost-effectiveness of DeepSeek’s model a direct, headline-grabbing contrast. I can only imagine this was by luck rather than design, but it shows the power of good timing.
  5. Transparency builds trust. DeepSeek’s open-source approach was game changing. By making their models and training processes public, they invited scrutiny and participation from the global tech community. This transparency fostered trust, credibility, and a sense of collective ownership, fuelling organic discussions and media coverage. For PR pros, it’s a reminder that openness and authenticity can be more powerful than tightly controlled messaging.