Welcome to the Halftime Show.

This week we have a special, sentimental edition. We are revisiting the canon event that changed the bracket, the odds, and the playing field of the greatest game on earth.

It might even be the moment that spawned the idea for League of Delta (?).

Let's lock in.

PLAY OF THE WEEK
The most expensive free model in history

One year ago, a quant from Hangzhou released DeepSeek R1 - a reasoning model that matched OpenAI's o1. For free!

Nvidia lost 17% ($589 billion) in a single session, the largest single-day market cap loss in history. People compared it to a Sputnik moment.

Everyone then moved on, as they do.

What has DeepSeek been up to since then? Did they win or lose the race?

SCOUT REPORT

→ #1 in the US App Store. On January 27, 2025, DeepSeek R1 became the most downloaded free app in America, unseating ChatGPT, WhatsApp, and Instagram. It was the first time a Chinese consumer tech product topped the Western AI market. Top spot in over 150 countries within 72 hours of release. (Britannica)

→ $5.58 million to train. OpenAI spent $100 million+ on GPT-4. 2,048 export-compliant H800 chips. DeepSeek's engineers abandoned standard CUDA and wrote in PTX assembly to manually control cross-chip communication, turning a hardware constraint into an algorithmic breakthrough. (CNBC)

→ 131.5 million MAUs (monthly active users) and growing. Free model weights under MIT license. Preloaded on Huawei and Xiaomi budget devices. No subscription wall. The infrastructure of choice across Africa, the Middle East, and every market where ChatGPT requires a credit card. (Microsoft Global AI Adoption Report)

→ Chinese LLMs went from 3% to 13% of global traffic in two months. RAND tracked LLM site visits across 135 countries. Chinese models crossed 10% penetration in 30 countries and 20% market share in 11. DeepSeek drove the entire move. (RAND Corporation)

→ 10–30x cheaper than OpenAI. DeepSeek API: $0.28 per million input tokens. OpenAI GPT-4o: $3.00. For a startup processing 10M tokens a day: $84/month vs $900/month. (DeepSeek API Docs)

FILM ROOM
There are two scoreboards in AI

First, let’s actually do a tl;dr on what made DeepSeek goat’d.

Don’t stop at the word cheap. In tech, we call that efficient. And efficiency is what greases the wheels of AI.

Unlike traditional models that activate all parameters for every task, DeepSeek uses a fine-grained Mixture-of-Experts architecture (you can credit that to the founder’s quant background). V3 has 671 billion total parameters, but only 37 billion are active for any given token. Traditional models burn compute across the entire network every time; DeepSeek's routing system only lights up what it needs.
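The routing idea above can be sketched in a few lines. This is a toy illustration, not DeepSeek's actual implementation: the dimensions (8 experts, top-2 routing, 16-dim tokens) are made up for readability, but the mechanism is the same — score every expert, then run only the top-k, leaving the rest of the parameters cold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Mixture-of-Experts layer. Illustrative sizes, not DeepSeek's real
# dimensions: 8 experts, each token routed to the top 2.
N_EXPERTS, TOP_K, D = 8, 2, 16
experts = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(N_EXPERTS)]
router = rng.standard_normal((D, N_EXPERTS)) / np.sqrt(D)

def moe_forward(token: np.ndarray) -> np.ndarray:
    """Route one token: score all experts, but run only the top-k."""
    logits = token @ router                     # one score per expert
    top = np.argsort(logits)[-TOP_K:]           # indices of chosen experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                    # softmax over the chosen few
    # Only TOP_K of N_EXPERTS weight matrices are touched for this token;
    # the other experts' parameters are never read.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

out = moe_forward(rng.standard_normal(D))
print(f"active experts per token: {TOP_K}/{N_EXPERTS} "
      f"({TOP_K / N_EXPERTS:.0%} of expert parameters)")
```

DeepSeek V3's ratio is far more aggressive than this toy (37B active out of 671B total, roughly 5.5%), which is where the compute savings come from.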

Then, denied access to Nvidia's best chips by US export controls, their engineers dropped below standard CUDA and wrote in PTX assembly to manually control cross-chip communication.

Sooo that means the hardware ceiling became a software floor.

So what actually happened in the twelve months since, after the dust settled?

  • DeepSeek didn't take over the world's frontier AI development

  • US companies recovered

  • Nvidia's stock came back

  • The hyperscalers didn't cut capex

  • Goldman now projects $500 billion in AI infrastructure spend in 2026

  • The model that was supposed to make compute irrelevant ended up making everyone want more compute (more on the Jevons paradox at some point).

Hmm… did DeepSeek go night night?

The global AI race isn't one race… but two.

Race one: frontier model benchmarks

Who has the best model? Who trained on the most compute? Who can generate the most convincing reasoning chain? This race is happening in SF and Seattle. US companies are winning it, for now, and spending half a trillion dollars in 2026 to keep winning it.

Race two: global distribution

Who becomes the default AI for the next billion users? Who gets embedded in the devices, apps, and infrastructure of the markets that aren't already locked into the Google-Anthropic-OpenAI stack? (sorry Llama)

That latter, open source race is happening in Hangzhou (and at pretty much every other Chinese lab). And it's not close.

Open Source Chinese Models

Any founder who's built a product internationally knows how this plays out. The product that wins the premium market isn't always the product that wins the world. The product that's free, fast, open, and available on the device people already own - that product wins the next billion users.

DeepSeek is free. It's open source. It's preloaded on Huawei phones. And it’s taking over markets that nobody is paying attention to.

Call it a Sputnik moment or not. Call it a distribution strategy. One year in, it’s working exactly as designed. 

STAT OF THE WEEK

156

The number of countries where DeepSeek was the #1 most downloaded app within weeks of launch. Not most used. Not highest revenue. Most wanted, by people who had never heard of it a month before.

See you next week,
Jen, live from Shenzhen

Keep Reading