Reading "Empire of AI" - Talent War, Power, and the Cracks Behind the Doors
AI is no longer just technology — it is an empire.
We just finished reading "Empire of AI" by Karen Hao, which offers a fascinating and critical look at the path AI is taking, telling the story of how OpenAI acts like an empire and revealing the dynamics behind closed doors.
Below are some of the highlights that stood out to us and our thoughts on the book.
Why Empire of AI?
Karen Hao frames the AI industry as a modern empire. Like historical empires, it operates and expands by capturing resources that are not its own: news articles, books, videos, images, and paintings that artists, writers, and content creators put enormous effort into, all for the purpose of training the next, bigger AI model.
This conquest of resources is not limited to data; the AI industry is also fighting over AI talent, over GPUs, and over where to put the next ultra-large data center. And while this amassing of resources, knowledge, and political power is often presented as being for the benefit of mankind, with the promise of AGI, in reality, Karen argues, it primarily benefits the AI empire itself.
The talent war - where the empire began
Every empire needs generals. In Silicon Valley, that means AI researchers and engineers.
On the day of OpenAI’s launch, Elon Musk wrote, “Our most important consideration is recruitment of the best people”.
The talent war was already raging.
Before OpenAI's announcement, Ilya Sutskever was close to staying at Google. To win him over, OpenAI offered $2 million, and Google, in a bid to keep him, had offered 2-3 times that amount.
Even so, Sutskever chose OpenAI.
OpenAI started recruiting top university graduates at $200,000+ base salaries, and poaching talent from competitors has long been the norm in Silicon Valley.
It wasn’t just about hiring — it was about denying your rival the talent they needed.
Still, money was not everything. How did OpenAI manage to attract so much talent in its early days? How could it compete with big tech giants?
Along with early backing from Elon Musk, this was thanks to Sam Altman.
To outsiders, Sam Altman might have seemed to come out of nowhere, but within Silicon Valley he was well known and respected as the president of Y Combinator, a startup accelerator that provided seed funding and mentorship. If you wanted to build things, your best bet was to get into YC, and Sam Altman was the gatekeeper.
Karen describes Altman as someone remarkably good at inspiring people and painting a future that aligns with their visions. He goes even further: he makes people believe in that future, even when they did not think it was possible.
So when Sam Altman and Elon Musk founded OpenAI, they probably truly believed in doing something extraordinary for humankind.
To set itself apart from the profit-driven tech giants of Silicon Valley, OpenAI presented itself as a non-profit.
They competed with the big tech giants on a sense of mission and purpose. Google, Meta, or Microsoft sound like old tech giants where you stay for job security and money. But if you want to build something breathtaking and make your life count, joining a non-profit startup with a beautiful mission is far cooler.
Everyone sharing that sense of mission and purpose is something really amazing, and something I rarely see reflected in the industry today.
AGI for humanity
AGI was OpenAI's central goal from the very beginning. The mission: create the first general AI, distribute it, make it safe, and do "good for the world".
They envisioned tackling major global challenges, most notably "addressing climate change and curing cancer".
Elon Musk feared that Google's acquisition of DeepMind would let Google develop an AGI that could go terribly wrong, possibly ending in catastrophe for humanity.
By operating as a nonprofit, relying on donations, conducting and publishing open research, and sharing all AI models openly, OpenAI pushed to democratize AI.
This would leave people free to explore AI models and ensure that AGI would serve the common good, not privileged companies or individuals.
The company quickly attracted idealists eager to create technology that would change society for the greater good. By prioritizing open research and safety protocols before releasing AI models, it presented itself as a leader in ethical innovation, ensuring AGI would be aligned with human values.
The cracks behind closed doors — Ego, Power, and Departure
This vision began to crumble shortly after the release of GPT-2. Investors were unimpressed by the models' lack of intelligence.
But the path forward was clear, from Sutskever’s perspective: scaling.
More data and more compute.
This meant massive investments from Microsoft and a slow departure from open research. The next models would not be released with open weights; the exact training data and model specifics would not be disclosed.
When OpenAI abandoned its open-research direction, trust in the company eroded.
Under pressure from competitors and investor expectations, safety protocols were sidelined.
Within the company, the divide widened between those who wanted to deploy fast and those who wanted time for testing and implementing safety guardrails.
Tensions eventually led to a major split. The most significant departure came when Dario and Daniela Amodei, along with several top researchers, left to form Anthropic. Their stated mission was to build safer AI, free from some of the compromises they believed OpenAI had made.
Eventually, OpenAI's move to a for-profit structure, backed by Microsoft, fueled further distrust, and more founding members departed, raising concerns over the company's approach to safety and deployment.
Shortly after Sam Altman's brief ouster as CEO, many people at OpenAI left, most notably Ilya Sutskever, long seen as the company's AI visionary.
Eventually, CTO Mira Murati left to form her own empire too. Of course, she did not leave empty-handed. She brought trusted allies with her.
Karen spends a lot of time on the small dramas behind OpenAI's closed doors: the shifting alliances between Ilya Sutskever and Greg Brockman, and CTO Mira Murati's frustrations with Brockman, who, despite reporting to her, would bypass her and go directly to Altman. The book is full of interesting insights into who Sam Altman is: a visionary leader, a manipulator, and a liar.
You will see plenty of interpersonal dynamics, shifting relationships, and who is whose ally. Super fruitful if you enjoy drama 🙂
An alternate future: a more democratic, less colonial AI industry?
Back to the "empire" metaphor, Karen argues AI does not have to be colonial. For instance, she points to a small, Indigenous-driven effort in New Zealand that uses AI to revive the endangered language of the Māori people. The main difference from "big AI" is that all the training data was willingly provided by the Māori people, who agreed to and celebrated the creation of an AI that keeps their language alive. It's an example of AI built independently, fully transparently, and democratically.
So how can we dissolve the AI empire? There are a couple of things:
- Invest in community-driven AI and independent research
- Enforce transparency policies: require companies to disclose training data, enabling evaluation and accountability
- Empower marginalized communities: use journalism and civil society to reveal the impact of AI and foster inclusive technological development.
What do we think about the book?
It is a fascinating read and an extremely well-reported book. There is a wealth of factual information about the players in AI and all the drama behind OpenAI's walls, climaxing in the ousting of Sam Altman.
Sam Altman seems to deny it though 😉
Karen Hao is a brilliant journalist who writes with cinematic flair and dramatic style. At times, reading the book feels like watching a movie, eagerly waiting for what happens next.
We also appreciated the angle of exposing the unacknowledged costs behind developing AI models: the environmental impact of computing and data centers, the labor behind filtering toxic content, and the psychological toll on low-paid workers. While these issues have been discussed before, notably in The Atlas of AI by Kate Crawford, it is a good reminder not to look away.
That being said, the book is a strong critique of the AI industry. As AI enthusiasts, we sometimes felt Karen Hao’s stance lacked balance. Reading it often feels like she has little respect for the AI industry or the people working to build AI that many of us already benefit from.
It is an extremely well-researched book, but its bias is also clear. The critique of AI repeats itself: for example, it is often framed as "artists vs AI," when much of it today could potentially be "artists together with AI". Though we understand the critiques are there for a good reason.
During OpenAI's internal turmoil, many people left to start their own companies: Anthropic, DAIR, Safe Superintelligence, and Thinking Machines Lab. The stated reasons were a lack of safety guardrails, transparency, and diversity in AI research and development. But can we say that Anthropic deploys more safely than OpenAI? In the media they may present themselves as alternatives, yet in practice they produce the same products, using the same technology, trained on the same data. Every company claims that safety and alignment are priorities. But without transparency, we, the public, have no way of verifying whether these promises are genuine.
Ultimately, the departures were not just about safety and distrust. They were also about ego and power, or the lack of it. Reading the book, you feel that many leaders believed they no longer had the influence, resources, or freedom to "do the right things," or that they had fulfilled the mission they set out with at OpenAI and it was time for a new adventure.
The book does end on a positive note. In the epilogue, she highlights the interesting example of AI being used to keep alive a language that nearly went extinct due to colonization, an angle well worth supporting. AI is a tool that can be used for bad, but also for good.
We are looking forward to a more balanced view of AI in her future books.
In any case, it is definitely worth reading!
Enjoyed this week's newsletter? Give it a ❤️ so I know to write similar ones in the future. Leave a comment; I try to respond to every one.