Am I the Only One Who Watched Terminator?

Feb 8, 2025

AI is getting the nukes now. Cool, cool, cool. Google has quietly deleted a key section from its AI Principles: the pledge not to develop AI for weapons or surveillance tech.

TL;DR Takeaways

  • Google quietly dropped its explicit ban on developing AI for weapons and surveillance, signaling a major ethical pivot.
  • The move coincides with rising geopolitical tensions and the start of Trump’s second term.
  • Critics warn this could normalize AI-enabled warfare, removing previous corporate guardrails.
  • OpenAI is also partnering with U.S. national labs for nuclear research.

The AI Arms Race Is Real

There was a time when Google made loud promises not to build AI for warfare or surveillance. Way back in 2018, Google employees revolted over Project Maven, a military project using AI to analyze drone footage. Thousands protested, some quit, and Google promised to steer clear of weapons projects. That line is now gone.

In February 2025, Google updated its AI Principles page. Out went the explicit ban; in came a blog post from DeepMind CEO Demis Hassabis and SVP James Manyika about “responsible” AI partnerships with democracies. Critics like Margaret Mitchell, who formerly co-led Google’s Ethical AI team, aren’t buying it, warning the change opens the door to AI “that can kill people.”

The Pentagon now has an office dedicated to AI systems for warfare. Tech companies are pouring billions into AI infrastructure and eyeing military contracts to monetize it. Alphabet alone plans $75 billion in AI investments.

Meanwhile, OpenAI, once the poster child for AI safety, will now provide its GPT models to U.S. national labs for nuclear weapons research, “to reduce nuclear risk,” the company claims. Critics aren’t convinced.

The Tech World Is Split

  • Andrew Ng supports the move, citing national security.
  • Meredith Whittaker and Geoffrey Hinton warn against weaponizing AI.
  • Even Jeff Dean previously opposed autonomous weapons.

Inside Google, not everyone is on board. Reportedly, memes on internal message boards joked, “Are we the baddies?”

Shall We Play a Game?

This isn’t just about tech companies flipping their ethics. It’s about normalizing machine-speed warfare, where systems react to systems without human oversight. Mistakes will happen. Accountability will vanish. And the world’s rules of war aren’t ready.

I want to believe, I need to believe, that technology can be an overall force for good. Self-driving cars will save lives. AI will diagnose patients and cure diseases. I want that Star Trek future. But all of that hinges on us not blowing ourselves up first.

Would You Like to Know More?