
Am I the Only One Who Watched Terminator?

Feb 10, 2025

AI is getting the nukes now. Cool, cool, cool. And Google has quietly deleted a key section from its AI Principles, the pledge against developing AI for weapons or surveillance tech.

TL;DR Takeaways

  • Google quietly dropped its explicit ban on developing AI for weapons and surveillance
  • Critics warn this could normalize AI-enabled warfare
  • OpenAI is also partnering with U.S. National Labs for “nuclear research.”

The AI Arms Race Is Real

There was a time when Google made loud promises not to build AI for warfare or surveillance. Way back in 2018, Google employees revolted over Project Maven, a military project using AI to analyze drone footage. Thousands protested, some quit, and Google promised to steer clear of weapons projects. That line in the sand is now gone.

Google just updated its AI Principles page. Out went the explicit ban; in came a blog post from DeepMind CEO Demis Hassabis and SVP James Manyika about “responsible” AI partnerships with democracies. Critics pushed back fast (more on them below).

The Pentagon now has an office dedicated to AI systems for warfare. Tech companies are pouring billions into AI infrastructure and eyeing military contracts to monetize it. Alphabet alone plans $75 billion in AI investments.

Meanwhile, OpenAI, once the poster child for AI safety, will now provide GPT models to U.S. national labs for nuclear weapons research, “to reduce nuclear risk,” they claim. Critics aren’t convinced.

The Tech World Is Split

  • Andrew Ng supports the move, citing national security.
  • Meredith Whittaker and Geoffrey Hinton warn against weaponizing AI.
  • Jeff Dean previously opposed autonomous weapons.
  • Margaret Mitchell isn’t buying it, warning it opens the door to AI “that can kill people.”

And inside Google, not everyone is on board. Memes on internal boards have apparently joked, “Are we the baddies?” I’m curious whether we’ll see protests or mass resignations, but in today’s not-so-hot job market I wouldn’t be surprised if the response is much more subdued than last time.

Shall We Play a Game?

This isn’t just about tech companies walking back their ethics. It’s about normalizing machine-speed warfare, where systems react to systems without human oversight. Mistakes will happen. Accountability will vanish. And the world’s rules of war aren’t ready. I want to believe, I need to believe, that technology can be an overall force for good. Self-driving cars will save lives. AI will diagnose patients and cure diseases. I want that Star Trek future. But that hinges on us not blowing ourselves up.

Would You Like to Know More?