
Am I The Only One Who Watched Terminator?

Feb 8, 2025

So we’re giving AIs the nukes now. What could go wrong?

TL;DR Takeaways

  • Google has quietly removed its explicit ban on developing AI for weapons and surveillance, signaling a major ethical pivot.
  • The change coincides with rising geopolitical tensions, defense industry partnerships, and the start of Trump’s second term.
  • Critics warn this could normalize AI-enabled warfare, removing previous corporate guardrails on how AI is used in military contexts.
  • OpenAI is also offering its models to U.S. national labs for nuclear weapons-related research under the guise of risk reduction.

Google Quietly Scraps Its AI Weapons Ban — And That’s a Big Deal

There was a time when Google made loud, proud promises not to build AI for warfare or surveillance. That time is over.

In a subtle but significant move, Google has removed a key section from its AI Principles — the part where it explicitly pledged not to develop AI for weapons or surveillance tech. The quiet deletion signals a shift that’s anything but quiet in its implications. This isn’t just a policy tweak; it’s a signal flare in the escalating race between Silicon Valley and global militaries to arm up with artificial intelligence.

A Promise Made Under Pressure

Rewind to 2018. Internal protests over Google’s involvement in Project Maven — a U.S. military initiative to use AI to analyze drone footage — went viral. Around a dozen employees quit. Over 4,000 signed an open letter demanding the company sever ties with the Department of Defense.

In response, Google published a now-famous set of AI principles. Among them: a hard line against building AI for weapons or surveillance. It was framed as a moral boundary, a gesture of corporate ethics. It felt, at the time, like employee activism had won a rare and important victory.

That line is now gone.

The Justification: National Security… But Make It Vague

In February 2025, Google updated its AI Principles page. The ban on weapons and surveillance quietly disappeared. In its place? A blog post co-authored by DeepMind CEO Demis Hassabis and SVP James Manyika that reframes the company’s posture.

The new narrative: the world is geopolitically unstable, and democracies need to lead in AI development to uphold values like freedom and human rights. Working with governments on national security efforts is now considered compatible with Google’s mission — provided it’s “responsible” and aligned with “international law.”

Vague, carefully worded. And significantly weaker than a clear “we will not build this.”

Critics — including Margaret Mitchell, former co-lead of Google’s ethical AI team — aren’t buying it. She described the reversal as “erasing the work” of the ethical AI community and fears it opens the door to AI systems “that can kill people.”

Politics, Power, and Timing

The timing is telling.

Just weeks after Donald Trump’s second inauguration — an event where Google CEO Sundar Pichai was notably present — Trump rescinded the Biden-era executive order that required companies to share AI safety test results with the government. Google, meanwhile, donated $1 million to Trump’s inaugural fund.

Coincidence? Maybe. But the optics aren’t subtle. Human rights advocates are already calling Google a “corporate war machine.”

The Tech World Is Split

This isn’t a unanimous march to militarization. The AI community is deeply divided.

  • Andrew Ng, founder of Google Brain, welcomes the reversal. He’s argued that American companies have a duty to help U.S. soldiers and sees military AI as a necessary response to China’s rapid tech growth.
  • Meredith Whittaker, president of Signal and a key voice in the 2018 protests, still believes Google has no business in war.
  • Geoffrey Hinton, AI pioneer and Nobel laureate, wants AI weaponry regulated — or banned outright.
  • Even Jeff Dean, now DeepMind’s chief scientist, once signed a public letter opposing autonomous weapons.

Inside Google, not everyone’s on board either. Alphabet union leader Parul Koul called the decision “deeply concerning,” and internal message boards reportedly lit up with memes asking whether Google is now “the baddies.”

From Drones to Infrastructure

This isn’t just about drones anymore. Last year, Google and Amazon were hit with employee backlash over Project Nimbus — a contract providing cloud infrastructure to the Israeli government, including its military.

The debate has moved from “what kind of AI are we building?” to “what kind of systems are we enabling?” Hosting infrastructure, training models, optimizing logistics — it’s all now part of the modern military tech stack.

And nearly every big player is getting in on the action:

  • OpenAI reversed its weapons ban and is working with defense firms like Anduril. It’s also giving U.S. national labs working on nuclear security access to its models.
  • Anthropic is partnering with Palantir to pitch Claude to military clients.
  • Amazon is teaming up with Palantir as well, supplying the cloud infrastructure behind those defense deployments.

In short: AI’s next big customer might be your government’s defense department.

The AI Arms Race Is Real

The Pentagon recently set up an office specifically focused on AI for military use — autonomous systems, battlefield logistics, command and control. The goal? Stay ahead of China and Russia in AI-powered warfare.

Meanwhile, the financial stakes are massive. Tech companies are investing tens of billions into AI infrastructure. Monetizing that through military partnerships makes business sense — even if it raises a few existential questions.

Alphabet alone is planning $75 billion in capex, mostly for AI. Pair that with slower-than-expected growth in Google Cloud, and the pressure to chase defense dollars starts to make uncomfortable sense.

Oh, and OpenAI Is Getting Into Nuclear Work Too

Meanwhile, OpenAI — once seen as the poster child for AI safety — made its own quiet leap toward military applications. In late January, the company announced it would offer its technology to U.S. national laboratories for nuclear weapons research, under the banner of reducing nuclear risk. The deal includes access to GPT models for projects related to securing nuclear materials and mitigating nuclear war scenarios. While OpenAI framed the move as a safety initiative, critics point out the irony of using an occasionally hallucination-prone language model in the world’s most high-stakes domain — and note how quickly the guardrails around AI’s military use are vanishing across the board.

Why This Matters

This shift isn’t just about corporate hypocrisy. It’s about the normalization of automated warfare.

As AI enters the battlefield, we’re moving toward machine-speed conflict — systems reacting to systems, with little human oversight. Diplomacy can’t keep up with algorithms. And unlike humans, algorithms can’t be held accountable.

Mistakes will happen. Civilian casualties are likely. And when the fog of war is generated by an opaque neural net, who do we blame?

Human Rights Watch warns that AI muddies responsibility. Academics are calling for new Geneva Conventions for algorithms. Some propose an IAEA-style international body to regulate military AI, complete with sanctions for violations.

So What Now?

Google’s decision isn’t just a policy change — it’s a red flag. It shows that self-regulation in AI ethics is breaking down under economic and political pressure. The “Don’t Be Evil” era is long gone. The AI arms race is here, and the ethical lines are blurring fast.

The company says it will act responsibly. But with the explicit bans gone, “responsible” could mean whatever the quarterly earnings call needs it to mean.

The line in the sand? It’s been erased.

And if we want one drawn again, it won’t be Big Tech doing the drawing.

Would You Like to Know More?