No Result
View All Result
SUBMIT YOUR ARTICLES
  • Login
Monday, December 22, 2025
TheAdviserMagazine.com
  • Home
  • Financial Planning
    • Financial Planning
    • Personal Finance
  • Market Research
    • Business
    • Investing
    • Money
    • Economy
    • Markets
    • Stocks
    • Trading
  • 401k Plans
  • College
  • IRS & Taxes
  • Estate Plans
  • Social Security
  • Medicare
  • Legal
  • Home
  • Financial Planning
    • Financial Planning
    • Personal Finance
  • Market Research
    • Business
    • Investing
    • Money
    • Economy
    • Markets
    • Stocks
    • Trading
  • 401k Plans
  • College
  • IRS & Taxes
  • Estate Plans
  • Social Security
  • Medicare
  • Legal
No Result
View All Result
TheAdviserMagazine.com
No Result
View All Result
Home Market Research Economy

Are We Waking Up Fast Enough to the Dangers of AI Militarism?

by TheAdviserMagazine
2 months ago
in Economy
Reading Time: 8 mins read
A A
Are We Waking Up Fast Enough to the Dangers of AI Militarism?
Share on FacebookShare on TwitterShare on LInkedIn


By Tom Valovic, a writer, editor, futurist, and the author of Digital Mythologies (Rutgers University Press), a series of essays that explored emerging social and cultural issues raised by the advent of the Internet. He has served as a consultant to the former Congressional Office of Technology Assessment and was editor-in- chief of Telecommunications magazine for many years. Tom has written about the effects of technology on society for a variety of publications including Common Dreams, Counterpunch, The Technoskeptic, the Boston Globe, the San Francisco Examiner, Columbia University’s Media Studies Journal, and others. He can be reached at [email protected]. Originally published at Common Dreams

Yves here. The stoopid, it burns. AI errors and shortcomings are getting more and more press, yet implementation in high risk settings continues. This post discusses Trump Administration’s eagerness to use AI for critical military decision despite poor performance in war games and similar tests.

By Tom Valovic, a writer, editor, futurist, and the author of Digital Mythologies (Rutgers University Press), a series of essays that explored emerging social and cultural issues raised by the advent of the Internet. He has served as a consultant to the former Congressional Office of Technology Assessment and was editor-in- chief of Telecommunications magazine for many years. Tom has written about the effects of technology on society for a variety of publications including Common Dreams, Counterpunch, The Technoskeptic, the Boston Globe, the San Francisco Examiner, Columbia University’s Media Studies Journal, and others. He can be reached at [email protected]. Originally published at Common Dreams

AI is everywhere these days. There’s no escape. And as geopolitical events appear to spiral out of control in the Ukraine and Gaza, it seems clear that AI, while theoretically a force for positive change, has become has become a worrisome accelerant to the volatility and destabilization that may lead us to once again thinking the unthinkable—in this case World War III.

The reckless and irresponsible pace of AI development badly needs a measure of moderation and wisdom that seems sorely lacking in both the technology and political spheres. Those who we have relied on to provide this in the past—leading academics, forward-thinking political figures, and various luminaries and thought leaders in popular culture—often seem to be missing in action in terms of loudly sounding the necessary alarms. Lately, however, and offering at least a shred of hope, we’re seeing more coverage in the mainstream press of the dangers of AI’s destructive potential.

To get a feel for perspectives on AI in a military context, it’s useful to start with an article that appeared in Wired magazine a few years ago, “The AI-Powered, Totally Autonomous Future of War Is Here.” This treatment practically gushed with excitement about the prospect of autonomous warfare using AI. It went on to discuss how Big Tech, the military, and the political establishment were increasingly aligning to promote the use of weaponized AI in a mad new AI-nuclear arms race. The article also provided a clear glimpse of the foolish transparency of the all-too-common Big Tech mantra that “it’s really dangerous but let’s do it anyway.”

More recently, we see supposed thought leaders like former Google CEO Eric Schmidt sounding the alarm about AI in warfare after, of course, being heavily instrumental in promoting it. A March 2025 article appearing in Fortune noted that “Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks are warning that treating the global AI arms race like the Manhattan Project could backfire. Instead of reckless acceleration, they propose a strategy of deterrence, transparency, and international cooperation—before superhuman AI spirals out of control.” It’s unfortunate that Mr. Schmidt didn’t think more about his planetary-level “oops” before he decided to be so heavily instrumental in developing its capabilities.

The acceleration of frenzied AI development has now been green-lit by the Trump administration with US Vice President JD Vance’s deep ties to Big Tech becoming more and more apparent. This position is easily parsed—full speed ahead. One of Trump’s first official acts was to announce the Stargate Project, a $500 billion investment in AI infrastructure. Both President Donald Trump and Vance have made their position crystal clear about not attempting in any way to slow down progress by developing AI guardrails and regulation even to the point of attempting to preclude states from enacting their own regulation as part of the so called “Big Beautiful Bill.”

Widening The Public Debate

If there is any bright spot in this grim scenario, it’s this: The dangers of AI militarism are starting to get more widely publicized as AI itself gets increased scrutiny in political circles and the mainstream media. In addition to the Fortune article and other media treatments, a recent article in Politico discussed how AI models seem to be predisposed toward military solutions and conflict:

Last year Schneider, director of the Hoover Wargaming and Crisis Simulation Initiative at Stanford University, began experimenting with war games that gave the latest generation of artificial intelligence the role of strategic decision-makers. In the games, five off-the-shelf large language models or LLMs—OpenAI’s GPT-3.5, GPT-4, and GPT-4-Base; Anthropic’s Claude 2; and Meta’s Llama-2 Chat—were confronted with fictional crisis situations that resembled Russia’s invasion of Ukraine or China’s threat to Taiwan. The results? Almost all of the AI models showed a preference to escalate aggressively, use firepower indiscriminately, and turn crises into shooting wars—even to the point of launching nuclear weapons. “The AI is always playing Curtis LeMay,” says Schneider, referring to the notoriously nuke-happy Air Force general of the Cold War. “It’s almost like the AI understands escalation, but not deescalation. We don’t really know why that is.”

Personally, I don’t think “why that is” is much of a mystery. There’s a widespread perception that AI is a fairly recent development coming out of the high-tech sector. But this is a somewhat misleading picture frequently painted or poorly understood by corporate-influenced media journalists. The reality is that AI development was a huge ongoing investment on the part of government agencies for decades. According to the Brookings Institution, in order to advance an AI arms race between the US and China, the federal government, working closely with the military, has served as an incubator for thousands of AI projects in the private sector under the National AI Initiative act of 2020. The COO of Open AI, the company that created ChatGPT, openly admitted to Timemagazine that government funding has been the main driver of AI development for many years.

This national AI program has been overseen by a surprising number of government agencies. They include but are not limited to government alphabet soup agencies like DARPA, DOD, NASA, NIH, IARPA, DOE, Homeland Security, and the State Department. Technology is power and, at the end of the day, many tech-driven initiatives are chess pieces in a behind-the-scenes power struggle taking place in an increasingly opaque technocratic geopolitical landscape. In this mindset, whoever has the best AI systems will gain not only technological and economic superiority but also military dominance. But, of course, we have seen this movie before in the case of the nuclear arms race.

The Politico article also pointed out that AI is being groomed to make high-level and human-independent decisions concerning the launch of nuclear weapons:

The Pentagon claims that won’t happen in real life, that its existing policy is that AI will never be allowed to dominate the human “decision loop” that makes a call on whether to, say, start a war—certainly not a nuclear one. But some AI scientists believe the Pentagon has already started down a slippery slope by rushing to deploy the latest generations of AI as a key part of America’s defenses around the world. Driven by worries about fending off China and Russia at the same time, as well as by other global threats, the Defense Department is creating AI-driven defensive systems that in many areas are swiftly becoming autonomous—meaning they can respond on their own, without human input—and move so fast against potential enemies that humans can’t keep up.

Despite the Pentagon’s official policy that humans will always be in control, the demands of modern warfare—the need for lightning-fast decision-making, coordinating complex swarms of drones, crunching vast amounts of intelligence data, and competing against AI-driven systems built by China and Russia—mean that the military is increasingly likely to become dependent on AI. That could prove true even, ultimately, when it comes to the most existential of all decisions: whether to launch nuclear weapons.

The AI Technocratic Takeover: Planned for Decades

Learning the history behind the military’s AI plans is essential to understanding its current complexities. Another eye-opening perspective on the double threat of AI and nuclear working in tandem was offered by Peter Byrne in “Into the Uncanny Valley: Human-AI War Machines”:

In 1960, J.C.R. Licklider published “Man-Computer Symbiosis” in an electronics industry trade journal. Funded by the Air Force, Licklider explored methods of amalgamating AIs and humans into combat-ready machines, anticipating the current military-industrial mission of charging AI-guided symbionts with targeting humans…

Fast forward sixty years: Military machines infused with large language models are chatting verbosely with convincing airs of authority. But, projecting humanoid qualities does not make those machines smart, trustworthy, or capable of distinguishing fact from fiction. Trained on flotsam scraped from the internet, AI is limited by a classic “garbage in-garbage out” problem, its Achilles’ heel. Rather than solving ethical dilemmas, military AI systems are likely to multiply them, as has been occurring with the deployment of autonomous drones that cannot reliably distinguish rifles from rakes, or military vehicles from family cars…. Indeed, the Pentagon’s oft-echoed claim that military artificial intelligence is designed to adhere to accepted ethical standards is absurd, as exemplified by the live-streamed mass murder of Palestinians by Israeli forces, which has been enabled by dehumanizing AI programs that a majority of Israelis applaud. AI-human platforms sold to Israel by Palantir, Microsoft, Amazon Web Services, Dell, and Oracle are programmed to enable war crimes and genocide.

The role of the military in developing most of the advanced technologies that have worked their way into modern society still remains beneath the threshold of public awareness. But in the current environment characterized by the unholy alliance between corporate and government power, there no longer seems to be an ethical counterweight to unleashing a Pandora’s box of seemingly out-of-control AI technologies for less than noble purposes.

That the AI conundrum has appeared in the midst of a burgeoning world polycrisis seems to point toward a larger-than-life existential crisis for humanity that’s been ominously predicted and portrayed in science fiction movies, literature, and popular culture for decades. Arguably, these were not just films for speculative entertainment but in current circumstances can be viewed as warnings from our collective unconscious that have largely gone unheeded. As we continue to be force-fed AI, the voting public needs to find a way to push back against this onslaught against both personal autonomy and the democratic process.

No one had the opportunity to vote on whether we want to live in a quasi-dystopian technocratic world where human control and agency is constantly being eroded. And now, of course, AI itself is upon us in full force, increasingly weaponized not only against nation-states but also against ordinary citizens. As Albert Einstein warned, “It has become appallingly obvious that our technology has exceeded our humanity.” In a troubling ironic twist, we know that Einstein played a strong role in developing the technology for nuclear weapons. And yet somehow, like J. Robert Oppenheimer, he eventually seemed to understand the deeper implications of what he helped to unleash.

Can we say the same about today’s AI CEOs and other self-appointed experts as they gleefully unleash this powerful force while at the same time casually proclaiming that they don’t really know if AI and AGI might actually spell the end of humanity and Planet Earth itself?



Source link

Tags: DangersFastMilitarismWaking
ShareTweetShare
Previous Post

Holders Cross 8.1 Mil, Ahead Of XRP & ADA

Next Post

Gold ETF inflows jump fourfold in September to Rs 8,363 crore, hit record high

Related Posts

edit post
David Deutsch on the Pattern

David Deutsch on the Pattern

by TheAdviserMagazine
December 22, 2025
0

0:37Intro. Russ Roberts: Before introducing today's guest, I want to mention something important about when it was recorded.This week's episode...

edit post
Fentanyl Classified As Weapon Of Mass Destruction

Fentanyl Classified As Weapon Of Mass Destruction

by TheAdviserMagazine
December 22, 2025
0

Fentanyl has been classified as a Weapon of Mass Destruction (WMD) after it claimed the lives of 57,000 Americans in...

edit post
The Brutality of the US Empire Is on Display in Venezuela

The Brutality of the US Empire Is on Display in Venezuela

by TheAdviserMagazine
December 21, 2025
0

I can’t help but feel very sorry for the Venezuelan people.Imagine living under a brutal and corrupt illegitimately-elected dictatorship, a...

edit post
Links 12/21/2025 | naked capitalism

Links 12/21/2025 | naked capitalism

by TheAdviserMagazine
December 21, 2025
0

The art of Pysanky: traditional egg decorating from Europe 📹vilsonpessanka pic.twitter.com/deAXF6ETty — Science girl (@sciencegirl) December 20, 2025 James Webb...

edit post
The Sunday Morning Movie Presents: An Ideal Husband (1969) Run Time: 1H 22M Bonus: Woo!

The Sunday Morning Movie Presents: An Ideal Husband (1969) Run Time: 1H 22M Bonus: Woo!

by TheAdviserMagazine
December 21, 2025
0

Greetings gentle readers and welcome to another installment of the Sunday Morning Movie. Today it’s an adaptation of Oscar Wilde’s...

edit post
These 15 Coal Plants Would Have Retired. Then Came AI and Trump.

These 15 Coal Plants Would Have Retired. Then Came AI and Trump.

by TheAdviserMagazine
December 21, 2025
0

By Joe Fassler, a writer and journalist whose work on climate and technology appears in outlets like The Guardian, The...

Next Post
edit post
Gold ETF inflows jump fourfold in September to Rs 8,363 crore, hit record high

Gold ETF inflows jump fourfold in September to Rs 8,363 crore, hit record high

edit post
“Hot, Hot, Hot”: Robert Kiyosaki on silver and Ethereum, sees white metal hitting

“Hot, Hot, Hot”: Robert Kiyosaki on silver and Ethereum, sees white metal hitting $75

  • Trending
  • Comments
  • Latest
edit post
How Long is a Last Will and Testament Valid in North Carolina?

How Long is a Last Will and Testament Valid in North Carolina?

December 8, 2025
edit post
In an Ohio Suburb, Sprawl Is Being Transformed Into Walkable Neighborhoods

In an Ohio Suburb, Sprawl Is Being Transformed Into Walkable Neighborhoods

December 14, 2025
edit post
Democrats Insist On Taxing Tips        

Democrats Insist On Taxing Tips        

December 15, 2025
edit post
Detroit Seniors Are Facing Earlier Shutoff Notices This Season

Detroit Seniors Are Facing Earlier Shutoff Notices This Season

December 20, 2025
edit post
Living Trusts in NC Explained: What You Should Know

Living Trusts in NC Explained: What You Should Know

December 16, 2025
edit post
Elon Musk adds to his 9 billion fortune after Delaware court awards him  billion pay package

Elon Musk adds to his $679 billion fortune after Delaware court awards him $55 billion pay package

December 20, 2025
edit post
David Deutsch on the Pattern

David Deutsch on the Pattern

0
edit post
Key takeaways from Micron’s (MU) first-quarter 2026 results

Key takeaways from Micron’s (MU) first-quarter 2026 results

0
edit post
JPMorgan weighs offering Bitcoin, crypto trading to institutional clients

JPMorgan weighs offering Bitcoin, crypto trading to institutional clients

0
edit post
Is Costco Open on Christmas Day 2025?

Is Costco Open on Christmas Day 2025?

0
edit post
Regeneron Pharmaceuticals: Nach erfolgreichem Ausbruch – Pullback-Szenario bietet attraktives Setup!

Regeneron Pharmaceuticals: Nach erfolgreichem Ausbruch – Pullback-Szenario bietet attraktives Setup!

0
edit post
Sun shines again for SolarEdge

Sun shines again for SolarEdge

0
edit post
JPMorgan weighs offering Bitcoin, crypto trading to institutional clients

JPMorgan weighs offering Bitcoin, crypto trading to institutional clients

December 22, 2025
edit post
Is Costco Open on Christmas Day 2025?

Is Costco Open on Christmas Day 2025?

December 22, 2025
edit post
Golden Goose agrees deal to bring in HSG as majority shareholder

Golden Goose agrees deal to bring in HSG as majority shareholder

December 22, 2025
edit post
Outbound Indian university enrolments fall after three-year rise

Outbound Indian university enrolments fall after three-year rise

December 22, 2025
edit post
David Deutsch on the Pattern

David Deutsch on the Pattern

December 22, 2025
edit post
Regeneron Pharmaceuticals: Nach erfolgreichem Ausbruch – Pullback-Szenario bietet attraktives Setup!

Regeneron Pharmaceuticals: Nach erfolgreichem Ausbruch – Pullback-Szenario bietet attraktives Setup!

December 22, 2025
The Adviser Magazine

The first and only national digital and print magazine that connects individuals, families, and businesses to Fee-Only financial advisers, accountants, attorneys and college guidance counselors.

CATEGORIES

  • 401k Plans
  • Business
  • College
  • Cryptocurrency
  • Economy
  • Estate Plans
  • Financial Planning
  • Investing
  • IRS & Taxes
  • Legal
  • Market Analysis
  • Markets
  • Medicare
  • Money
  • Personal Finance
  • Social Security
  • Startups
  • Stock Market
  • Trading

LATEST UPDATES

  • JPMorgan weighs offering Bitcoin, crypto trading to institutional clients
  • Is Costco Open on Christmas Day 2025?
  • Golden Goose agrees deal to bring in HSG as majority shareholder
  • Our Great Privacy Policy
  • Terms of Use, Legal Notices & Disclosures
  • Contact us
  • About Us

© Copyright 2024 All Rights Reserved
See articles for original source and related links to external sites.

Welcome Back!

Login to your account below

Forgotten Password?

Retrieve your password

Please enter your username or email address to reset your password.

Log In
No Result
View All Result
  • Home
  • Financial Planning
    • Financial Planning
    • Personal Finance
  • Market Research
    • Business
    • Investing
    • Money
    • Economy
    • Markets
    • Stocks
    • Trading
  • 401k Plans
  • College
  • IRS & Taxes
  • Estate Plans
  • Social Security
  • Medicare
  • Legal

© Copyright 2024 All Rights Reserved
See articles for original source and related links to external sites.