Sunday, November 2, 2025
TheAdviserMagazine.com

Are We Waking Up Fast Enough to the Dangers of AI Militarism?

by TheAdviserMagazine
3 weeks ago
in Economy
Reading Time: 8 mins read


Yves here. The stoopid, it burns. AI errors and shortcomings are getting more and more press, yet implementation in high-risk settings continues. This post discusses the Trump administration’s eagerness to use AI for critical military decisions despite poor performance in war games and similar tests.

By Tom Valovic, a writer, editor, futurist, and the author of Digital Mythologies (Rutgers University Press), a series of essays that explored emerging social and cultural issues raised by the advent of the Internet. He has served as a consultant to the former Congressional Office of Technology Assessment and was editor-in- chief of Telecommunications magazine for many years. Tom has written about the effects of technology on society for a variety of publications including Common Dreams, Counterpunch, The Technoskeptic, the Boston Globe, the San Francisco Examiner, Columbia University’s Media Studies Journal, and others. He can be reached at [email protected]. Originally published at Common Dreams

AI is everywhere these days. There’s no escape. And as geopolitical events appear to spiral out of control in Ukraine and Gaza, it seems clear that AI, while theoretically a force for positive change, has become a worrisome accelerant to the volatility and destabilization that may lead us to once again thinking the unthinkable—in this case, World War III.

The reckless and irresponsible pace of AI development badly needs a measure of moderation and wisdom that seems sorely lacking in both the technology and political spheres. Those we have relied on to provide this in the past—leading academics, forward-thinking political figures, and various luminaries and thought leaders in popular culture—often seem to be missing in action when it comes to loudly sounding the necessary alarms. Lately, however, offering at least a shred of hope, we’re seeing more coverage in the mainstream press of the dangers of AI’s destructive potential.

To get a feel for perspectives on AI in a military context, it’s useful to start with an article that appeared in Wired magazine a few years ago, “The AI-Powered, Totally Autonomous Future of War Is Here.” This treatment practically gushed with excitement about the prospect of autonomous warfare using AI. It went on to discuss how Big Tech, the military, and the political establishment were increasingly aligning to promote the use of weaponized AI in a mad new AI-nuclear arms race. The article also provided a clear glimpse of the foolish transparency of the all-too-common Big Tech mantra that “it’s really dangerous but let’s do it anyway.”

More recently, we see supposed thought leaders like former Google CEO Eric Schmidt sounding the alarm about AI in warfare after, of course, being heavily instrumental in promoting it. A March 2025 article appearing in Fortune noted that “Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks are warning that treating the global AI arms race like the Manhattan Project could backfire. Instead of reckless acceleration, they propose a strategy of deterrence, transparency, and international cooperation—before superhuman AI spirals out of control.” It’s unfortunate that Mr. Schmidt didn’t think more about his planetary-level “oops” before he decided to be so heavily instrumental in developing its capabilities.

The acceleration of frenzied AI development has now been green-lit by the Trump administration, with US Vice President JD Vance’s deep ties to Big Tech becoming more and more apparent. This position is easily parsed—full speed ahead. One of Trump’s first official acts was to announce the Stargate Project, a $500 billion investment in AI infrastructure. Both President Donald Trump and Vance have made crystal clear that they will not attempt in any way to slow down progress by developing AI guardrails and regulation, even to the point of attempting to preclude states from enacting their own regulation as part of the so-called “Big Beautiful Bill.”

Widening The Public Debate

If there is any bright spot in this grim scenario, it’s this: The dangers of AI militarism are starting to get more widely publicized as AI itself gets increased scrutiny in political circles and the mainstream media. In addition to the Fortune article and other media treatments, a recent article in Politico discussed how AI models seem to be predisposed toward military solutions and conflict:

Last year Schneider, director of the Hoover Wargaming and Crisis Simulation Initiative at Stanford University, began experimenting with war games that gave the latest generation of artificial intelligence the role of strategic decision-makers. In the games, five off-the-shelf large language models or LLMs—OpenAI’s GPT-3.5, GPT-4, and GPT-4-Base; Anthropic’s Claude 2; and Meta’s Llama-2 Chat—were confronted with fictional crisis situations that resembled Russia’s invasion of Ukraine or China’s threat to Taiwan. The results? Almost all of the AI models showed a preference to escalate aggressively, use firepower indiscriminately, and turn crises into shooting wars—even to the point of launching nuclear weapons. “The AI is always playing Curtis LeMay,” says Schneider, referring to the notoriously nuke-happy Air Force general of the Cold War. “It’s almost like the AI understands escalation, but not deescalation. We don’t really know why that is.”

Personally, I don’t think “why that is” is much of a mystery. There’s a widespread perception that AI is a fairly recent development coming out of the high-tech sector. But this is a somewhat misleading picture, frequently painted or poorly understood by corporate-influenced media journalists. The reality is that AI development was a huge ongoing investment on the part of government agencies for decades. According to the Brookings Institution, in order to advance an AI arms race between the US and China, the federal government, working closely with the military, has served as an incubator for thousands of AI projects in the private sector under the National AI Initiative Act of 2020. The COO of OpenAI, the company that created ChatGPT, openly admitted to Time magazine that government funding has been the main driver of AI development for many years.

This national AI program has been overseen by a surprising number of government agencies. They include but are not limited to government alphabet soup agencies like DARPA, DOD, NASA, NIH, IARPA, DOE, Homeland Security, and the State Department. Technology is power and, at the end of the day, many tech-driven initiatives are chess pieces in a behind-the-scenes power struggle taking place in an increasingly opaque technocratic geopolitical landscape. In this mindset, whoever has the best AI systems will gain not only technological and economic superiority but also military dominance. But, of course, we have seen this movie before in the case of the nuclear arms race.

The Politico article also pointed out that AI is being groomed to make high-level and human-independent decisions concerning the launch of nuclear weapons:

The Pentagon claims that won’t happen in real life, that its existing policy is that AI will never be allowed to dominate the human “decision loop” that makes a call on whether to, say, start a war—certainly not a nuclear one. But some AI scientists believe the Pentagon has already started down a slippery slope by rushing to deploy the latest generations of AI as a key part of America’s defenses around the world. Driven by worries about fending off China and Russia at the same time, as well as by other global threats, the Defense Department is creating AI-driven defensive systems that in many areas are swiftly becoming autonomous—meaning they can respond on their own, without human input—and move so fast against potential enemies that humans can’t keep up.

Despite the Pentagon’s official policy that humans will always be in control, the demands of modern warfare—the need for lightning-fast decision-making, coordinating complex swarms of drones, crunching vast amounts of intelligence data, and competing against AI-driven systems built by China and Russia—mean that the military is increasingly likely to become dependent on AI. That could prove true even, ultimately, when it comes to the most existential of all decisions: whether to launch nuclear weapons.

The AI Technocratic Takeover: Planned for Decades

Learning the history behind the military’s AI plans is essential to understanding its current complexities. Another eye-opening perspective on the double threat of AI and nuclear working in tandem was offered by Peter Byrne in “Into the Uncanny Valley: Human-AI War Machines”:

In 1960, J.C.R. Licklider published “Man-Computer Symbiosis” in an electronics industry trade journal. Funded by the Air Force, Licklider explored methods of amalgamating AIs and humans into combat-ready machines, anticipating the current military-industrial mission of charging AI-guided symbionts with targeting humans…

Fast forward sixty years: Military machines infused with large language models are chatting verbosely with convincing airs of authority. But, projecting humanoid qualities does not make those machines smart, trustworthy, or capable of distinguishing fact from fiction. Trained on flotsam scraped from the internet, AI is limited by a classic “garbage in-garbage out” problem, its Achilles’ heel. Rather than solving ethical dilemmas, military AI systems are likely to multiply them, as has been occurring with the deployment of autonomous drones that cannot reliably distinguish rifles from rakes, or military vehicles from family cars…. Indeed, the Pentagon’s oft-echoed claim that military artificial intelligence is designed to adhere to accepted ethical standards is absurd, as exemplified by the live-streamed mass murder of Palestinians by Israeli forces, which has been enabled by dehumanizing AI programs that a majority of Israelis applaud. AI-human platforms sold to Israel by Palantir, Microsoft, Amazon Web Services, Dell, and Oracle are programmed to enable war crimes and genocide.

The role of the military in developing most of the advanced technologies that have worked their way into modern society still remains beneath the threshold of public awareness. But in the current environment characterized by the unholy alliance between corporate and government power, there no longer seems to be an ethical counterweight to unleashing a Pandora’s box of seemingly out-of-control AI technologies for less than noble purposes.

That the AI conundrum has appeared in the midst of a burgeoning world polycrisis seems to point toward a larger-than-life existential crisis for humanity that’s been ominously predicted and portrayed in science fiction movies, literature, and popular culture for decades. Arguably, these were not just films for speculative entertainment but in current circumstances can be viewed as warnings from our collective unconscious that have largely gone unheeded. As we continue to be force-fed AI, the voting public needs to find a way to push back against this onslaught on both personal autonomy and the democratic process.

No one had the opportunity to vote on whether we want to live in a quasi-dystopian technocratic world where human control and agency is constantly being eroded. And now, of course, AI itself is upon us in full force, increasingly weaponized not only against nation-states but also against ordinary citizens. As Albert Einstein warned, “It has become appallingly obvious that our technology has exceeded our humanity.” In a troubling ironic twist, we know that Einstein played a strong role in developing the technology for nuclear weapons. And yet somehow, like J. Robert Oppenheimer, he eventually seemed to understand the deeper implications of what he helped to unleash.

Can we say the same about today’s AI CEOs and other self-appointed experts as they gleefully unleash this powerful force while at the same time casually proclaiming that they don’t really know if AI and AGI might actually spell the end of humanity and Planet Earth itself?



The Adviser Magazine

The first and only national digital and print magazine that connects individuals, families, and businesses to Fee-Only financial advisers, accountants, attorneys and college guidance counselors.

© Copyright 2024 All Rights Reserved
See articles for original source and related links to external sites.
