A top US Army general stationed in South Korea said he’s been turning to an artificial intelligence chatbot to help him think through key command and personal decisions — the latest sign that even the Pentagon’s senior leaders are experimenting with generative AI tools.

Maj. Gen. William “Hank” Taylor, commanding general of the Eighth Army, told reporters at the Association of the United States Army conference in Washington, DC, that he’s been using ChatGPT to refine how he makes choices affecting thousands of troops.

“Chat and I have become really close lately,” Taylor said during a media roundtable Monday, though he shied away from giving examples of personal use.

His remarks on ChatGPT, developed by OpenAI, were reported by Business Insider.

“I’m asking to build, trying to build models to help all of us,” Taylor was quoted as saying.

He added that he’s exploring how AI could support his decision-making processes — not in combat situations, but in managing day-to-day leadership tasks.

“As a commander, I want to make better decisions,” the general explained.

“I want to make sure that I make decisions at the right time to give me the advantage.”

Taylor, who also serves as chief of staff for the United Nations Command in South Korea, said he views the technology as a potential tool for building analytical models and training his staff to think more efficiently.

The comments mark one of the most direct acknowledgments to date of a senior American military official using a commercial chatbot to assist in leadership or operational thinking.

The US military has been pushing to integrate artificial intelligence into its operations at every level — from logistics and surveillance to battlefield tactics — as rival nations like China and Russia race to do the same.

Officials say AI-driven systems could allow faster data processing and more precise targeting, though they have also raised concerns about reliability and accountability when software takes on roles traditionally reserved for human judgment.

The Pentagon has said future conflicts could unfold at “machine speed,” requiring split-second decisions that exceed human capability.

Former Air Force Secretary Frank Kendall warned last year that rapid advances in autonomous weapons mean “response times to bring effects to bear are very short,” and that commanders who fail to adapt “won’t survive the next battlefield.”

AI has already been tested in combat simulations, including an experiment by the Air Force and the Defense Advanced Research Projects Agency in which an algorithm piloted a modified F-16 jet during a mock dogfight.

Other programs are being used to sift through satellite data, track logistics and streamline administrative paperwork for units in the field.

The Army’s Special Operations Forces have adopted similar tools to reduce what they call the “cognitive burden” on operators — using AI to draft reports, process mission data and analyze intelligence at scale.

At the same time, Pentagon officials are urging caution.

Defense leaders have warned that generative AI systems can leak sensitive information or produce faulty conclusions if the data is incomplete or manipulated.

Taylor acknowledged that one of the challenges of using the cutting-edge tech is keeping pace with the rapid evolution of AI tools — including ensuring they meet the military’s strict security requirements.

ChatGPT has drawn global scrutiny as governments and companies rush to understand its promise and its pitfalls.

While newer versions of the program are capable of complex reasoning and analysis, they’ve also been shown to produce errors and fabrications.
