DOGE’s Plans to Replace Humans With AI Are Already Under Way

If you have tips about the remaking of the federal government, you can contact Matteo Wong on Signal at @matteowong.52.
A new phase of the president and the Department of Government Efficiency’s attempts to downsize and remake the civil service is under way. The idea is simple: use generative AI to automate work that was previously done by people.
The Trump administration is testing a new chatbot with 1,500 federal employees at the General Services Administration and may release it to the entire agency as soon as this Friday—meaning it could be used by more than 10,000 workers who are responsible for more than $100 billion in contracts and services. This article is based in part on conversations with several current and former GSA employees with knowledge of the technology, all of whom requested anonymity to speak about confidential information; it is also based on internal GSA documents that I reviewed, as well as the software’s code base, which is visible on GitHub.
The bot, which GSA leadership is framing as a productivity booster for federal workers, is part of a broader playbook from DOGE and its allies. Speaking about GSA’s broader plans, Thomas Shedd, a former Tesla engineer who was recently installed as the director of the Technology Transformation Services (TTS), GSA’s IT division, said at an all-hands meeting last month that the agency is pushing for an “AI-first strategy.” In the meeting, a recording of which I obtained, Shedd said that “as we decrease [the] overall size of the federal government, as you all know, there’s still a ton of programs that need to exist, which is a huge opportunity for technology and automation to come in full force.” He suggested that “coding agents” could be provided across the government—a reference to AI programs that can write and possibly deploy code in place of a human. Moreover, Shedd said, AI could “run analysis on contracts,” and software could be used to “automate” GSA’s “finance functions.”
A small technology team within GSA called 10x started developing the program during President Joe Biden’s term, and initially envisioned it not as a productivity tool but as an AI testing ground: a place to experiment with AI models for federal uses, similar to how private companies create internal bespoke AI tools. But DOGE allies have pushed to accelerate the tool’s development and deploy it as a work chatbot amid mass layoffs (tens of thousands of federal workers have resigned or been terminated since Elon Musk began his assault on the government). The chatbot’s rollout was first noted by Wired, but further details about its wider launch and the software’s previous development had not been reported prior to this story.
The program—which was briefly called “GSAi” and is now known internally as “GSA Chat” or simply “chat”—was described as a tool to draft emails, write code, “and much more!” in an email sent by Zach Whitman, GSA’s chief AI officer, to some of the software’s early users. An internal guide for federal employees notes that the GSA chatbot “will help you work more effectively and efficiently.” The bot’s interface, which I have seen, looks and acts similar to that of ChatGPT or any similar program: Users type into a prompt box, and the program responds. GSA intends to eventually roll the AI out to other government agencies, potentially under the name “AI.gov.” The system currently allows users to select from models licensed from Meta and Anthropic, and although agency staff currently can’t upload documents to the chatbot, they likely will be permitted to in the future, according to a GSA employee with knowledge of the project and the chatbot’s code repository. The program could conceivably be used to plan large-scale government projects, inform reductions in force, or query centralized repositories of federal data, the GSA worker told me.
Spokespeople for DOGE did not respond to my requests for comment, and the White House press office directed me to GSA. In response to a detailed list of questions, Will Powell, the acting press secretary for GSA, wrote in an emailed statement that “GSA is currently undertaking a review of its available IT resources, to ensure our staff can perform their mission in support of American taxpayers,” and that the agency is “conducting comprehensive testing to verify the effectiveness and reliability of all tools available to our workforce.”
At this point, it’s common to use AI for work, and GSA’s chatbot may not have a dramatic effect on the government’s operations. But it is just one small example of a much larger effort as DOGE continues to decimate the civil service. At the Department of Education, DOGE advisers have reportedly fed sensitive data on agency spending into AI programs to identify places to cut. DOGE reportedly intends to use AI to help determine whether employees across the government should keep their jobs. In another TTS meeting late last week—a recording of which I reviewed—Shedd said he expects that the division will be “at least 50 percent smaller” within weeks. (TTS houses the team that built GSA Chat.) And arguably more controversial possibilities for AI loom on the horizon: For instance, the State Department plans to use the technology to help review the social-media posts of tens of thousands of student-visa holders so that the department may revoke visas held by students who appear to support designated terror groups, according to Axios.
Rushing into a generative-AI rollout carries well-established risks. AI models exhibit all manner of biases, struggle with factual accuracy, are expensive, and have opaque inner workings; a lot can and does go wrong even when more responsible approaches to the technology are taken. GSA seemed aware of this reality when it initially started work on its chatbot last summer. It was then that 10x, the small technology team within GSA, began developing what was known as the “10x AI Sandbox.” Far from a general-purpose chatbot, the sandbox was envisioned as a secure, cost-effective environment for federal employees to explore how AI might be able to assist their work, according to the program’s code base on GitHub—for instance, by testing prompts and designing custom models. “The principle behind this thing is to show you not that AI is great for everything, to try to encourage you to stick AI into every product you might be ideating around,” a 10x engineer said in an early demo video for the sandbox, “but rather to provide a simple way to interact with these tools and to quickly prototype.”
But Donald Trump appointees pushed to quickly release the software as a chat assistant, seemingly without much regard for which applications of the technology may be feasible. AI could be a useful assistant for federal employees in specific ways, as GSA’s chatbot has been framed, but given the technology’s propensity to make up legal precedents, it also very well could not. As a recently departed GSA employee told me, “They want to cull contract data into AI to analyze it for potential fraud, which is a great goal. And also, if we could do that, we’d be doing it already.” Using AI creates “a very high risk of flagging false positives,” the employee said, “and I don’t see anything being considered to serve as a check against that.” A help page for early users of the GSA chat tool notes concerns including “hallucination”—an industry term for AI confidently presenting false information as true—“biased responses or perpetuated stereotypes,” and “privacy issues,” and instructs employees not to enter personally identifiable information or sensitive unclassified information. How any of those warnings will be enforced was not specified.
Of course, federal agencies have been experimenting with generative AI for many months. Before the November election, for instance, GSA had initiated a contract with Google to test how AI models “can enhance productivity, collaboration, and efficiency,” according to a public inventory. The Departments of Homeland Security, Health and Human Services, and Veterans Affairs, as well as numerous other federal agencies, were testing tools from OpenAI, Google, Anthropic, and elsewhere before the inauguration. Some kind of federal chatbot was probably inevitable.
But not necessarily in this form. Biden took a more cautious approach to the technology: In a landmark executive order and subsequent federal guidance, the previous administration stressed that the government’s use of AI should be subject to thorough testing, strict guardrails, and public transparency, given the technology’s obvious risks and shortcomings. Trump, on his first day in office, repealed that order, with the White House later saying that it had imposed “onerous and unnecessary government control.” Now DOGE and the Trump administration appear intent on using the entire federal government as a sandbox, and the more than 340 million Americans they serve as potential test subjects.