The Trump administration’s sweeping efforts to overhaul the federal workforce and inject generative artificial intelligence into government operations have collided with legal, logistical, and human realities, leaving agencies across Washington in a state of flux as summer 2025 draws to a close.
Late on August 29, 2025, a federal judge in San Francisco issued an emergency order pausing nearly all of President Donald Trump’s attempts to dramatically downsize the federal workforce. As reported by multiple outlets, Judge Susan Illston’s ruling came in response to a lawsuit in which labor unions and major cities challenged the administration’s cost-cutting campaign. The temporary restraining order halts compliance with Trump’s February 2025 executive order and a subsequent memo from the Department of Government Efficiency (DOGE) and the Office of Personnel Management (OPM), which had set the stage for large-scale layoffs and reorganizations across agencies ranging from Agriculture and Energy to the Social Security Administration and the Environmental Protection Agency.
"The Court holds the President likely must request Congressional cooperation to order the changes he seeks, and thus issues a temporary restraining order to pause large-scale reductions in force in the meantime," wrote Judge Illston in her order, according to coverage by Raw Story. The restraining order, which expires in 14 days, doesn’t require agencies to rehire anyone but does force a halt to further implementation of the executive order and related layoffs for now.
The legal pushback comes amid a period of extraordinary upheaval within the federal government. Since Trump returned to office, at least 75,000 federal employees have accepted deferred resignation offers, and thousands of probationary workers have already been let go. The administration, arguing that the federal government is bloated and expensive, has pressed forward with plans for an estimated 300,000 job cuts by the end of 2025, according to MIT Technology Review. The Department of Health and Human Services, for instance, announced in March that it would lay off 10,000 workers and centralize divisions. A union representing federal workers who monitor health hazards for mineworkers said it was poised to lose 221 of its 222 employees in its Pittsburgh office. In Vermont, a local farmer missed an important planting window after disaster aid inspections were delayed by staff shortages. Meanwhile, reduced staffing at the Social Security Administration has led to longer wait times for recipients.
Yet critics argue that the administration’s approach has been too aggressive and legally questionable. Plaintiffs in the San Francisco lawsuit—including the American Federation of Government Employees, the Alliance for Retired Americans, and the cities of San Francisco, Chicago, and Baltimore—contend that the president, DOGE, and OPM have acted outside their authority and failed to seek the necessary cooperation from Congress. "They are not waiting for these planning documents to go through long processes," said Danielle Leonard, an attorney for the plaintiffs. "They're not asking for approval, and they're not waiting for it."
Government lawyers, on the other hand, maintain that the executive order and DOGE memo merely set out general principles and invite legislative engagement. "It expressly invites comments and proposals for legislative engagement as part of policies that those agencies wish to implement," argued Eric Hamilton, a deputy assistant attorney general, according to Raw Story. "It is setting out guidance." Nevertheless, Judge Illston emphasized that Congress created all the agencies included in the plaintiffs’ request and that the Constitution requires the president to act with Congressional cooperation on such sweeping changes.
Complicating matters further is the administration’s rapid embrace of generative AI technology within federal agencies. The General Services Administration and Social Security Administration have rolled out ChatGPT-like chatbots for their workers, while the Department of Veterans Affairs is using generative AI to write code. The U.S. Army has deployed CamoGPT, an AI tool designed to review documents and eliminate references to diversity, equity, and inclusion. The Department of Education has proposed using generative AI to answer questions from students and families about financial aid and loan repayment. According to MIT Technology Review, these tools are intended to automate tasks previously performed by government workers, raising the specter of even deeper job cuts.
But the technology’s readiness for such high-stakes work is hotly debated. “We’re in an insane hype cycle,” said Meg Young, a researcher at the nonprofit Data & Society, in comments reported by MIT Technology Review. She and other experts warn that generative AI remains error-prone and ill-suited for complex legal or bureaucratic tasks. For example, the General Services Administration wants to use AI to streamline procurement—the process by which the government buys goods and services. Yet lawyers may find generative AI too unreliable for contract negotiations involving millions of dollars. “If you have a chatbot generating new terms, it’s creating a lot of work and burning a lot of legal time,” Young noted. “The most time-saving thing is to just copy and paste.”
Indeed, a 2024 study found that legal research chatbots made factual errors between 17% and 33% of the time. The mistakes ranged from subtle misinterpretations to outright fabrications, such as a chatbot incorrectly claiming that the Nebraska Supreme Court overruled the U.S. Supreme Court, echoing real-world incidents in which lawyers cited nonexistent cases generated by ChatGPT. “That remains inscrutable to me,” said Faiz Surani, a co-author of the study. “Most high schoolers could tell you that’s not how the judicial system works in this country.” The study also found that AI chatbots sometimes failed to recognize inaccuracies in the prompts they received and could mistakenly cite laws that had been overturned.
Despite these challenges, some pilot programs have shown promise. In Pennsylvania, a partnership with OpenAI allowed 175 state employees to use ChatGPT for administrative tasks such as writing emails and summarizing documents, saving an average of 95 minutes per day. However, experts like Young caution that such deployments must be measured and carefully integrated into existing workflows—a far cry from the Trump administration’s accelerated rollout of tools like GSAi to 13,000 employees.
Meanwhile, the administration’s broader restructuring of the national security apparatus has led to confusion and coordination challenges. As reported by Benzinga, Trump has slashed the National Security Council staff from roughly 400 to fewer than 150 people, part of the same federal workforce cuts that have affected over 1,350 State Department employees. National security adviser Mike Waltz was removed after just three months, with his duties reassigned to Secretary of State Marco Rubio. The streamlined structure has left officials operating without clear guidance: for example, a State Department official announced an African leaders summit in May without White House confirmation, and Trump was unaware of a Pentagon pause on Ukraine weapons deliveries until the freeze became public.
“In many respects, the national security process has ceased to exist,” said David Rothkopf, an NSC historian and Trump critic, in comments reported by Benzinga. Former Secretary of State Condoleezza Rice noted that the administration’s approach “depends a lot on the president.” White House press secretary Karoline Leavitt summed up the administration’s attitude bluntly: “We don’t really care if your feelings are hurt. We just need to get a job done.”
As the federal government navigates this turbulent period, the future of both its workforce and its reliance on AI remains uncertain. The next two weeks will be pivotal: Judge Illston’s restraining order temporarily halts the administration’s downsizing plans, while agencies, workers, and policymakers grapple with the promise and peril of a leaner, more automated government.