Is Elon Musk planning to use artificial intelligence to manage the US government? That appears to be his strategy, but experts argue it’s a “very bad idea.”
Musk has already laid off tens of thousands of federal government employees through his Department of Government Efficiency (DOGE). Reports suggest he’s requiring the remaining workers to send weekly emails with five bullet points summarizing their accomplishments. This would flood DOGE with hundreds of thousands of emails, likely prompting Musk to rely on artificial intelligence to process the responses and help decide who should keep their jobs. Part of the plan reportedly involves replacing many government workers with AI systems outright.
The specifics of these AI systems, including their design and functionality, remain unclear—something Democrats in Congress are demanding to know. Experts warn that deploying AI in the federal government without rigorous testing and validation could lead to disastrous outcomes.
“To use AI tools responsibly, they need to be designed with a specific purpose in mind. They need to be tested and validated. It’s not clear whether any of that is being done here,” says Cary Coglianese, a professor of law and political science at the University of Pennsylvania.
Coglianese expressed skepticism about using AI to make decisions about job terminations, citing the potential for errors, biases, and other problems. “There is a very real potential for mistakes to be made, for the AI to be biased, and for other potential issues to arise,” he adds.
“It’s a very bad idea. We don’t know anything about how an AI would make such decisions—including how it was trained, the underlying algorithms, or the data it’s using—or why we should trust it,” says Shobita Parthasarathy, a professor of public policy at the University of Michigan.
These concerns don’t seem to deter the current government, especially with Musk—a billionaire entrepreneur and close adviser to President Donald Trump—leading these initiatives.
For example, the US Department of State is reportedly planning to use AI to scan the social media accounts of foreign nationals to identify potential Hamas supporters and revoke their visas. However, the government has not been transparent about how such systems operate.
Undetected Harms
“The Trump administration is determined to pursue AI at all costs, but I’d like to see AI used fairly, justly, and equitably,” says Hilke Schellmann, a journalism professor at New York University and an AI expert. “There could be a lot of undetected harms.”
AI experts warn that government use of AI can go wrong in many ways, emphasizing the need for careful and conscientious adoption. Coglianese points out that governments worldwide, including in the Netherlands and the United Kingdom, have encountered issues with poorly implemented AI systems that make errors or exhibit bias, leading to wrongful denial of welfare benefits, for instance.
In the US, Michigan faced problems with an AI system designed to detect unemployment fraud, which incorrectly flagged thousands of cases. Many affected individuals were harshly penalized, accused of fraud, arrested, or even forced into bankruptcy. After five years, the state admitted the system was flawed and eventually refunded $21 million to those wrongly accused.
“Most of the time, the officials purchasing and deploying these technologies know little about how they work, their biases, and their limitations,” says Parthasarathy. “Because low-income and marginalized communities often interact most with government social services—such as unemployment benefits, foster care, and law enforcement—they tend to be most impacted by problematic AI.”
AI has also caused issues in courts when used to determine parole eligibility and in police departments when predicting crime hotspots. Schellmann notes that police AI systems are typically trained on historical data, which can lead to over-policing in already heavily policed areas, particularly communities of color.
AI Does Not Understand Anything
One major challenge with replacing federal workers with AI is the vast array of specialized roles in government. For example, an IT professional in the Department of Justice may have a very different role from one in the Department of Agriculture, even with the same job title. An AI system would need to be highly sophisticated and extensively trained to perform even passably in place of these human workers.
“You can’t just randomly cut people’s jobs and replace them with any AI,” says Coglianese. “The tasks these workers perform are often highly specialized and specific.”
Schellmann suggests that AI could handle predictable or repetitive tasks but shouldn’t fully replace human workers. While it’s theoretically possible to develop AI tools for diverse roles over many years, this would be an incredibly challenging task—and not what the government appears to be doing currently.
“These workers have real expertise and a nuanced understanding of the issues, which AI does not. AI doesn’t actually ‘understand’ anything,” says Parthasarathy. “It uses computational methods to identify patterns based on historical data. This limits its utility and risks reinforcing historical biases.”
The Biden administration issued an executive order in 2023 focused on the responsible use of AI in government, including testing and verification. However, the Trump administration rescinded this order in January 2025. Schellmann warns this makes it less likely that AI will be used responsibly or that researchers will be able to understand how it is being applied.
That said, if developed responsibly, AI can be highly beneficial. It can automate repetitive tasks, allowing workers to focus on more critical issues, or assist in solving complex problems. However, it requires time and care to be deployed correctly.
“This isn’t to say we can’t use AI tools wisely,” says Coglianese. “But governments go astray when they rush into implementation without proper public input and thorough validation of how the algorithms actually work.”