Can AI Help DOGE Slash Government Budgets?

Can AI help DOGE slash government budgets? It’s complex.

Author: James O’Donnell

No tech leader before has played the role in a new presidential administration that Elon Musk is playing now. Under his leadership, DOGE has entered offices in a half-dozen agencies and counting, begun building AI models for government data, accessed various payment systems, had its access to the Treasury halted by a federal judge, and sparked lawsuits questioning the legality of the group’s activities.

The stated goal of DOGE’s actions, per a statement from a White House spokesperson to the New York Times on Thursday, is “slashing waste, fraud, and abuse.”

As I point out in my story published Friday, these three terms mean very different things in the world of federal budgets, from errors the government makes when spending money to nebulous spending that’s legal and approved but disliked by someone in power.

Many of the new administration’s loudest and most sweeping actions—like Musk’s promise to end the entirety of USAID’s varied activities or Trump’s severe cuts to scientific funding from the National Institutes of Health—might be said to target the latter category. If DOGE feeds government data to large language models, it might easily find spending associated with DEI or other initiatives the administration considers wasteful as it pushes for $2 trillion in cuts, nearly a third of the federal budget.

But the fact that DOGE aides are reportedly working in the offices of Medicaid and even Medicare—where budget cuts have been politically untenable for decades—suggests the task force is also driven by evidence published by the Government Accountability Office. The GAO’s reports also give a clue into what DOGE might be hoping AI can accomplish.

Here’s what the reports reveal: Six federal programs account for 85% of what the GAO calls improper payments by the government, or about $200 billion per year, and Medicare and Medicaid top the list. These make up small fractions of overall spending but nearly 14% of the federal deficit. Estimates of fraud, defined as cases in which a court found that someone willfully misrepresented something for financial benefit, run between $233 billion and $521 billion annually.

So where is fraud happening, and could AI models fix it, as DOGE staffers hope? To answer that, I spoke with Jetson Leder-Luis, an economist at Boston University who researches fraudulent federal payments in health care and how algorithms might help stop them.

“By dollar value [of enforcement], most health-care fraud is committed by pharmaceutical companies,” he says.

Often those companies promote drugs for uses that are not approved, called “off-label promotion,” which is deemed fraud when Medicare or Medicaid pay the bill. Other types of fraud include “upcoding,” where a provider sends a bill for a more expensive service than was given, and medical-necessity fraud, where patients receive services that they’re not qualified for or didn’t need. There’s also substandard care, where companies take money but don’t provide adequate services.

The way the government currently handles fraud is referred to as “pay and chase”: questionable payments go out, and investigators try to track them down after the fact. The more effective approach, as advocated by Leder-Luis and others, is to look for patterns and stop fraudulent payments before they occur.

This is where AI comes in. The idea is to use predictive models to find providers that show the marks of questionable payment. “You want to look for providers who make a lot more money than everyone else, or providers who bill a specialty code that nobody else bills,” Leder-Luis says, naming just two of many anomalies the models might look for. In a 2024 study by Leder-Luis and colleagues, machine-learning models achieved an eightfold improvement over random selection in identifying suspicious hospitals.
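The two anomalies Leder-Luis names can be sketched in code. The following is a minimal, hypothetical illustration, not any real CMS or DOGE system: it flags providers whose billing totals sit far above their peers (a simple z-score stand-in for the machine-learning models the 2024 study actually used) and providers who bill a specialty code that almost no one else bills. All names, thresholds, and data structures here are illustrative assumptions.

```python
# Toy sketch of two anomaly checks described in the text (illustrative only):
#   1) billing totals far above peers (z-score proxy for a predictive model)
#   2) specialty codes billed by almost no other provider
from collections import Counter
from statistics import mean, stdev

def flag_providers(claims, z_threshold=3.0, rare_code_max=1):
    """claims: list of (provider_id, specialty_code, amount) tuples.
    Returns {provider_id: [reasons]} for providers worth a closer look."""
    totals = Counter()          # total billed per provider
    code_users = {}             # specialty code -> set of providers billing it
    for provider, code, amount in claims:
        totals[provider] += amount
        code_users.setdefault(code, set()).add(provider)

    amounts = list(totals.values())
    mu = mean(amounts)
    sigma = stdev(amounts) if len(amounts) > 1 else 0.0

    flagged = {}
    # Check 1: providers who make a lot more money than everyone else
    for provider, total in totals.items():
        if sigma > 0 and (total - mu) / sigma > z_threshold:
            flagged.setdefault(provider, []).append("billing far above peers")
    # Check 2: providers billing a code nobody (or almost nobody) else bills
    for code, users in code_users.items():
        if len(users) <= rare_code_max:
            for provider in users:
                flagged.setdefault(provider, []).append(f"rare code {code}")
    return flagged
```

A real preventive system would score claims before payment and use far richer features, but the design choice is the same: rank providers by how unusual their billing looks relative to the population, then prioritize human review.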

The government does use some algorithms to do this already, but they’re vastly underutilized and miss clear-cut fraud cases, Leder-Luis says. Switching to a preventive model requires more than just a technological shift. Health-care fraud, like other fraud, is investigated by law enforcement under the current “pay and chase” paradigm. “A lot of the types of things that I’m suggesting require you to think more like a data scientist than like a cop,” Leder-Luis says.

One caveat is procedural. Building AI models, testing them, and deploying them safely in different government agencies is a massive feat, made even more complex by the sensitive nature of health data.

Critics of Musk, like the tech and democracy group Tech Policy Press, argue that his zeal for government AI discards established procedures and is based on a false idea “that the goal of bureaucracy is merely what it produces (services, information, governance) and can be isolated from the process through which democracy achieves those ends: debate, deliberation, and consensus.”

Jennifer Pahlka, who served as US deputy chief technology officer under President Barack Obama, argued in a recent op-ed in the New York Times that ineffective procedures have held the US government back from adopting useful tech. Still, she warns, abandoning nearly all procedure would be an overcorrection.

Democrats’ goal “must be a muscular, lean, effective administrative state that works for Americans,” she wrote. “Mr. Musk’s recklessness will not get us there, but neither will the excessive caution and addiction to procedure that Democrats exhibited under President Joe Biden’s leadership.”

The other caveat is this: Unless DOGE articulates where and how it’s focusing its efforts, our insight into its intentions is limited. How much is Musk identifying evidence-based opportunities to reduce fraud, versus just slashing what he considers “woke” spending in an effort to drastically reduce the size of the government? It’s not clear DOGE makes a distinction.

Credits: TCA, LLC.
