AI Is Billing Your Patients. Who Holds the Bag? – The Rise of Ambient Billing and Coding Tools (and the Legal Minefield Your Practice Is Walking Into)

Your AI coding tool just “found” $1.2 million in missed revenue. Congratulations. You’ve also just handed the Department of Justice a roadmap to a False Claims Act investigation.

A new generation of AI-powered billing and coding tools is infiltrating revenue cycle management departments across the country. These tools crawl electronic medical records, read clinical notes, and generate billing code suggestions in seconds. They promise faster billing and coding, more accurate documentation, elimination of undercoding, capture of missed diagnoses, and higher reimbursement.

Vendors call it “revenue optimization.”

The DOJ might call it something else.

In FY 2025, the DOJ recovered a record $6.8 billion in False Claims Act settlements, $5.7 billion of it from healthcare. The agency has explicitly identified AI-enabled billing processes as an emerging enforcement priority. And in January 2026, Kaiser Permanente paid $556 million—the largest Medicare Advantage FCA settlement in history—for doing exactly what AI billing and coding tools are designed to do: mining charts for diagnoses that boost risk scores.

The question isn’t whether your practice should use AI billing tools. It’s whether your practice has implemented a legal framework to ensure compliance and defensibility, and whether it is prepared for the legal consequences if it is using them wrong.

What Are Ambient Billing and Coding Tools?

Ambient billing and coding tools are AI platforms that ingest data from electronic medical records—structured fields, unstructured clinical notes, lab results, problem lists, medication records—and generate recommended CPT, ICD-10-CM, and HCPCS codes for billing purposes. Some operate in the background of the EHR. Others function as standalone platforms that process completed charts.

Unlike ambient scribes, which record and transcribe clinical encounters in real time, ambient billing and coding tools sit on the back end of the revenue cycle. They analyze what was documented, recommend billing/coding, and sometimes tell your coders what they “missed”. The pitch is compelling: faster coding, fewer missed charges, higher capture rates, less administrative burden.

The reality is more complicated.

The Four Risks Your Vendor Won’t Tell You About

  1. The One-Way Review Trap

This is the single most dangerous legal exposure created by AI billing tools, and many of them are engineered to trigger it.

The DOJ has aggressively prosecuted what it calls “one-way chart reviews”: programs designed exclusively to find missed codes (revenue additions) without simultaneously identifying unsupported codes (revenue that should be returned). The theory is simple: if you have the technology to find money the government owes you, you have the technology to find money you owe the government. Choosing to look in only one direction can be alleged as fraud.

AI coding tools can be designed as one-way revenue engines that scan charts for undercoding opportunities. They do not typically scan for overcoding, unsupported diagnoses, or codes that should be deleted. Analogies exist in the DOJ’s major Medicare Advantage enforcement actions of the last five years:

  • Kaiser Permanente: $556 million (Jan. 2026)—chart mining to add diagnoses that inflated risk scores
  • DaVita/Healthcare Partners: $270 million (2018)—one-way chart reviews, downstream provider held liable
  • Cigna: $172 million (2023)—one-sided reviews, failure to delete unsupported codes
  • Independent Health/DxID: $100 million (2024)—vendor on contingency fee captured invalid codes
  • UCHealth: $23 million (2024)—automated billing rule systematically upcoded EM claims
  2. The “Human in the Loop” Theory

Every vendor will tell you their tool “keeps the human in the loop.” That’s a marketing claim, not a legal defense.

Under the FCA, reckless disregard for the truth satisfies the scienter standard for fraud. If your coder receives 200 AI suggestions per shift and accepts 85% of them with an average review time of 0.5 seconds per chart, the government might argue that no meaningful human review occurred. The “human in the loop” was a rubber stamp.

OIG’s February 2026 Medicare Advantage Industry Compliance Program Guidance explicitly calls out “querying physicians via electronic medical record platforms, including prompts generated by artificial intelligence algorithms, to add risk-adjusting diagnoses” as a potentially abusive practice. The guidance makes clear that human-in-the-loop must be more than a formality—it must be designed to prevent automation bias and ensure clinical linkage.

The practical reality: if your coders can’t explain why they accepted a specific AI suggestion, citing the specific clinical documentation that supports it, your human-in-the-loop defense will collapse in litigation.
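The rubber-stamping problem described above is measurable from the tool’s own audit log. A minimal sketch of an anti-rubber-stamping monitor follows, assuming illustrative field names; the 95% acceptance and 30-second cutoffs are placeholders for thresholds a compliance team would set, not regulatory standards:

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class ReviewEvent:
    """One AI suggestion reviewed by a human coder (fields are illustrative)."""
    coder_id: str
    accepted: bool
    review_seconds: float

def flag_rubber_stamping(events, max_accept_rate=0.95, min_median_seconds=30.0):
    """Flag coders whose patterns suggest no meaningful human review:
    near-universal acceptance combined with implausibly fast review times."""
    by_coder = {}
    for e in events:
        by_coder.setdefault(e.coder_id, []).append(e)
    flagged = {}
    for coder, evts in by_coder.items():
        accept_rate = sum(e.accepted for e in evts) / len(evts)
        med_time = median(e.review_seconds for e in evts)
        if accept_rate > max_accept_rate and med_time < min_median_seconds:
            flagged[coder] = {"accept_rate": round(accept_rate, 2),
                              "median_seconds": med_time}
    return flagged
```

A report like this, run monthly and acted on, is the kind of contemporaneous evidence that a human-in-the-loop process was real rather than nominal.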

  3. The Care Plan Disconnect

The DOJ’s newest enforcement theory in Medicare Advantage cases goes beyond code accuracy. It asks: did the diagnosis actually change how the patient was treated?

This is the “individualized care plan” theory. Applied to AI, if an AI tool crawls a chart, identifies a reference to severe malnutrition buried in a six-month-old lab result, and prompts the coder to add the corresponding HCC code — but the treating physician’s actual care plan for that visit focused exclusively on chemotherapy and never addressed the malnutrition — the diagnosis is arguably a billing artifact, not a clinical reality. Whistleblowers will try to call this fraud.

Obviously, this creates particularized risks if the billing/coding AI “looks backwards” at older visits. Potential re-billing of prior claims should be approached with caution.

For forward-looking encounters, CMS and OIG have been unambiguous: diagnoses should be supported by contemporaneous, face-to-face encounter documentation demonstrating that the provider actively evaluated and treated the condition (the “MEAT” standard: Monitoring, Evaluation, Assessment, Treatment). AI tools that harvest diagnoses from historical problem lists or isolated lab values, or even from brief mentions in clinical conversations without verifying current clinical management, produce exactly the kind of one-sided claims that trigger enforcement actions.
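A pre-submission gate can operationalize the MEAT check described above. The sketch below is purely illustrative: the source values and element names are hypothetical, and a real EHR integration would need to map its own documentation fields onto them:

```python
def meets_meat(suggestion):
    """Return True only if an AI-suggested diagnosis is sourced from the
    current encounter note AND at least one MEAT element (Monitoring,
    Evaluation, Assessment, Treatment) is documented for it.
    Field names are illustrative, not a vendor API."""
    MEAT = {"monitoring", "evaluation", "assessment", "treatment"}
    return (suggestion.get("source") == "current_encounter_note"
            and bool(MEAT & set(suggestion.get("documented_elements", []))))
```

Under this gate, a diagnosis harvested from a historical problem list or a stale lab value never reaches the claim, no matter how confident the model is.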

  4. The Black Box Problem

Many AI coding vendors treat their algorithms as proprietary black boxes. They’ll tell you the tool suggests a 99215. They won’t tell you why.

This creates an impossible compliance position. When an auditor asks your practice to defend a code, you need to point to specific clinical documentation and explain, in coding-guideline terms, how that documentation supports the code. “The AI suggested it” is not a fantastic defense. If you cannot audit the algorithm’s logic, you cannot defend the claims it generates.

Regulators are already addressing this AI risk in analogous situations. In the insurer context, New York’s Department of Financial Services Insurance Circular Letter No. 7 states that entities cannot rely on the proprietary nature of a third-party vendor’s algorithmic processes to justify a lack of transparency in how a decision was reached.

How The Health Law Partners Approaches AI Billing Risk

At HLP, deploying AI in the revenue cycle is not an IT decision—it is a legal and compliance decision that requires regulatory architecture before go-live.

Pre-Deployment: The Legal Foundation

Before any AI billing tool touches a patient record, HLP works with clients to:

  • Conduct a regulatory risk assessment mapping the tool’s specific functionality against FCA enforcement theories, OIG guidance, and state-specific requirements
  • Negotiate vendor contracts requiring algorithmic transparency, explainability, audit rights, and change-notification obligations, and scrutinize compensation structures tied to revenue lift or risk-score increases (which carry attendant state and federal risks)
  • Draft internal policies establishing meaningful human-in-the-loop standards, bidirectional review requirements, prohibited practices, and 60-day overpayment protocols
  • Establish pre-implementation baselines for all key billing metrics (E/M distribution, HCC capture rates, RAF scores, Modifier 25 utilization, code additions-to-deletions ratios) so that post-implementation changes can be measured and defended
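The baseline bullet above is straightforward to implement. A minimal sketch of one such metric, comparing pre- and post-deployment E/M code distributions, assuming claims are available as simple records with an illustrative em_code field:

```python
from collections import Counter

def em_distribution(claims):
    """Proportion of each E/M code level in a set of claim records."""
    counts = Counter(c["em_code"] for c in claims)
    total = sum(counts.values())
    return {code: n / total for code, n in counts.items()}

def level_shift(baseline, current):
    """Percentage-point change per E/M code between the pre-implementation
    baseline distribution and the post-deployment distribution."""
    codes = set(baseline) | set(current)
    return {c: round(100 * (current.get(c, 0) - baseline.get(c, 0)), 1)
            for c in codes}
```

A sudden migration toward 99214/99215 after go-live, with no corresponding change in patient acuity, is exactly the pattern a payor data-mining program will surface; measuring it yourself first is what makes it defensible.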

Ongoing: The Compliance Architecture

Once the tool is live, HLP helps clients build and maintain:

  • Bidirectional audit programs that track code additions and deletions monthly, with investigation triggers
  • Anti-rubber-stamping controls that monitor individual coder acceptance rates and review timestamps
  • Care plan integration checks requiring verification that every AI-suggested diagnosis is reflected in the provider’s active, individualized care plan for that specific encounter
  • Specialty-specific risk matrices addressing coding vulnerabilities unique to each practice
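The bidirectional audit bullet above implies a concrete monthly check: compare code additions against code deletions and flag one-sided patterns. A minimal sketch, with an illustrative 10:1 trigger ratio chosen purely for demonstration:

```python
def ratio_alert(additions, deletions, max_ratio=10.0):
    """Trigger an investigation when a month's code additions dwarf its
    deletions. A tool that only ever adds codes is the one-way chart
    review pattern the DOJ has prosecuted. Threshold is illustrative."""
    if deletions == 0:
        return additions > 0  # pure one-way pattern: additions, zero deletions
    return additions / deletions > max_ratio
```

The point is not the specific ratio; it is that the program looks in both directions and documents that it did.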

When Things Go Wrong: Rapid Response

Things don’t always go right on the front end. If the government, a whistleblower, an auditor, or an internal audit reveals (or alleges) that an AI tool generated unsupported claims, HLP provides:

  • Immediate legal assessment of defenses, scope, and legal obligations
  • 60-day overpayment clock management (if applicable)
  • Vendor accountability enforcement under negotiated contract provisions

The Bottom Line

AI billing and coding tools are not inherently dangerous. HLP works with numerous clients that develop and sell AI billing/coding programs, and helps them build tools with this regulatory backdrop in mind. These tools can be fantastic for practices, bolstering efficiency, compliance, and patient care, and often identifying legitimate historical undercoding.

But deploying AI billing/coding tools without regulatory infrastructure is dangerous. Vendors’ contracts often heavily disclaim and limit their liability on compliance issues (many times appropriately, sometimes in too heavy-handed a fashion). Vendor contracts typically don’t, even on paper, absorb your FCA risk. Nor could they wholly: the billing provider remains the billing provider, responsible for what it submits.

The enforcement environment has never been more aggressive. OIG’s February 2026 Medicare Advantage guidance specifically names AI-generated coding prompts as a risk adjustment abuse vector. The Kaiser settlement proves the government will pursue nine-figure recoveries. And the UCHealth case proves that automated coding logic can generate eight-figure liability.

The question is not if the government will scrutinize AI-assisted billing. It is when, and whether your practice will be able to demonstrate that it deployed the technology within a defensible compliance framework.

If your practice is using, or considering, an AI billing or coding tool, contact The Health Law Partners for a confidential regulatory risk assessment. We help healthcare providers harness AI’s operational benefits while minimizing its legal risks.

* * *

Clinton Mikel, Esq. is a healthcare regulatory attorney and shareholder at The Health Law Partners, P.C. He advises healthcare providers on AI compliance, HIPAA, the False Claims Act, and other regulatory matters. Contact him at cmikel@thehlp.com or visit www.thehlp.com.