The Pros and Cons of Ambient AI
A Primer for Health Professionals
The healthcare system has been contending with burnout since long before I began practicing. Symptoms of burnout include depersonalization, depression, emotional exhaustion, cynicism, and decreased motivation. It can even manifest as reduced efficiency and increased medical errors.
With an increasingly older population, reduced healthcare coverage, and a never-ending shortage of doctors and other healthcare workers, we risk an even greater incidence of burnout across the healthcare system. Some clinicians may leave medicine altogether, while others likely won’t consider the medical profession in the first place.
The promise of AI in medicine is that not only can it potentially make the lives of patients better with improved diagnostics and treatment, but it can also improve the lives of the doctors and practitioners at the bedside by reducing administrative overhead, improving cognitive efficiency, and allowing more time for interaction between the patient and provider.
Since writing progress notes and navigating the EHR have long been frustrating, time-consuming experiences, doctors and EHR companies have looked for ways to streamline the process. One of the most exciting attempts is ambient AI, which uses voice recognition technology to record and transcribe conversations, interpret and summarize the content, and create clinical documentation for review. It is not a clinical decision support tool. It does not provide recommendations for diagnoses or treatment plans.
Since 2023, when generative AI propelled ambient technology into wide use, early studies seem to validate the hypothesis that it can save time and reduce burnout. In one study, published in 2025, the Permanente Medical Group (PMG) looked at 2.5 million uses of ambient AI over 1 year. The researchers demonstrated reduced physician workload and savings of 15,700 hours (1,794 days) of documentation time compared with nonusers.
These findings seemed to be validated last year in a novel randomized controlled trial, published in NEJM AI, that assessed 238 outpatient physicians across 14 specialties. The researchers randomly assigned the physicians to one of two AI scribe applications or to a usual-care control group. The trial demonstrated a decrease in time-in-note versus the control group with one of the applications (Nabla), but no significant change with the second application (DAX).
Most recently, in February, Epic Systems, the largest EHR company in the U.S., announced its entry into the ambient AI market. Companies already in this space include Abridge, Tandem, Heidi, Doximity Scribe, Ambience Healthcare, Microsoft Dragon Ambient eXperience (DAX), and Nabla. Epic Systems plans to release an ambient charting feature that will not only create clinic notes but, based on what was “heard in the visit,” will also act as a clinical decision support tool: a menu will open to suggest orders such as lab tests and imaging.
Needless to say, this is a rapidly evolving space. Hospitals and clinics will continue to roll out ambient AI not only to save time, but also to appear technologically progressive, and to improve documentation — and therefore billing and reimbursement. Spending less time writing notes or staring at a computer is every doctor’s dream. We want to be interacting with patients or optimizing our treatments or performing procedures.
But no technology is benign. Like all medical interventions, AI carries benefits and risks that must be weighed. Side effects, both unintended and anticipated, are inevitable. So in the next section, we will discuss the pros and cons, listed in no particular order. Finally, we’ll conclude with some resources that you can explore if you plan to introduce ambient AI into your clinic.
PROS
More time for patient interaction and connection
Saves time with documentation both in clinic and during pajama time
The main reason physicians tend to despise the EHR is that it takes away from meaningful time with patients. Various studies through the years have shown that physicians, especially primary care physicians, spend up to half their day interacting with the EHR. This is in addition to seeing patients and performing other administrative tasks.
By reducing time on documentation, ambient AI presumably lets clinicians spend more time with their patients. They can also spend less time at work and reduce “pajama time” (a new vocabulary word: time spent on the EHR between 5:30 PM and 7 AM).
May help reduce physician burnout
It is important to point out that no studies have definitively proven that ambient AI reduces physician burnout. So far the evidence is correlative and theoretical. In one study assessing whether ambient AI can reduce the burden of clinical documentation, the researchers allowed for open feedback from users. The authors note that “ambient-scribing generated notes did not eliminate the burden of clinical documentation.” However, they reported that ambient AI did decrease mental effort by recording difficult-to-remember details and by starting the note so that the clinician did not have to write it from scratch.
Improved recall
The benefit of recording every conversation is that it captures categorical or numerical details, names, or relationships that might be missed in casual conversation. Ambient AI can help clinicians retain this information so that it can be reviewed later and appropriately documented.
CONS
Skill depreciation in listening and writing
Learning to slow down and listen to someone speaking is a lifelong practice. But just because we have more face-to-face time with a patient, that does not necessarily mean we will hear everything they have to say. Having the backup of ambient AI may just lead to blank-faced providers nodding passively, wondering what they will make for dinner, while secure in their belief that the AI will record what is pertinent and necessary.
Just as importantly, writing is a form of critical thinking. Whenever I am writing a note on a patient in the emergency department or the ICU, writing the note from scratch forces me to think about the patient’s story, the timing of symptoms, the factors at play. Most of the time I am writing my HPI while the patient is still in the hospital, so I will occasionally remember a question that I forgot to ask or a detail that I need to clarify. Writing helps me realize the gaps in my thought process. If medical students and residents are no longer writing notes, how they think about patients is bound to be impacted.
Uncertain variability between ambient AI programs
The randomized controlled trial in NEJM AI that I discussed above showed a decrease in time-in-note with one ambient AI application but not the other. More research needs to be conducted to test these various programs in different practices, with different users, on different patients. Federal and state regulations should be established to ensure that ambient AI applications achieve an acceptable standard, especially since they are not typically reviewed by the US Food and Drug Administration (FDA).
Potential increase in healthcare spending (upcoding, maximizing revenue) with loss of trust
Sooner or later, capitalism finds a way to take a positive experience and extract as much value as it can. Ambient AI applications may eventually focus not just on transcribing and interpreting the patient-physician interaction, but on re-interpreting it to bill for more services.
For example, an application may transform a preventative visit into a problem-based visit or code for a higher level of service. It may prompt the physician to ask questions in an attempt to maximize the type, number, and severity of diagnoses. Finally, as in the case of Epic incorporating clinical decision support into ambient AI, it may prompt the clinician to order tests that may be overdue or, in some cases, unreasonable.
While patients may get more face-to-face time with their physician (although this is not guaranteed), AI-driven upcoding may lead to higher cost-sharing for patients. People are already sick of surprise billing, so this might deter patients from seeking care and further erode public trust in the healthcare system.
An important caveat: not all upcoding is malicious or illegitimate. Under-documentation is common, especially when a busy clinician does not have the time or cognitive capacity to list every past condition or elucidate the full complexity of their decisions. In some cases, ambient AI-assisted upcoding may actually get a clinic the proper reimbursement it deserves for the service provided.
I highly recommend reading this policy brief about how ambient AI is fueling a coding arms race between providers and payers of medical care. It is an excellent, nuanced discussion of how ambient AI might affect reimbursement in fee-for-service models and in Medicare Advantage plans. Not only will ambient AI lead to upcoding, but payers (Medicare, Medicaid, insurance companies) might respond by downcoding, cutting base rates, and auditing more carefully. The arms race between hospitals and payers might ultimately lead to higher taxes to pay for higher risk-adjusted Medicare Advantage plans or higher premiums in fee-for-service markets.
More Work, Not Less; Less Autonomy, Not More
One of the ironies of email technology is how it seemed to create more work, not less. The same happened with the EHR, where metrics like the number of clicks to order a test, which had never existed before, suddenly became intensely interesting to clinical informaticians. Ambient AI technology may save the physician time, but physicians may be forced to see more patients to pay for the ambient AI application (prices range from $100 to $500 a month per physician). And if a for-profit hospital system or private equity firm owns the clinic, it is in their best interest to extract maximum value.
Another bane of many physicians’ existence is responding to queries from medical coders who seek to bill for higher levels of service or diagnosis codes than the physician originally intended. Administrators usually strongly “encourage” their employees to go along with the recommendations. Usually, the recommendations are unobjectionable after a conversation with the human coder. But if the coder is an unrelenting AI bot, it may become an additional source of stress.
Interoperability Challenges
Interoperability has long been a major issue in healthcare. The various devices, software, and systems exist in silos and must then be patched together. If third-party ambient AI scribes are unable to write directly into the EHR note, instead relying on their own templates and formats, this can be yet another frustrating sticking point for clinicians who battle daily with their computers. It can also lead to copy-and-paste or reformatting errors.
Potential for errors, liability, and bias
It should go without saying, given generative AI’s track record of hallucinations, that errors from ambient AI are inevitable. Legally, the onus is on the clinician to ensure that the medical documentation is accurate. If the application misinterprets what was said or invents something new in the medical record, then the physician will be liable.
Automation bias, the tendency for humans to prefer computer-generated or automated interpretations over human ones even in the face of contradictory evidence, is in play here. (Cut to an ER doctor nodding sadly as she receives the EKG of a 12-year-old patient with a sprained ankle who was emergently transferred from urgent care, the words “ACUTE STEMI” printed across the top. Why was the EKG even ordered on this patient, you ask? Glad you asked. It was probably the AI at the urgent care triage.)
Ethical issues (consent, privacy, security)
Many states have laws that require both parties to consent to recording. A formal and clear consent process must be established that allows patients the opportunity to refuse recording. It should also explain how long the recording is stored, how it will be eventually destroyed, who can access the recording and AI summary, how it will be used, and what other ways it may be used (training other generative AI).
What if the patient wants to say something off the record, such as when discussing domestic violence, drug use, or criminal activity? Protocols will have to be established to address what happens when the clinician stops recording. And if a legal issue arises later, how will that gap be interpreted in court?
Uncertain adoption beyond the clinic
Ambient AI may work well in a controlled, quiet environment like the clinic. But how will it work in the hospital or emergency department, where burnout rates are much higher? When your world sounds like beeping monitors and ventilators, mumbling or screaming patients, and ceaseless interruptions, how will ambient AI cut through the noise to save the clinician time and energy?
End of valuable training and experiential pipeline for human scribes
For a long time, it has been a rite of passage for aspiring medical students to work as scribes: witnessing the practice of medicine while learning its complex language, earning money, and providing a valuable service. Shutting down this pipeline will make it harder for future medical students to find these transformative experiences in what is supposed to be a human-centric field.
How Should We Integrate AI into Clinics and Hospitals?
Ambient AI is already being rolled out to hospitals and clinics across the country. There is no going back, despite the fact that the speed of adoption may lead to unintended consequences. Fortunately it is still relatively early, so the government and healthcare organizations have an opportunity to shape how ambient AI can be used fairly, safely, and effectively.
Here are some recommendations and ideas for healthcare leaders who want to integrate ambient AI into healthcare systems, clinics, hospitals, and government:
Create a national registry that tracks how and which brands of ambient AI are used in healthcare systems, clinics, and hospitals
Establish a standard way of evaluating ambient AI empirically, financially, and in terms of health outcomes so that we can reduce health disparities that might occur (perhaps it is more accurate with some accents versus others)
The government should ensure that Medicare claims auditing is robust enough to manage the coding arms race between healthcare systems and payers, so that patients and taxpayers aren’t subject to surprise billing, and that clinicians are aware of any upcoding or downcoding that occurs on their services
Hospitals should create committees or departments that register, evaluate, and monitor ambient AI models and tools in the system. The committees would be responsible for regulatory compliance, lifecycle governance, risk management, and quality assurance
Hospitals and their AI committees should work with AI companies to ensure that clinicians are properly trained to use the tools across various treatment settings and then re-trained for every update
Hospital systems and their informatics departments should ensure that all AI-generated clinic notes must be reviewed by the clinician before being signed
Help clinicians trust AI technology using a validated clinician survey like the Theory of Trust and Acceptance of Artificial Intelligence Technology (TrAAIT)
The University of Wisconsin has released a publicly available framework and protocols for introducing ambient AI into healthcare systems. The paper is here and the resources are here
Primary Resources
Afshar M, Resnik F, Baumann MR, Hintzke J, Lemmon K, Sullivan AG, Shah T, Stordalen A, Oberst M, Dambach J, Mrotek LA, Quinn M, Abramson K, Kleinschmidt P, Brazelton T, Twedt H, Kunstman D, Wills G, Long J, Patterson BW, Liao FJ, Rasmussen S, Burnside E, Goswami C, Gordon JE. A Novel Playbook for Pragmatic Trial Operations to Monitor and Evaluate Ambient Artificial Intelligence in Clinical Practice. NEJM AI. 2025 Sep;2(9):10.1056/aidbp2401267. doi: 10.1056/aidbp2401267. Epub 2025 Aug 28. PMID: 40959192; PMCID: PMC12435388.
Cohen IG, Ritzman J, Cahill RF. Ambient Listening—Legal and Ethical Issues. JAMA Netw Open. 2025;8(2):e2460642. doi:10.1001/jamanetworkopen.2024.60642
Dai T, Kvedar JC, Polsky D. Policy brief: ambient AI scribes and the coding arms race. npj Digit Med. 2025;8:780. doi: 10.1038/s41746-025-02272-z
Gerke S, Simon DA, Roman BR. Liability Risks of Ambient Clinical Workflows With Artificial Intelligence for Clinicians, Hospitals, and Manufacturers. JCO Oncol Pract. 2025 Aug 1:OP2401060. doi: 10.1200/OP-24-01060. Epub ahead of print. PMID: 40749149; PMCID: PMC12626409.
Goodson DA, Garcia B, Hogarth M, Tu SP. Artificial intelligence and physician burnout: A productivity paradox. Learn Health Syst. 2025 Apr 23;9(4):e70013. doi: 10.1002/lrh2.70013. PMID: 41169643; PMCID: PMC12569468.
Lukac PJ, Turner W, Vangala S, Chin AT, Khalili J, Shih YT, Sarkisian C, Cheng EM, Mafi JN. Ambient AI Scribes in Clinical Practice: A Randomized Trial. NEJM AI. 2025 Dec;2(12):10.1056/aioa2501000. doi: 10.1056/aioa2501000. Epub 2025 Nov 26. PMID: 41497288; PMCID: PMC12768499.
Nong P, Neprash HT. Unintended Consequences of Using Ambient Artificial Intelligence Scribes for Billing. JAMA Health Forum. 2026;7(1):e255771. doi:10.1001/jamahealthforum.2025.5771
Stevens AF, Stetson P. Theory of trust and acceptance of artificial intelligence technology (TrAAIT): An instrument to assess clinician trust and acceptance of artificial intelligence. J Biomed Inform. 2023 Dec;148:104550. doi: 10.1016/j.jbi.2023.104550. Epub 2023 Nov 20. Erratum in: J Biomed Inform. 2025 Aug;168:104863. doi: 10.1016/j.jbi.2025.104863. PMID: 37981107; PMCID: PMC10815802.