Garance Burke: Yeah. Talking with more than a dozen engineers and academic researchers, my co-reporter Hilke Schellmann and I found that this particular AI-powered transcription tool makes things up. That can include racial commentary, sometimes even violent rhetoric, and, of course, what we're talking about here: incorrect words regarding medical diagnoses. So that obviously leads to a lot of concerns about its use, particularly in really sensitive settings like hospitals.

John: We asked OpenAI about this, and here's what they told us. They said, "We take this issue seriously and are continually working to improve the accuracy of our models, including reducing hallucinations. For Whisper, our usage policies prohibit use in certain high-stakes decision-making contexts, and our model card for open-source use includes recommendations against use in high-risk domains." Given those warnings, why do so many medical centers use this?

Garance Burke: You know, I think we're at a time when a lot of health care