California Requires Disclaimers for AI-Generated Patient Communications
California has passed a new AI law, Assembly Bill No. 3030, which establishes disclaimer requirements for healthcare providers that send patients unvetted messages generated by artificial intelligence. AB 3030 takes effect January 1, 2025. Under the new law, when a covered provider uses AI to generate a patient communication concerning a patient's clinical information, that communication must include a disclaimer indicating that it was generated by AI. Read the full Alert on the Duane Morris website.
What Should GenAI Not Do in Healthcare?
With the advent of generative AI models like Med-PaLM and ChatGPT, providers can now type complex medical questions into a chat box and receive sophisticated (and hopefully accurate) answers. This capability surpasses previous AI applications in its potential to serve patients, but also in its potential to run afoul of laws such as corporate practice of medicine (CPOM) rules, the False Claims Act (FCA), and FDA regulations. These concerns, on top of the risk that a generative AI model will fabricate answers (known as "hallucinations"), mean that providers should proceed with extreme caution before incorporating generative AI tools into their practices.
Read the full article by Matthew Mousley on the Wharton Healthcare Quarterly website.
AI-Related Healthcare Fraud on DOJ’s Radar
Artificial intelligence (AI) can enhance efficiency in healthcare delivery in many ways, one of which is using algorithms to read medical records, thereby helping providers better understand their patients and the treatments that may be available. Increasingly, electronic medical record (EMR) software companies are using AI to boost their products, offering hospitals, healthcare facilities, and physicians powerful tools that can enhance their operational and treatment decision-making.