With the advent of generative AI models like Med-PaLM and ChatGPT, providers can now type complex medical questions into a chat box and receive sophisticated (and, one hopes, accurate) answers. This capability surpasses previous AI applications in its potential to serve patients, but also in its potential to run afoul of legal regimes such as corporate practice of medicine (CPOM) rules, the False Claims Act (FCA), and FDA regulations. These concerns, on top of the risk that a generative AI model will fabricate answers (known as “hallucinations”), mean that providers should proceed with extreme caution before integrating generative AI tools into their practices.
Read the full article by Matthew Mousley on the Wharton Healthcare Quarterly website.