The rapid rise of artificial intelligence (AI) across sectors including elections, health care, media, and education has huge implications, many of them deeply concerning. Guardrails to protect privacy, workers, and our democracy are essential as AI use continues to grow.
In the health care arena, there needs to be more oversight across the process of developing, testing, and implementing AI technologies. Further action by states, the federal government, and Congress is necessary to ensure that both patients and providers are protected and that safe uses of AI in health care are made available to everyone who needs them.
Some of the initial uses of AI in health care have raised concerns, including the automation of insurer denials of care, the use of potentially vulnerable programs to generate doctors' notes, the rise of therapy chatbots, and the prospect of AI replacing nurses.
Within the federal government, a key venue for oversight of AI in health care will be the Department of Health and Human Services' AI Task Force, which the Biden Administration established through its comprehensive Executive Order on Artificial Intelligence. The task force must move quickly in many areas to ensure human review and oversight, strong data protections, and continuous monitoring of adverse events. Robust mitigation efforts for such events must underlie any expansion of AI in health care.
Greater transparency is also needed when AI is used, including clear and understandable disclosures to patients and providers. Further necessary guardrails include the right to request human review of any AI-driven health care decision that may affect a patient, as well as the right to have that decision appealed to and reviewed by an actual human.
Given recent high-profile data breaches in health care, sufficient privacy requirements around the development, testing, and ongoing evaluation of AI-enabled technologies will also be crucial for providers and patients. Regulators should ensure that AI tools collect only the data necessary for their task and delete it after use.
In addition, the use of AI must not compound health care disparities or contribute to racial or ethnic discrimination or bias. Databases used to train generative AI systems must reflect the patient populations those systems are intended to serve, and AI algorithms should focus on improving equity rather than merely reproducing current patterns and biases in our health care system. Because AI systems are susceptible to bias in the data they are trained on, federal agencies must exercise care in their use of training data and continuously monitor their AI systems for bias.
Steps also must be taken to ensure meaningful use standards for AI in Medicare, Medicaid, and other health care programs to protect patients, help providers and institutions use AI more effectively, and improve the opportunities for oversight and accountability. This must also include technical assistance, particularly for low-resource settings, to ensure that the uptake of AI in health care is as equitable as possible.
Finally, there must be rigorous enforcement and clear penalties for companies that have consistently violated rules and requirements on AI. This will be particularly important for Medicare, Medicaid, and other health care programs where the federal government plays a direct role. In addition, where companies are found to have knowingly concealed harms or significant potential harms, felony criminal prosecution should be available not only against the companies themselves but also against their top-level responsible corporate officers.
While the risks are great, there are also many potentially promising uses of AI in health care. But without sufficiently robust regulation and oversight, we risk putting corporate greed ahead of patient need.