
Health Entities Should Vet Risks of ChatGPT Use

AI Tools Aid Productivity for Clinicians But May Pose Patient Data Risks

Clinicians should think twice about using artificial intelligence tools as productivity boosters, healthcare attorneys warned after a Florida doctor publicized on TikTok how he had used ChatGPT to write a letter to an insurer arguing for patient coverage.


Palm Beach-based rheumatologist Dr. Clifford Stermer showed on the social media platform how he had asked ChatGPT to write a letter to UnitedHealthcare asking it to approve a costly anti-inflammatory for a pregnant patient.

"Save time. Save effort. Use these programs, ChatGPT, to help out in your medical practice," he told the camera after demonstrating a prompt for the tool to reference a study concluding that the prescription was an effective treatment for pregnant patients with Crohn's disease.

Stermer did not respond to Information Security Media Group's request for additional details about the use of ChatGPT in his practice or about potential data security and privacy considerations.

Privacy experts interviewed by ISMG did not say Stermer's use of ChatGPT violated HIPAA or any other privacy or security regulations.

But the consensus advice is that healthcare sector entities must carefully vet the use of ChatGPT or similar AI-enabled tools for potential patient data security and privacy risks. Technology such as ChatGPT presents tempting opportunities for overburdened clinicians and other staff to boost productivity and ease mundane tasks.

"This is a change to the environment that requires careful and thoughtful attention to identify appropriate risks and implement appropriate mitigation strategies," says privacy attorney Kirk Nahra of the law firm WilmerHale, speaking about artificial intelligence tools in the clinic.

"This is a good reason why security is so hard - the threats change constantly and require virtually nonstop diligence to stay on top of changing risks."

Entities must be careful in their implementations of promising new AI tech tools, warns technology attorney Steven Teppler, chair of the cybersecurity and privacy practice of law firm Mandelbaum Barrett PC.

"Right now, the chief defense is increased diligence and oversight," Teppler says. "It appears that, from a regulatory perspective, ChatGPT capability is now in the wild."

Besides an alert this week from the U.S. Department of Health and Human Services' Health Sector Cybersecurity Coordination Center warning healthcare entities that hackers are exploiting ChatGPT to create malware and convincing phishing scams, other government agencies have yet to announce public guidance.

While HHS' Office for Civil Rights has not issued formal guidance on ChatGPT or similar AI tools, the agency said in a statement to ISMG on Thursday, "HIPAA regulated entities should determine the potential risks and vulnerabilities to electronic protected health information before adding any new technology into their organization."*

"Until we have some detective capability, it will present a threat that must be addressed by human attention," Teppler says about the potential risks involving ChatGPT and similar emerging tools in healthcare.

The Good and the Bad

Most, if not all, technologies "can be used for good or evil, and ChatGPT is no different," says Jon Moore, chief risk officer at privacy and security consultancy Clearwater.

Healthcare organizations should have a policy in place preventing the use of tools such as ChatGPT without prior approval or, at a minimum, not allowing the entry of any electronic protected health information or other confidential information into them, Moore says.

"If an organization deems the risk of a breach still too high, it might also elect to block access to the sites so employees are unable to reach them at all from their work environment."

Besides potential HIPAA and related compliance issues, the use of emerging AI tools without proper diligence can present additional concerns, such as software quality, coding bias and other problems.

"Without testing, true peer review and other neutral evaluation tools, implementation should not be in a monetization prioritized 'release first and fix later' typical tech product/service introduction," Teppler says.

"If things go wrong, and AI is to blame, who bears liability?"

* Update Jan. 19, 2023, UTC 13:24: Adds statement from HHS OCR.


About the Author

Marianne Kolbasuk McGee

Executive Editor, HealthcareInfoSecurity, ISMG

McGee is executive editor of Information Security Media Group's HealthcareInfoSecurity.com media site. She has about 30 years of IT journalism experience, with a focus on healthcare information technology issues for more than 15 years. Before joining ISMG in 2012, she was a reporter at InformationWeek magazine and news site and played a lead role in the launch of InformationWeek's healthcare IT media site.




