While AI can benefit healthcare in a number of ways – improving patient outcomes, increasing efficiency, enhancing privacy and security – it can have pitfalls if misused. Iliana Peters, Shareholder at Polsinelli, and Chris Arnold, FairWarning’s VP of Product Management & Engineering, discussed potential privacy and security issues of AI in healthcare on a recent webinar, along with recommendations for healthcare professionals considering or implementing AI-enabled technology.
Here are five considerations for adopting AI, and ways that privacy and security experts can become involved in how AI is being used across the organization.
#1: Access Controls
The nature of machine learning is that it needs large amounts of data to learn accurately and effectively, said Iliana. As a result, you end up with large data pools that increase vulnerability.
That makes access controls one particular area of concern, Chris said. Healthcare organizations should ask themselves:
- Who can see that data?
- Who has permission to access the data?
- Who has permission to make changes to the rules/algorithms/models the machine is using to learn?
And it’s not just who, but what, added Iliana. There may be other applications, systems, or enterprises accessing your data. It’s essential to understand all the people and entities that have access to your data, and ensure controls are appropriate for the level of access necessary to each.
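One way to make those questions concrete is a least-privilege role map that covers both people and non-human service accounts. The sketch below is purely illustrative (the roles and permission names are hypothetical, not from the webinar); it shows a deny-by-default check that distinguishes seeing data from changing the models that learn from it.

```python
from enum import Flag, auto

class Permission(Flag):
    NONE = 0
    READ_DATA = auto()     # view records in the training pool
    WRITE_DATA = auto()    # add or correct records
    MODIFY_MODEL = auto()  # change the rules/algorithms/models

# Hypothetical role map. Note the service account: access control
# covers "what" is touching the data, not just "who".
ROLE_PERMISSIONS = {
    "clinician":         Permission.READ_DATA,
    "data_engineer":     Permission.READ_DATA | Permission.WRITE_DATA,
    "ml_engineer":       Permission.READ_DATA | Permission.MODIFY_MODEL,
    "analytics_service": Permission.READ_DATA,
}

def is_allowed(role: str, needed: Permission) -> bool:
    """Deny by default: unknown roles get no access."""
    granted = ROLE_PERMISSIONS.get(role, Permission.NONE)
    return needed in granted

print(is_allowed("clinician", Permission.MODIFY_MODEL))    # False
print(is_allowed("ml_engineer", Permission.MODIFY_MODEL))  # True
```

The useful habit is the audit it forces: every entry in the map is an answer to "who (or what) has access, and is that access the minimum its function requires?"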
#2: Who Built Your Robot?
Next, said Chris, you should ask yourself: ‘Who built that machine learning system?’
- Do you know who built the algorithm or model you’re using?
- How can you trust them?
- What can you put in place to make sure the people giving you this information are doing so securely and with the best intentions?
Anyone deploying AI in healthcare should take care to avoid bias and to avoid exacerbating disparities. Learn the goals and motivations of those who programmed the machine, understand the data it’s drawing from, and ask questions about any potential gaps that could compromise the outcome.
#3: Data Integrity
As highlighted by the AMA’s recent AI policy recommendations, “garbage in, garbage out” applies just as much to AI as it does to everything else. If your database only includes information from a cohort of a million men, then your clinical decision support for women and children may be weak. It’s important to consider these built-in biases, Iliana emphasized.
“You need robust data that is as bias-free as possible, and you need robust programming that is as bias-free as possible,” Iliana said. “That ensures a good outcome.”
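A simple first step toward that goal is to compare a training cohort’s demographic mix against the population the model will serve before training begins. The sketch below is a minimal, hypothetical example (the field name, threshold, and expected shares are assumptions, not anything prescribed in the article); it flags any group whose share of the data deviates from expectations by more than 10 percentage points.

```python
from collections import Counter

def cohort_imbalance(records, field, expected_shares, tolerance=0.10):
    """Flag demographic groups whose share of the dataset deviates
    from the expected population share by more than `tolerance`.
    Returns {group: actual_share} for every flagged group."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    flags = {}
    for group, expected in expected_shares.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            flags[group] = round(actual, 2)
    return flags

# Toy cohort dominated by one group -- the "million men" problem in miniature.
records = [{"sex": "M"}] * 90 + [{"sex": "F"}] * 10
print(cohort_imbalance(records, "sex", {"M": 0.5, "F": 0.5}))
# {'M': 0.9, 'F': 0.1}
```

A check like this doesn’t make the data bias-free, but it surfaces the gaps early enough to ask the questions Iliana recommends before the model ships.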
#4: Provider Security
It’s equally important to assess the security posture of any provider of AI-enabled technology, said Iliana.
“Is security baked in?” Iliana added. “You don’t want to have to be asking questions later on about how the data of any particular technology you’re using is being secured by that technology.”
- Are the people who developed your technology reputable?
- Can you rely on them and talk with them about the security controls that they have built into that technology to ensure your data is protected?
Chris added that evidence of a security-rich DNA can come in the form of a SOC 2 Type 2 report, HITRUST certification, ISO/IEC 27001 compliance, and more. What security regulations is the provider following? And what is its security posture? Referenceable customers and past successes can also help demonstrate the partner’s commitment to security.
#5: Board Communications
Not only can you use your knowledge to discuss the benefits of AI with your board, but also to highlight ways to better protect the privacy and security of patient data. All purchases should go through the regular IT requisition process, regardless of who’s championing the purchase. This helps ensure all relevant parties understand the risks involved and can put plans in place to mitigate them.
And make sure your board understands that not all technology is suitable for all uses.
“These types of tools may not be designed for clinical decision support – they may just be designed for research,” Iliana explained.