Daniel Zhu

The basics of data privacy and security in AI tools

As AI tools proliferate in post-acute care, more attention is being paid to risks related to data privacy and security. As your organization begins to evaluate these tools, it’s important to understand some basics about how they gather, use and store data, and how you can evaluate whether they meet regulatory requirements such as those included in HIPAA.

There are two main aspects to consider: how the organization handles data privacy and how data works within the models themselves. These involve both regulatory and technology considerations. There are also ethical and data-bias concerns that should be addressed as these tools are developed. I’ll outline some best practices for working with vendors who develop AI tools, to help ensure your patient data stays safe and secure.

Existing regulatory policies

In post-acute care, data is mainly regulated by HIPAA, which sets out thorough, time-tested rules about what constitutes protected health information, who has access to data, and when data can be shared. These regulations apply whether data is stored on-premises or in the cloud.

HIPAA’s Safe Harbor rule defines 18 categories of data that are considered identifiable, that is, information that can be tied back to a specific person. Those pieces of data require a higher level of protection, such as encryption, to help ensure privacy. But the same rule also specifies which identifiers can be removed so the remaining data is safe to use for purposes such as medical research.
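To make this concrete, here is a minimal sketch of what Safe Harbor-style de-identification can look like in code. The field names and identifier list are hypothetical and cover only a few of the 18 categories; a real pipeline would handle all of them, including dates and small geographic units.

```python
# Minimal sketch of Safe Harbor-style de-identification.
# IDENTIFIER_FIELDS is a hypothetical, incomplete list of direct
# identifiers; the actual Safe Harbor rule covers 18 categories.
IDENTIFIER_FIELDS = {
    "name", "street_address", "phone", "email",
    "ssn", "mrn", "account_number",
}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and generalize dates to the year."""
    clean = {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}
    # Safe Harbor also requires generalizing most dates to the year.
    if "date_of_birth" in clean:
        clean["birth_year"] = clean.pop("date_of_birth")[:4]
    return clean

record = {
    "name": "Jane Doe",
    "mrn": "A123456",
    "date_of_birth": "1948-05-02",
    "mobility_score": 3,
    "recent_fall": True,
}
print(deidentify(record))
# {'mobility_score': 3, 'recent_fall': True, 'birth_year': '1948'}
```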

This de-identified data is what should be used to train artificial intelligence models and algorithms. Once the body of data is large enough, the model trains on it, learning patterns it can use to successfully complete a task. A good example we’ve talked about before is predicting resident falls. Given enough data about patients and their outcomes, including falls in their histories, an AI model can identify the patterns of events that typically occur leading up to a fall.
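To illustrate the mechanics, here is a minimal sketch of that kind of training in Python with scikit-learn. The features, values and model choice are hypothetical; production fall-risk models train on far larger and richer data sets.

```python
# Minimal sketch: training a fall-risk model on de-identified records.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one de-identified resident record (hypothetical features):
# [age, mobility_score, medication_count, prior_falls]
X = np.array([
    [70, 2, 5, 1],
    [82, 1, 9, 2],
    [65, 4, 2, 0],
    [78, 2, 7, 1],
    [60, 5, 1, 0],
    [85, 1, 8, 3],
])
y = np.array([1, 1, 0, 1, 0, 1])  # 1 = a fall occurred

model = LogisticRegression().fit(X, y)
# The fitted model retains the learned pattern (its coefficients),
# not the training records themselves.
print(model.coef_)
```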

By sifting through millions of records to find the combinations of different factors that can lead to a fall, AI builds what we commonly refer to as a “model.” And once it learns the pattern, it no longer stores the data it sifted through. This is important because there’s a common misconception that AI tools hold on to everyone’s data, putting it at risk for being compromised.

But that’s not how AI works. Once it learns from a de-identified set of information, it does not retain that information. Once the model has been tested and validated, a clinical team can feed it new information about a specific patient. Then, using the patterns it has learned, the model calculates a score for that patient. That score is what provides value and insight to the end user.
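Continuing the hypothetical sketch above, scoring a new resident uses only the learned pattern; the original training records are no longer needed:

```python
# Score a new resident with the model trained in the earlier sketch.
new_resident = np.array([[79, 2, 6, 1]])  # same hypothetical features
risk = model.predict_proba(new_resident)[0, 1]  # probability of a fall
print(f"Fall-risk score: {risk:.2f}")
```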

In addition, most concerns about AI output ending up in the auditable clinical record are unfounded. AI tools are typically separate from your main EHR software, which means AI-generated scores or recommendations are not considered clinical documentation unless a caregiver validates them.

Potential data bias

The patterns that AI tools recognize raise other considerations, including the need for these tools to be non-discriminatory and safe. Large regulatory bodies, including the FDA and HHS, are doing due diligence to make sure AI is applied to appropriate data so that anyone using AI tools gets a safe, non-discriminatory result. For now, the path to achieving that is transparency: the AI system or algorithm has to explain how it arrived at its answer from the data it was given, so the process can be checked to ensure the result is safe to use.
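As a hypothetical illustration of that kind of transparency, a simple linear model like the one sketched earlier can show which inputs pushed a score up or down; more complex models need dedicated explanation tools such as SHAP or LIME.

```python
# Continuing the earlier sketch: break the score down into
# per-feature contributions so the result can be checked.
feature_names = ["age", "mobility_score", "medication_count", "prior_falls"]
contributions = model.coef_[0] * new_resident[0]
for name, value in sorted(
    zip(feature_names, contributions), key=lambda pair: -abs(pair[1])
):
    print(f"{name}: {value:+.3f}")
```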

There’s another aspect of bias in data, one that also touches on ethics. An AI model in itself can be non-discriminatory, but most of the data a model works with is captured in one way or another by a person. That can introduce some degree of bias, because patterns that occur in society can be reflected in the data and then reinforced by models trained on it.

In the future, data bias will likely be reduced as less data is entered by hand and more is added to EHRs automatically from wearable devices, video and voice recordings. But for now, the people who develop AI systems need to recognize that certain aspects of the data can produce discriminatory results.

What to consider when adopting AI tools

There’s still some hesitation in post-acute care about using AI technology, especially when it comes to data privacy and security. But the journey to safe adoption of these tools begins with awareness: understanding how the technology works, which de-identified data is safe to share, and how privacy is protected.

It’s important to remember these tools can help make providers more effective and improve quality of life for a lot of patients. Being aware of both the risks and benefits and asking the right questions when you’re considering new technology are places to start. Find out whether data has been sourced ethically and biases have been explored or disclosed. Understand the existing regulations for protecting data. Ask whether the vendor developing the tool is compliant with the FDA’s clinical decision support guidelines, as well as HIPAA guidelines for data storage and sharing.

Every organization has its own level of risk tolerance in terms of data sharing. But in general, regulatory bodies are doing a good job in mitigating those risks and holding healthcare organizations accountable. Keep up with new developments and ask the right questions to help your organization use these powerful tools most effectively.

Request a demo today for a closer look at MatrixCare.

MatrixCare

MatrixCare provides an extensive range of software solutions and services purpose-built for out-of-hospital care settings. As the multiyear winner of the Best in KLAS award for Long-Term Care Software and Home Health and Hospice EMR, MatrixCare is trusted by thousands of facility-based and home-based care organizations to improve provider efficiencies and promote a better quality of life for the people they serve. As an industry leader in interoperability, MatrixCare helps providers connect and collaborate across the care continuum to optimize outcomes and successfully manage risk in out-of-hospital care delivery.

MatrixCare is a wholly owned subsidiary of ResMed (NYSE: RMD, ASX: RMD). To learn more, visit matrixcare.com and follow @MatrixCare on X.
