How will Artificial Intelligence (AI) Impact Healthcare? Part 11

In this week’s blog, we’re continuing our discussion of the concerns and vulnerabilities that artificial intelligence (AI) could bring, specifically to healthcare, if it is not carefully and ethically implemented. In Part 10, we covered how accuracy, consistency, and reliability are minimum standards for an AI healthcare platform, and we examined the problem of bias. As mentioned last week, artificial intelligence is rapidly transforming the healthcare industry. AI-driven applications and programs are already being used to diagnose diseases, develop new treatments, and deliver care more efficiently. However, there are also several other potential risks associated with the use of AI in healthcare, such as cost, privacy & security, and ethics & liability:

  1. Cost – AI-powered applications and programs can be expensive to develop and implement; simply managing the cloud data “lake” will be a large expense, not to mention the cost of the encryption and protection protocols that must be running.

This could make them unaffordable or even unavailable for some healthcare providers, especially those in rural or underserved areas.

  2. Other Concerns – As AI programs penetrate further into the mainstream of healthcare institutions, there are other issues, beyond day-to-day operations and design, that need to be managed. Here are a few that we are scrutinizing in our development process.
  • Privacy & Security

AI systems collect and process large amounts of data, including personal medical information. This data could be vulnerable to hacking or other cyberattacks, and if it is compromised, the result could be identity theft, financial fraud, or even physical harm to patients. Beyond a direct breach, a more malicious or nefarious attack might target the data “lake” itself, compromising or corrupting the data to produce erroneous results, or modifying one or more decision algorithms. This latter type of breach is easier to protect against and to catch, so long as the platform’s design accommodates safeguards that ensure the protection, encryption, and security of both the data and the algorithms.

For example, in 2015, a cyberattack on the Anthem health insurance company resulted in the theft of the personal information of over 78 million people. This data included names, addresses, dates of birth, Social Security numbers, and healthcare identification numbers.
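
As a rough sketch of the tamper-detection safeguard described above, the snippet below shows one way to sign each data-lake record with a keyed hash and verify it before the record feeds a decision algorithm. It is a minimal illustration, not a description of any particular platform; the field names, the key handling, and the use of Python’s standard `hmac` and `hashlib` modules are assumptions made for the example.

```python
import hashlib
import hmac
import json

# Illustrative secret; in practice the key would come from a managed
# key store (e.g., a cloud KMS), never from source code.
INTEGRITY_KEY = b"replace-with-managed-secret"

def sign_record(record: dict) -> str:
    """Compute an HMAC over a canonical serialization of a data-lake record."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(INTEGRITY_KEY, canonical, hashlib.sha256).hexdigest()

def verify_record(record: dict, stored_signature: str) -> bool:
    """Return True only if the record is byte-for-byte unchanged since signing."""
    return hmac.compare_digest(sign_record(record), stored_signature)

# A tampered lab value fails verification and can be quarantined
# before it ever reaches a decision algorithm.
original = {"patient_id": "12345", "hba1c": 6.1}
signature = sign_record(original)

tampered = {"patient_id": "12345", "hba1c": 9.7}
print(verify_record(original, signature))   # True
print(verify_record(tampered, signature))   # False
```

The same idea could be extended to model files and algorithm configurations, so that a corrupted artifact is caught before it produces a result rather than after.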

  • Ethics & Liability

The use of AI in healthcare raises a number of ethical and liability concerns. For example, who should be responsible for the decisions made by AI systems? What happens if an AI system makes a mistake that harms a patient, or many patients? What safeguards can be put in place to provide additional assurance that the AI system or platform is operating within its design parameters?

As you can read in our previous blogs, AI offers incredible opportunities for increasing patient access to medical care, improving the physician-patient relationship and quality of care, increasing physician practice revenue, and lowering the cost of most forms of medical care – most notably the primary care areas, including pediatrics, OB/GYN, geriatrics, and cardiology. AI is a technology that remains in its infancy, and its true advantages are yet to be explored, especially in healthcare where, as I’ve noted, there is high inertia and resistance to change.

CONCLUSION

AI Will Revolutionize Healthcare

It is important to be aware of the potential risks and possible unintended adverse consequences of AI in healthcare. These risks need to be carefully considered before AI can be widely adopted in healthcare or in any other industry where the public can be affected.

It is also important to remember that AI is a tool, and like any tool, it can be used for good or misused. Here are some additional tips for mitigating bias in AI systems:

  • Use a large and diverse dataset to train the AI system and consistently replenish the data to keep it fresh.
  • Use a variety of methods to evaluate the AI system’s performance. Set specific metrics and continually evaluate their usefulness and credibility.
  • Be transparent about the data used to train the AI system and the methods used to evaluate its performance.
  • Get input from experts in the field to help identify and mitigate bias.
  • Set boundaries and safeguards to ensure that any “hallucination” effects or drift in the accuracy of the results are caught immediately (a sketch of this kind of monitoring follows this list).
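
To make the metric and drift points above concrete, here is a minimal sketch of subgroup monitoring in Python. The record fields (“group”, “label”, “prediction”), the baseline numbers, and the 5% tolerance are illustrative assumptions rather than prescribed standards; the idea is simply to score each subgroup separately and flag any group whose accuracy slips beyond a set bound.

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Accuracy per subgroup, e.g., per age band, care area, or clinic site."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        hits[r["group"]] += int(r["prediction"] == r["label"])
    return {g: hits[g] / totals[g] for g in totals}

def flag_drift(baseline, current, tolerance=0.05):
    """List any subgroup whose accuracy dropped more than `tolerance` from baseline."""
    return [g for g, acc in current.items()
            if g in baseline and baseline[g] - acc > tolerance]

# Illustrative numbers only: compare current performance against a stored baseline.
baseline = {"pediatrics": 0.94, "geriatrics": 0.92}
current = subgroup_accuracy([
    {"group": "pediatrics", "label": 1, "prediction": 1},
    {"group": "pediatrics", "label": 0, "prediction": 0},
    {"group": "geriatrics", "label": 1, "prediction": 0},
    {"group": "geriatrics", "label": 0, "prediction": 0},
])
print(flag_drift(baseline, current))  # ['geriatrics']
```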

By following these tips, developers and users can help to ensure that AI systems are used in a responsible and ethical way.

While there are many advantages to integrating AI into healthcare, there are also areas of concern; let’s call them warning flags rather than red flags. These are areas in which we must be astute and think beyond the immediate issue to the future ramifications of the design being implemented. We’ve explored a couple of them here and will explore the three (3) additional concerns mentioned above in the next blog.

– Carl L. Larsen, President & Chief Operating Officer of OXIO Health, Inc.