In the face of the Covid-19 pandemic we are witnessing an unprecedented use of digital technologies: video consultations with doctors are booming across Europe to avoid unnecessary physical meetings, artificial intelligence (AI) is used to screen the population and assess infection risks, and apps are used to track infected citizens' movements. With the virus speeding up healthcare digitalisation, we might soon be living in a world where you get a diagnosis without stepping out of your home. Your toothbrush will be smart enough to analyse your saliva, while your contact lenses will measure your blood pressure and glucose levels. You'll simply need to contact a virtual doctor who will listen to your health concerns and advise you based on the data collected by your wearables.

Sets of coded instructions (algorithms) already exist that can transform the way we take care of our health and treat diseases. AI has become a top political priority and enjoys ever-growing public and private investment. The Covid-19 crisis only increases the need for digital remedies. Consumers may benefit, but even more than before it is important to be aware of the risks AI might entail for us.

1. Goodbye privacy?

From the time of Hippocrates, doctors and other medical staff have been bound by confidentiality and ethics. But what about algorithms? Admittedly, the EU's General Data Protection Regulation sets the bar high when it comes to the protection of our personal data: its principles of purpose limitation and data minimisation apply to many situations. But AI in health can go very far: wearables enable 24/7 monitoring and can predict a patient's condition.

For example, AI might pick up hand trembling and detect Parkinson's disease. Catching a disease early on is very beneficial, but at the same time we need to find ways to guarantee that in practice such information is not misused – e.g. sold to third parties, such as insurance companies, before the patient is even warned that they should seek a detailed diagnosis. It is also important to ensure that diagnosis is overseen by doctors.

2. Biased data = biased treatment

AI learns from data sets. But there is a very real risk that algorithms "learn", and may even accentuate, the underlying bias present in these data sets. IBM Watson for Oncology is one of the best-known examples of why data quality matters in AI. IBM began selling Watson to recommend the best cancer treatments to doctors around the world. Practical experience, however, showed that Watson for Oncology often produced unsafe and incorrect treatment recommendations. For example, Watson's algorithm was largely based on the data of American patients and care methods, and did not perform well in hospitals abroad, whose methods had not been taken into account when the system was developed.
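To make this mechanism concrete, here is a minimal Python sketch (entirely hypothetical, with invented synthetic data, and not related to IBM's actual system) of how a model trained mostly on one patient population can systematically fail on an under-represented one:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_patients(n, shift):
    """Synthetic biomarker; the disease threshold differs by population."""
    x = rng.normal(loc=0.0, scale=1.0, size=(n, 1))
    y = (x[:, 0] > shift).astype(int)  # 1 = has the disease
    return x, y

# Population A dominates the training data; population B is under-represented
# and has a different (shifted) disease threshold.
xa, ya = make_patients(950, shift=0.0)
xb, yb = make_patients(50, shift=1.0)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.hstack([ya, yb]))

# Evaluate on fresh patients from each population.
for name, shift in [("A", 0.0), ("B", 1.0)]:
    x, y = make_patients(2000, shift)
    print(f"Population {name} accuracy: {model.score(x, y):.2f}")
# The model inherits population A's decision boundary, so it misclassifies
# many population B patients whose true threshold is different.
```

In this toy setting the model scores almost perfectly on the well-represented population and markedly worse on the other, even though nothing about the code is "broken" – the skew comes entirely from the training data.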

3. Who holds AI to account when things go wrong?

Some claim that artificial intelligence will always be better than natural stupidity, especially when it comes to decisions about health. But if a doctor makes a mistake when diagnosing and treating your illness, there are rules establishing their responsibility and legal liability. If, for instance, a robot performing surgery makes a fatal mistake, our legal systems fail to clearly define who is responsible and accountable.

4. Overdiagnosis

Early diagnosis of serious conditions such as cancer is often key to recovery. There is growing evidence that AI can spot cancers better than doctors – but that may not always be a good thing. In the particular case of cancer, there is no gold standard as to what constitutes cancer, and not all suspected cancers present a threat to your life. Doctors can take these complexities into account, but algorithms which give binary "yes" or "no" answers cannot necessarily do so. This can lead to overdiagnosis and unnecessary treatments that harm both your health and your budget.
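As an illustration, here is a small hypothetical Python sketch (the score distributions and the cutoff are invented for demonstration, not taken from any real screening system) of how forcing a continuous risk score through a hard "yes/no" cutoff can flag many harmless findings for treatment:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated model risk scores: most screened patients have harmless findings,
# yet many of those still receive moderately elevated scores.
harmless = rng.beta(2, 5, size=9_000)    # indolent or benign findings
dangerous = rng.beta(5, 2, size=1_000)   # genuinely life-threatening cases

CUTOFF = 0.5  # the hard binary decision threshold
flagged_harmless = int((harmless > CUTOFF).sum())
flagged_dangerous = int((dangerous > CUTOFF).sum())

print(f"Dangerous cases caught:              {flagged_dangerous}/1000")
print(f"Harmless findings flagged as cancer: {flagged_harmless}/9000")
# Because harmless findings vastly outnumber dangerous ones in screening,
# even a well-ranking model flags more harmless cases than real ones at a
# hard cutoff – the statistical face of overdiagnosis.
```

In this toy run the algorithm catches most of the truly dangerous cases, but the harmless findings it also flags outnumber them, and each of those flags can mean biopsies, anxiety and costs for a patient who was never in danger.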

5. Emotional intelligence

Being a good doctor is not simply a matter of having a medical degree and sufficient experience – it also requires the ability to understand a patient's emotions and to empathise with them. Current AI systems may be highly intelligent, but they cannot provide emotional support. Technically it might be possible to mimic human emotions, but how ethical is it to encapsulate what makes us human in a set of algorithms?

All in all, AI can be a valuable supportive tool in healthcare. However, before we allow the widespread use of algorithms in medical practice, we must resolve quite a few legal and ethical challenges to minimise the risks. Remember Hippocrates' fundamental principle: do no harm. If algorithms are to be our doctors, why should they be allowed to deviate from this crucial rule?

Want to learn more? Check out our position paper: AI must be smart about our health.

Posted by Jelena Malinina