When treating patients, whether curatively or palliatively, we need to know something about what the patients want, what the doctors want and, not least, what we should do. What is the right treatment for 90-year-old Olga with lung cancer? The answer depends on what we want to achieve. Do we want Olga to survive as long as possible? In that case, we have set ourselves a goal that can be worked out using an algorithm. Or do we want to limit treatment and give Olga a dignified end? Artificial intelligence can be more difficult to apply in the latter case. As Mette Brekke and Ingvild Vatten Alsnes have pointed out, it is possible that artificial intelligence (at least for the time being) will fall short when it comes to treating people with health anxiety, multimorbidity and problems in the workplace or close relationships (7).
Artificial intelligence will undoubtedly change the health service
On the one hand, medical practice today is empirical, evidence-based and descriptive. Here, artificial intelligence can obviously strengthen the medical profession. On the other hand, our profession is normative, filled with values, intentions, discretion and visions. This is where we should be particularly careful. What should be the overall goal of our health service? There is no precise algorithm for that. But what we do know is that the health service is ultimately about people. Caring for the health of such autonomous beings is, and will remain, normative. The fundamental motivation for our health service is thus normative.
Can the use of artificial intelligence be useful for this normative part of the medical profession? That is possible. For example, we can hope for greater transparency. When creating algorithms, we are also forced to reflect on what we want to achieve. By doing so, we may be able to make more conscious and wise choices about the value assumptions that already lie hidden in our practice. We doctors have clinical judgment, gut feelings, tendencies towards bias, prejudices and varying professional skills. This means that we do not always treat 'equal cases equally' (8). One hope is that artificial intelligence can keep this in check.
Unfortunately, a US study found the opposite. An algorithm used to identify patients in need of additional medical treatment underestimated the sickest African American patients, and in doing so perpetuated racial inequalities in health care (9). In other words, there is a danger that algorithms adopt our old mistakes. This can have serious consequences. The question is whether we can identify such errors, and who is responsible for them.
Artificial intelligence will undoubtedly change the health service. And used properly, it can be a very useful tool for doctors, nurses and other healthcare personnel. But as Ingrid Hokstad has pointed out, we doctors should take the reins (10). The future of the art of medicine lies in knowing when to use artificial intelligence and when we should let natural intelligence work undisturbed (11).