
Doctor in conversation with patient

NWO grants for research into ethical AI in healthcare

Two projects investigating how AI is used in healthcare and other sectors have received a grant from the Dutch Research Council (NWO).

Both research projects are part of NWO's NGF Call AiNed ELSA Labs. A total of four projects have been funded. In one project, a researcher from uu77 is the main applicant; in the other, led from the University of Amsterdam, researchers from uu77 are involved as co-applicants.

Legal, Regulatory, and Policy Aspects of Clinical Decision Support Systems

Main applicant: Johan Kwisthout, Faculty of Social Sciences.

The researchers will receive a grant of 1.7 million euros for the ELSA Lab Legal, Regulatory, and Policy Aspects of Clinical Decision Support Systems. They will investigate the conditions that decision support systems must meet in order to be applied in healthcare.

The project will start on 1 September 2025. Johan Kwisthout, professor of AI and project leader, is eager to get started: ‘AI is very important to enable individual care with an eye for quality of life. A decision support system can calculate scenarios so that doctor and patient can make choices together when it comes to radical medical procedures. Risks, such as the risk of a tumour spreading, can be weighed against the impact of a treatment, such as the preventive removal of lymph nodes.

‘However, when you use technology in healthcare, you also have to take into account that you are taking over part of the doctor's responsibility. The doctor does not know all the ins and outs of the technology, but must be able to trust the quality of the advice,' warns Kwisthout. ‘And because decision support systems are based on AI technologies, trained on biased data, by unknown algorithms, with unspecified parameters, this creates medical-ethical and legal problems.'

‘Within the project, we are investigating the conditions that the development and use of AI must fulfil in order for doctors to be able to use it. We describe what is needed, for example in terms of quality control, the training and experience of the doctor, traceability of the technology, and the patient's understanding, to make this joint decision on care with the help of AI not only technically possible, but also legally legitimate, medically and ethically sound, and socially accepted.'

Cooperation

The project is a public-private partnership between lawyers, ethicists, AI specialists, patients and doctors. The lab also has room for input from knowledge institutes, companies, the public sector, doctors and patient associations. According to Kwisthout, this is essential: ‘There is no point in drawing up rules that are technically infeasible or unenforceable. And before you draw up regulations about the responsible use of AI, you first need to define what exactly you mean by responsible use.'

‘An oncologist, for example, also uses an MRI scanner without knowing exactly how it works. That oncologist can assume that the development of the scanner is subject to quality control, and has probably followed training to use the scanner properly. That is the direction we need to take with AI, and that is why it is so important that we have all these disciplines on board.'

ELSA Lab AI for Health Equity: Towards Fairness & Justice in Medical AI

Main applicant: Anniek de Ruijter, University of Amsterdam. Co-applicants from uu77: Pim Haselager and Anco Peeters, Faculty of Social Sciences.

In addition, NWO has awarded 2.3 million euros for the ELSA Lab AI for Health Equity: Towards Fairness & Justice in Medical AI. AI can improve healthcare, but there is also a risk that it will widen existing inequalities. AI systems trained on limited data sets may perform less well for certain population groups. This ELSA Lab is identifying these problems and developing solutions for an inclusive digital health infrastructure. Existing regulations, such as the GDPR, are taken into account, and guidelines are drawn up to make AI fairer.

The lab is led by Anniek de Ruijter (health law) and Julia van Weert (health communication) from the University of Amsterdam. ‘We are proud to lead this collaboration between law, communication science, AI, healthcare and social engagement,' says De Ruijter. ‘With this lab, we want to ensure that fairness in the application of AI in healthcare is a core value.'

Fair, accessible healthcare

Pim Haselager and Anco Peeters from uu77 are also involved in the lab. They are looking at the role artificial intelligence can play in making healthcare fairer and more accessible. Anco Peeters, researcher in the research group Societal Implications of AI: ‘The social and ethical implications of AI are central to our research. Based on this expertise, we contribute to this ELSA Lab by investigating the moral concerns that arise when, for example, people with low literacy levels face more barriers than others in dealing with healthcare questions.'

About AiNed ELSA Labs

AI is developing at lightning speed and bringing about far-reaching changes worldwide. This raises diverse ethical (E), legal (L) and societal (S) aspects (A), which is the idea behind the ELSA Labs. Responsible AI development requires not only technological progress, but also societal embedding and the utilisation of economic opportunities. The funded projects focus on tackling these challenges.

The project proposal for an ELSA Lab states that ELSA aspects and technology will be studied in conjunction, that AI will be based on public values and human rights, and that frameworks and guidelines for the development of human-centred AI will be tested and developed.