Solidarity & justice
Digitalization and datafication, for example through the deployment of AI and predictive analytics, are increasingly used to predict who will benefit from a medical treatment, who will default on a loan, who the most appropriate hire is, or which educational program best fits a pupil. While this can yield many benefits, by generating categories for differentiating between people these technologies also risk reinforcing existing inequalities along gender, ethnic and socio-economic lines, or creating new types of inequalities based on new data profiles. This can weaken solidarity – the sense of reciprocity and the obligations of mutual aid we have as citizens – and it can undermine the inclusivity and equal access that underpin our health, education, welfare and legal systems. At iHub we research how AI and other technologies can be developed and implemented in ways that safeguard solidarity and (sector-specific) ideals of justice and fairness: Where do non-discrimination law and ethical guidelines need updating? Can fairness be designed into AI, and if so, how? How do we build technologies that are optimized for increasing access to public services?
Privacy & security
Privacy is a core value and fundamental principle of democratic societies. Yet privacy, and its correlate security, are constantly under strain in data-rich environments such as online platforms or medical research using large datasets. This is because digital data can flow between different contexts much more easily than data did in the paper age. For example, data generated by a mobile app that monitors physical activity, such as step counts, can be sold to third parties such as advertisers or health insurers. When data travel between contexts in this way, privacy is easily violated. This is a legal issue but also a moral one: our expectations of privacy can be violated even if the terms and conditions we consent to stipulate that this sharing of data will take place. At iHub we explore the subjective, social, legal, ethical and technical dimensions of privacy: What does privacy mean for people in different contexts? How can privacy-by-design technologies, such as the IRMA app developed by iHub researchers, be applied in new contexts, including public health or electronic voting?
Freedom & democracy
Smart technologies increasingly influence and “nudge” human behavior in ways that users are unaware of. In this value line we research how our freedom is affected as we go about our lives interacting with digital technologies both on- and offline, when this is acceptable, and how it impacts the democratic functioning of our societies. Is the notion of informed consent sufficient to ensure a meaningful sense of freedom and autonomy? If not, how can it be updated legally, ethically and technically? Can positive conceptions of freedom be translated into a world where digital surveillance is increasingly normalized? Do filter bubbles, micro-targeting and fake news really reshape politics? Alongside questions of freedom, this value line examines the power that big tech and platforms exert over our democratic institutions and the public domain: How does the power to develop and design essential infrastructures, such as platforms for news provision, primary education and public health, undermine democratic control and accountability?
Expertise & meaningful work
There are currently many fears about automation and AI replacing humans at work. More realistically, automation affects certain tasks rather than entire jobs. At iHub we research how work will change, qualitatively and in practice, when some tasks are automated; how to foster good human-machine complementarity at work; and how technology can help us rethink the division of labor so that all people benefit from automation. For example, when Zoom is used for medical consultations and teaching, what new conversational skills do doctors and patients, or teachers and students, need in order to achieve what was previously done in person? Can these skills be taught? Do professionals experience a loss of epistemic authority when computers “know better”, or do they welcome the additional input? And what clashes between different types of expertise will we see in sectors undergoing digitalization: between technical expertise that foregrounds norms such as efficiency and standardization, and professional expertise that foregrounds norms such as duty of care?