Ethical and Trustworthy AI Lab

Newly launched, the Ethical and Trustworthy AI Lab is an interdisciplinary group of researchers interested in the social and ethical implications of Artificial Intelligence (AI). The group investigates philosophical, ethical, and social aspects of AI including trustworthiness and the question of what it is that makes AI uses ethical, just, and trustworthy; the roles of ethics codes, ethical guidelines, and policy-making in the regulation of AI technology; as well as AI applications in agriculture and medical contexts.

Learn More

Upcoming Events:

Join us on April 11th for the second installment of our Climate Change and Ethics webinar series, which will explore the roles and responsibilities of governments and societal actors in the context of climate change. It will analyze what hinders stakeholders from more effectively promoting climate change mitigation and adaptation and reflect on what can be done to improve the situation. Topics include the role of scientists and amateurs in climate change conversations, the influence of businesses on policymaking around climate change, and the importance of concern for the environment, solidarity, ecological reforms, and public education for climate justice.

Register via Eventbrite

Join us on April 25th for our webinar investigating the roles of climate change-related risks in the context of decision-making, policy, and law. Presenters will reflect on the role of the prudence principle in spending on climate security measures and discuss the life-threatening implications of pollution and climate change that result from human activity.

Register via Eventbrite

CSEP Scholars in the news

Ethics Center Predoctoral Research Fellow Monika Sziron was interviewed on the Partners podcast in an episode titled “Pass the Algorithms with Asparagus,” in which she discussed her research on the use of artificial intelligence in agriculture and food production.

Monika was also interviewed about her research in AI and agriculture in a February 8, 2022, Farmweek article titled “Artificial Excitement?”

Current Projects and Presentations

Leilasadat (Leila) Mirghaderi, a Ph.D. candidate and research assistant in the Humanities Department, is assisting with a National Science Foundation (NSF)-funded project led by Professor Carly Kocurek. The project, titled “Games for Girls: Informing the Future,” aims to document the successes and failures of the Games for Girls movement, a 1990s effort in the United States to make games more appealing to young girls and thereby increase their interaction with computer technology. The project abstract can be found here:

Furthermore, Leila’s article titled “Social Media Users Free Labor in Iran: Influencers, Ethical Conduct and Labor Exploitation” has been accepted for presentation at the 72nd Annual International Communication Association (ICA) conference. In this paper, Leila explored the current creator and influencer culture on Instagram, a prominent social media platform, and used the lens of the audience labor concept to identify the strategies that influencers use to exploit their followers.

Recent Publications:

Hildt, Elisabeth. 2022. “A Conceptual Approach to the Right of Mental Integrity.” In: López-Silva, P., Valera, L. (eds) Protecting the Mind. Ethics of Science and Technology Assessment, vol 49. Springer, Cham.

In this chapter, I reflect on the right to mental integrity from an ethics perspective. Against the background of some conceptual considerations, I discuss the opportunities and limitations of a right to mental integrity. The right to mental integrity stresses a person’s right to control their brain states. It is often conceived primarily as a negative right that protects against unauthorized brain interventions. While this certainly emphasizes a very important aspect, I argue that the right to mental integrity would benefit considerably from reflections on what is specific about it, compared to, for example, the right to bodily integrity or the notion of informed consent. Therefore, after introducing and discussing the right to mental integrity, the notion of “mental integrity,” and the concept of informed consent, I sketch the implications of neurotechnologies for privacy, agency, individual characteristics, identity, authenticity, and autonomy. Then, I highlight some implications of the right to mental integrity in the context of neurotechnologies.

Sziron, Monika. 2022. “A Common Ground for Human Rights, AI and Brain and Mental Health.” In: Jotterand, F., Ienca, M. (eds) Artificial Intelligence in Brain and Mental Health: Philosophical, Ethical & Policy Issues. Advances in Neuroethics. Springer, Cham.

This chapter addresses the current and future challenges of implementing artificial intelligence (AI) in brain and mental health by exploring international regulations of healthcare and AI, and how human rights play a role in these regulations. First, a broad perspective on human rights in AI and human rights in healthcare is reviewed; then regulations of AI in healthcare are discussed; and finally, applications of human rights in AI and brain and mental health regulations are considered. The foremost challenge in the blending and development of regulations of AI in healthcare is that both AI and healthcare currently lack accepted international-level regulation. It can be argued that human rights and human rights law are for the most part internationally accepted, and that we can use these rights as guidelines for global regulations. However, as philosophical and ethical environments vary across nations, subsequent policies reflect varying conceptions and fulfillments of human rights. Like human rights, the recognized definitions of “AI” and “health” can vary across international borders and even within the professions themselves. One of the biggest challenges in the future of AI in brain and mental health will be applying human rights in a practical manner. Initially, the thought of applying human rights in the development of AI in healthcare seems straightforward: in order to develop better AI, better healthcare, and thus better AI in healthcare, one must simply respect the human rights granted by various declarations, covenants, and constitutions. This is so seemingly straightforward that one would think it has already been the case in these developing fields. However, as we explore this notion of applying human rights, we find agreement, disagreement, and variability on a global scale. It is these variabilities that may well hamper the ethical development of AI in brain and mental health internationally.