Hi & Welcome
I’m a philosopher specializing in bioethics and normative ethics, with a focus on rights-based approaches to emerging technologies. My research explores how artificial intelligence—especially in clinical and diagnostic settings—raises new ethical challenges and obligations, and how we can respond to them using robust philosophical frameworks.
I recently completed my PhD in philosophy at Ruhr University Bochum, where I developed a normative framework for evaluating the risks and benefits of machine learning in medical diagnosis. My dissertation draws on a rights-based theory of risk ethics and integrates insights from relational ethics to account for the complexity of healthcare delivery. I propose what I call an ecosystem of moral constellations: a dynamic structure for evaluating how rights and responsibilities are distributed among patients, clinicians, developers, and institutions.
Beyond healthcare, I’ve worked on the ethical implications of AI-induced deskilling, value conflicts in large language models, and friction-in-design approaches in Human-Computer Interaction studies.
Currently, I’m working on a broader understanding of the so-called relational turn in AI ethics and the implications for ethical AI development and deployment. My work aims to bridge rigorous moral philosophy with interdisciplinary perspectives on real-world implementation.
Publications
Machine Learning in Medical Diagnosis: A Framework for a Normative Evaluation of Chances and Risks.
Leslye Denisse Dias Duran
J.B. Metzler (Springer)
This book seeks to navigate between the optimism arising from the promise of machine learning (ML) in healthcare and the lack of clarity about which risks and benefits we can realistically foresee. Its main aim is to develop a relational, rights-based normative approach to evaluating the distribution of burdens and benefits of implementing ML in medical diagnosis. This framework, called the "Ecosystem of Moral Constellations", assumes that every person has an equal claim to the fundamental rights necessary to lead one’s life. It recognizes, however, that conflicting interests may risk violating or infringing the rights of one or more individuals, and that assessing these tensions therefore requires a situational prioritization of certain rights over others. The framework proposes to consider the normative relevance of relationships at different points of moral engagement in order to assess the potential tensions between the burdens and benefits of these technologies. The author argues that decisions about the implementation of AI systems require more than an assessment of technical feasibility. Instead, it is imperative to consider the different normative goals and interests of the actors involved, the material capabilities of the tools, and the role they should play in the clinical workflow.