Important Dates
| Event | Date |
|---|---|
| Submission deadline | |
| Acceptance notification | |
| Camera-ready due | |
| Workshop | October 25, 2025 |
Welcome to Identity-Aware AI 2025
The Identity-Aware AI 2025 workshop will be held in conjunction with ECAI 2025. It is a forum to bring together researchers from diverse fields to understand how the identities of all stakeholders should be considered in future research in AI.
Workshop Theme
What makes each of us unique, and which ethical and technical challenges does this imply?
Language (and thus its automatic processing) is about people and what they mean. However, current practice relies on the assumptions that all the humans involved are the same, and that with enough data (and compute power) the resulting generalizations will be robust enough and will represent the majority.
This approach often harms marginalized communities and ignores the notion of identity in models and systems. Our interdisciplinary workshop aims to raise the question of “what makes each of us unique?” to the AI community.
Contact us via email at identity-aware-ai@googlegroups.com for any questions.
Workshop Goals
- The development of a shared and interdisciplinary understanding of identities and how identity is treated in AI.
- The development of new methods that push the effective, fair, and inclusive treatment of individuals in AI to the next level.
Program Details
Schedule
| Start | End | Activity | Duration |
|---|---|---|---|
| 14:00 | 14:10 | Introduction | 00:10 |
| 14:10 | 14:50 | Keynote by Roberta Calegari | 00:40 |
| 14:50 | 15:30 | Poster session | 00:40 |
| 15:30 | 16:00 | Coffee break | 00:30 |
| 16:00 | 16:30 | Poster session continued | 00:30 |
| 16:30 | 17:10 | Keynote by Andrea Campagner | 00:40 |
| 17:10 | 17:30 | Virtual presentations + Conclusions | 00:20 |
Venue
Room 0.4, Engineering School, University of Bologna
Getting to the Engineering School
By Bus from Bologna City Center:
- Bus numbers: 20, 33, and D
- Get off at “Porta Saragozza – Risorgimento” (buses no. 20 and D) or at “Porta Saragozza – Villa Cassarini” (bus no. 33)
By Taxi:
- Use the TaxiClick App
- Check CoTaBo and RadioTaxiCAT (both also offer transport for non-folding wheelchairs for disabled people)
Keynote 1: From Opportunity to Compliance: the AEQUITAS framework for Fair AI by Roberta Calegari
Roberta Calegari (She/Her)
Roberta Calegari is a researcher and professor at the Department of Computer Science and at the Alma Mater Research Institute for Human-Centered Artificial Intelligence at the University of Bologna. Her research fields include trustworthy and explainable systems, distributed intelligent systems, software engineering, multi-paradigm languages, and AI & law. She is the coordinator of the Horizon Europe project AEQUITAS (G.A. 101070363) on the assessment and engineering of equitable, unbiased, impartial, and trustworthy AI systems; the project aims to provide an experimentation playground to assess and repair bias in AI. She has also been part of the EU Horizon 2020 project “PrePAI” (G.A. 101083674), working on the definition of requirements and mechanisms that ensure all resources published on the future AIonDemand platform can be labelled as trustworthy and compliant with the future AI regulatory framework.

Her research interests lie within the broad area of knowledge representation and reasoning for trustworthy and explainable AI, with a particular focus on symbolic AI, including computational logic, logic programming, argumentation, logic-based multi-agent systems, and non-monotonic/defeasible reasoning. She is a member of the Editorial Board of ACM Computing Surveys for the area of Artificial Intelligence. She has authored more than 90 papers in peer-reviewed international conferences and journals. She leads many European, Italian, and regional projects and is responsible for collaborations with industry.
Title: From Opportunity to Compliance: the AEQUITAS framework for Fair AI
Abstract: AI offers unprecedented opportunities—but also carries the risk of discrimination and unfair treatment embedded in data, design, or decision-making processes. The AEQUITAS framework addresses this challenge through a holistic approach that embeds a controlled experimentation environment to identify, measure, and mitigate bias before deployment. By integrating legal, ethical, and technical perspectives and translating them into concrete, actionable methods, AEQUITAS enables organisations to detect discriminatory patterns, apply fairness mitigation strategies, and validate compliance with the EU AI Act, ensuring that AI systems are developed and deployed in a fair, transparent, and accountable manner.
Website: https://apice.unibo.it/xwiki/bin/view/RobertaCalegari/
Keynote 2: Robust Learning Methods for Uncertain Data: from Imprecision to Perspectivism by Andrea Campagner
Andrea Campagner (he/him)
Andrea Campagner is a Tenure Track Assistant Professor at the University of Milano-Bicocca. Previously, he was a Researcher at IRCCS Ospedale Galeazzi Sant’Ambrogio. His research focuses on uncertainty management, machine learning, human-AI interaction, and medical informatics, and has received international recognition, including the EurAI Best Dissertation Award, the ACM SIGCHI Gary Marsden Award, and the IJAR Early Career Researcher Award. He is an Associate Editor of the International Journal of Approximate Reasoning, the International Journal of Medical Informatics, and the Soft Computing journal.
Title: Robust Learning Methods for Uncertain Data: from Imprecision to Perspectivism
Abstract: The representation, quantification, and management of uncertainty is a central problem in Artificial Intelligence, and particularly so in Machine Learning (ML). Among the different forms of uncertainty, imprecision, that is, the problem of dealing with imperfect and incomplete data, has recently attracted interest in the research community for its implications for ML practice. The talk will focus on the problem of dealing with imprecision in ML and on how to formally represent and study learning from imprecise data. It will then describe the connections between imprecision and preference modeling, discussing the relationship between uncertainty modeling and perspectivism, a recently proposed framework for managing data annotations in crowdsourcing-based ML.
Website: https://andreacampagner.github.io/
Posters
- A Fair and Personalized Dementia Prediction Framework Using Longitudinal and Demographic Data from South Korea
- Hong-Woo Chun, Lee-Nam Kwon, Hyeonho Shin, Sungwha Hong and Jae-Min Lee
- MetaRAG: Metamorphic Testing for Hallucination Detection in RAG Systems
- Channdeth Sok, David Luz and Yacine Haddam
- Political Bias in Large Language Models: A Case Study on the 2025 German Federal Election
- Buket Kurtulus and Anna Kruspe
- Identity by Design? Evaluating Gender Conditioning in LLM-Generated Agent Identity Profiles
- Mattia Rampazzo, Saba Ghanbari Haez, Patrizio Bellan, Simone Magnolini, Leonardo Sanna and Mauro Dragoni
- Testing LLMs’ Sensitivity to Sociodemographics in Offensive Speech Detection
- Lia Draetta, Soda Marem Lo, Samuele D’Avenia, Valerio Basile and Rossana Damiano
- IntersectionRE: Mitigating Intersectional Bias in Relation Extraction Through Coverage-Driven Augmentation
- Amirhossein Layegh, Amir H. Payberah and Mihhail Matskin
- Identity-Aware Large Language Models require Cultural Reasoning
- Alistair Plum, Anne-Marie Lutgen, Christoph Purschke and Achim Rettinger
- Neurodiversity Aware or Hyperaware AI? Visual Stereotypes of Autism Spectrum in Janus-Pro-7B, DALL-E, Stable Diffusion, SDXL, FLUX, and Midjourney
- Maciej Wodziński, Marcin Rządeczka, Anastazja Szuła, Kacper Dudzic and Marcin Moskalewicz
- Trustworthy AI Through Dual-Role Reasoning: Ethical, Legal, and Psychological Internal Critique
- Chengheng Li Chen, Antonio Lobo Santos, Marc Serramià Amorós and Maite López Sánchez
Virtual Presentations
- From Perceived Effectiveness to Measured Impact: Identity-Aware Evaluation of Automated Counter-Stereotypes
- Svetlana Kiritchenko, Anna Kerkhof, Isar Nejadgholi and Kathleen Fraser
- Who are you, ChatGPT? Personality and Demographic Style in LLM-Generated Content
- Dana Sotto and Ella Rabinovich
- On the Interplay between Musical Preferences and Personality through the Lens of Language
- Eliran Shem Tov and Ella Rabinovich
Anti-Harassment Policy
The Identity-Aware AI 2025 workshop adheres to the ECAI anti-harassment policy. If you experience any harassment or hostile behavior, please contact any current member of the organizing committee or write to the workshop email address.