Towards a Safer Web for Women: First International Workshop on Protecting Women Online at the Web Conference 2025
The workshop is led by a team of researchers with expertise in Responsible AI, Natural Language Processing, and the study of online harms, who bring diverse perspectives and extensive experience to the critical issue of online violence against women and girls.
Tracie Farrell is a Senior Research Fellow at the Knowledge Media Institute (KMi) at The Open University. Her research program, "Shifting Power," investigates the intersections of AI and Justice through a queer, intersectional lens. She has conducted extensive research on online hate and misinformation, with a focus on its impact on women and minoritised groups.
Miriam Fernandez is a Professor of Responsible Artificial Intelligence at The Open University, specialising in aligning AI technology with ethical and societal values, particularly in algorithmic transparency and fairness. She has authored over 100 scientific articles and actively contributes to the advancement of Responsible AI through interdisciplinary research and collaboration.
Christine de Kock is an Assistant Professor in Computer Science at the University of Melbourne. Her research focuses on natural language processing and its application to online communities, with a current focus on anti-women groups and the alt-right. She is a convenor of the Hallmark Research Initiative for Fighting Harmful Online Communication.
Debora Nozza is an Assistant Professor in Computing Sciences at Bocconi University. Her research focuses on Natural Language Processing, particularly in detecting and mitigating hate speech and algorithmic bias in multilingual social media data. She has organised several prominent workshops and coordinated international shared tasks in these areas.
Ángel Pavón Pérez is a Research Associate in Responsible AI at the Centre for Protecting Women Online and a PhD candidate at The Open University in collaboration with VISA Europe. His research examines radicalised online communities and explores strategies to address bias in AI systems, with a focus on their unintended impacts on minoritised groups.