Is artificial intelligence male or female?
DOI: https://doi.org/10.82015/32px3453
Keywords: Artificial Intelligence; Bias; Gender Equity; Neutrality; Virtual Assistants
Abstract
This paper explores the deliberately provocative question of the gender of artificial intelligence as a starting point for a critical reflection on gender dynamics in the design, language, and uses of intelligent technologies. Along two lines of inquiry (the stereotypes embodied by virtual assistants, and the analysis of the datasets on which artificial intelligence is trained), it examines how artificial intelligence reproduces existing cultural hierarchies, contributing to their diffusion, crystallization, and amplification. The paper highlights the need for a radical rethinking of technical and cognitive paradigms, stressing the urgency of a multidisciplinary and multigender artificial intelligence.
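To make the second line of inquiry concrete, the sketch below illustrates one common way training data can encode gender hierarchies: projecting profession words onto a "gender direction" in an embedding space, in the spirit of the well-known "man is to computer programmer as woman is to homemaker" analogy. This is a minimal illustration only; the vectors are toy, hand-picked values (a real analysis would load pretrained embeddings such as word2vec or GloVe), and the function name `gender_bias` is ours, not from the paper.

```python
# Toy illustration of an embedding-space gender probe: professions are
# projected onto the direction defined by the pair (he, she).
# All vectors are illustrative stand-ins, not real embeddings.
import numpy as np

vectors = {
    "he":        np.array([ 0.9, 0.1, 0.3, 0.0]),
    "she":       np.array([-0.9, 0.1, 0.3, 0.0]),
    "engineer":  np.array([ 0.5, 0.7, 0.1, 0.2]),
    "nurse":     np.array([-0.6, 0.6, 0.2, 0.1]),
    "scientist": np.array([ 0.4, 0.8, 0.0, 0.1]),
    "homemaker": np.array([-0.7, 0.4, 0.3, 0.0]),
}

def gender_bias(word: str) -> float:
    """Projection of a word onto the he-she axis: positive values lean
    'male', negative values lean 'female' in this toy space."""
    direction = vectors["he"] - vectors["she"]
    direction /= np.linalg.norm(direction)
    v = vectors[word] / np.linalg.norm(vectors[word])
    return float(v @ direction)

for word in ("engineer", "nurse", "scientist", "homemaker"):
    print(f"{word:>10}: {gender_bias(word):+.2f}")
```

In this toy space, "engineer" and "scientist" project toward "he" while "nurse" and "homemaker" project toward "she", which is exactly the pattern that measurements on embeddings trained from real corpora have repeatedly found: the model does not invent the hierarchy, it absorbs and amplifies one already present in the data.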