
Public seminar by Piercosma Bisconti Lucidi

Speaker: 
Piercosma Bisconti Lucidi
Event date: 
Tuesday, 4 March 2025 - 16:00
Venue: 
Aula Magna, DIAG
Contact: 
Daniele Nardi (nardi@diag.uniroma1.it)

As part of the selection procedure for the recruitment of one type-A fixed-term researcher under PNRR PE1 FAIR SPOKE 5, GSD 11/PHIL-03 - SSD PHIL-03/A, at the Department of Computer, Control and Management Engineering Antonio Ruberti (DIAG), announced by D.D. no. 12/2025, Prot. no. 162 of 10/01/2025, Piercosma Bisconti Lucidi will give a public seminar on 4 March 2025 at 16:00 in the Aula Magna of DIAG, with a Zoom connection available at:
https://uniroma1.zoom.us/j/88293503365?pwd=S241dDBhUThMN01icEt1UGJidjhXUT09

Title: A Unified Framework for Measuring AI Trustworthiness: Integrating Intrinsic and Perceived Dimensions

Abstract: The concept of AI trustworthiness is central to the development of reliable and ethically sound AI systems, yet its definition remains fragmented between intrinsic system properties and human perceptions. While intrinsic trustworthiness—encompassing robustness, accuracy, and transparency—is often discussed in technical and regulatory frameworks, its interaction with perceived trustworthiness remains insufficiently formalized. This research proposes a comprehensive framework that quantifies AI trustworthiness by integrating these two dimensions, linking intrinsic properties with observer perceptions mediated by transparency, agency locus, and human oversight.
To establish a robust definition of intrinsic trustworthiness, we will analyze AI-related standards from the European Union (e.g., CEN-CENELEC) and international bodies such as ISO/IEC. These standards will provide a structured taxonomy of trustworthiness characteristics, ensuring alignment with regulatory and technical benchmarks. Empirical studies will complement this by examining how discrepancies between expected and observed AI behaviors influence perceived trust. The study will further explore how regulatory compliance with AI standards affects public trust and the adoption of AI technologies.
Expected outcomes include a validated metric for AI trustworthiness, insights into the interplay between intrinsic and perceived trust, and actionable recommendations for developers and policymakers. By grounding our model in established standards, this research aims to provide a structured, measurable, and regulatory-aligned approach to AI trustworthiness, contributing to both theoretical understanding and practical implementation in real-world AI systems.
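
As a purely hypothetical sketch of how such an integrated metric might look (the characteristic names, weights, and linear combination rule below are illustrative assumptions by the editor, not the framework to be presented in the seminar):

    from dataclasses import dataclass

    @dataclass
    class IntrinsicProfile:
        # Hypothetical intrinsic characteristics, each scored in [0, 1]
        # (e.g., against a taxonomy drawn from ISO/IEC or CEN-CENELEC standards).
        robustness: float
        accuracy: float
        transparency: float

    def intrinsic_score(p: IntrinsicProfile, weights=(1/3, 1/3, 1/3)) -> float:
        # Weighted aggregate of intrinsic characteristics (equal weights assumed).
        w_r, w_a, w_t = weights
        return w_r * p.robustness + w_a * p.accuracy + w_t * p.transparency

    def trustworthiness(p: IntrinsicProfile, perceived: float, alpha: float = 0.5) -> float:
        # Illustrative integration of the two dimensions: `perceived` is an
        # observer-reported trust score in [0, 1], and `alpha` balances the
        # intrinsic and perceived components. Both are assumptions.
        return alpha * intrinsic_score(p) + (1 - alpha) * perceived

    # Example: a system that is technically strong but only moderately trusted.
    profile = IntrinsicProfile(robustness=0.9, accuracy=0.85, transparency=0.7)
    print(f"T = {trustworthiness(profile, perceived=0.6):.2f}")

Any such formula would, of course, need the empirical validation and standards-based grounding that the abstract describes; the sketch only makes concrete the idea of combining standards-derived intrinsic scores with observer perceptions into a single measure.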

