Don’t Trust Your AI System: Model Its Validity Instead | Trustworthy AI | AI FOR GOOD
Machine behaviour is becoming cognitively very complex but rarely human-like. Efforts to fully understand machine learning and other AI systems are falling short, despite the progress of explainable AI (XAI) techniques. In many cases, especially when post-hoc explanations are used, we get a false sense of understanding, and the initial sceptical (and prudent) stance towards an AI system turns into overconfidence, a dangerous delusion of trust.
In this AI for Good Discovery, Prof. Hernández-Orallo argues that, for many AI systems of today and tomorrow, we should not vainly try to understand what they do, but rather explain and predict when and why they fail. We should model their user-aligned validity rather than their full behaviour. This is precisely what a robust, cognitively inspired AI evaluation can do. Instead of maximising contingent dataset performance and extrapolating that volatile aggregate metric equally to every instance, we can anticipate the validity of the AI system, specifically for each instance and user.
Prof. Hernández-Orallo illustrates how this can be done in practice: identifying relevant dimensions of the task at hand, deriving capabilities from the system's characteristic grid, and building well-calibrated assessor models at the instance level. His normative vision is that, in the future, every deployed AI system should only be allowed to operate if it comes with a capability profile or an assessor model, anticipating the user-aligned system validity before each instance is run. Only by fine-tuning trust to each operating condition will we truly calibrate our expectations of AI.
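To make the idea concrete, below is a minimal, hypothetical sketch of an instance-level assessor model in Python with scikit-learn. It is not the speaker's implementation: the task dimensions, the synthetic logged data, and the trust threshold are illustrative assumptions. The sketch trains a calibrated classifier on past (instance features, was-the-output-valid) records and uses the predicted validity to decide, per instance, whether to trust the base AI system.

```python
# Illustrative assessor-model sketch (not the speaker's method).
# Assumption: for a deployed AI system we have logged, per instance, a small
# feature vector along relevant task dimensions and a binary label saying
# whether the system's output was valid for the user.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(0)

# Hypothetical logged data: two task dimensions per instance; y = 1 means valid output.
X = rng.normal(size=(2000, 2))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit the assessor and calibrate its probabilities (isotonic regression),
# so the anticipated validity can be meaningfully compared to a threshold.
assessor = CalibratedClassifierCV(GradientBoostingClassifier(), method="isotonic", cv=3)
assessor.fit(X_train, y_train)

p_valid = assessor.predict_proba(X_test)[:, 1]
print("Brier score (lower = better calibrated):", round(brier_score_loss(y_test, p_valid), 3))

# At deployment: only let the base system handle an instance when the
# anticipated validity clears a user-chosen trust threshold; otherwise defer.
threshold = 0.8
new_instance = np.array([[1.2, -0.3]])
if assessor.predict_proba(new_instance)[0, 1] >= threshold:
    print("Run the AI system on this instance.")
else:
    print("Defer: anticipated validity is below the trust threshold.")
```

In this reading, the assessor is a separate, lightweight model that accompanies the deployed system and turns an aggregate benchmark score into a per-instance, user-aligned prediction of validity.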
Speaker:
José Hernández-Orallo
Professor
Universitat Politècnica de València
Moderator:
Wojciech Samek
Head of AI Department
Fraunhofer Heinrich Hertz Institute
Join the Neural Network!
https://aiforgood.itu.int/neural-network/
The AI for Good networking community platform powered by AI.
Designed to help users build connections with innovators and experts, link innovative ideas with social impact opportunities, and bring the community together to advance the SDGs using AI.
Watch the latest #AIforGood videos!
https://www.youtube.com/c/AIforGood/videos
Explore more #AIforGood content:
AI for Good Top Hits
https://www.youtube.com/playlist?list=PLQqkkIwS_4k
AI for Good Webinars
https://www.youtube.com/playlist?list=PLQqkkIwS_4k
AI for Good Keynotes
https://www.youtube.com/playlist?list=PLQqkkIwS_4k
Stay updated and join our weekly AI for Good newsletter:
http://eepurl.com/gI2kJ5
Discover what's next on our programme!
https://aiforgood.itu.int/programme/
Check out the latest AI for Good news:
https://aiforgood.itu.int/newsroom/
Explore the AI for Good blog:
https://aiforgood.itu.int/ai-for-good-blog/
Connect on our social media:
Website: https://aiforgood.itu.int/
Twitter: https://twitter.com/ITU_AIForGood
LinkedIn Page: https://www.linkedin.com/company/26511907
LinkedIn Group: https://www.linkedin.com/groups/8567748
Instagram: https://www.instagram.com/aiforgood
Facebook: https://www.facebook.com/AIforGood
What is AI for Good?
We have less than 10 years to achieve the UN SDGs, and AI holds great promise for advancing many of the sustainable development goals and targets.
More than a Summit, more than a movement, AI for Good is presented as a year-round digital platform where AI innovators and problem owners learn, build and connect to help identify practical AI solutions to advance the United Nations Sustainable Development Goals.
AI for Good is organized by ITU in partnership with 40 UN Sister Agencies and co-convened with Switzerland.
Disclaimer:
The views and opinions expressed are those of the panelists and do not reflect the official policy of the ITU.