Comparing AI Reasoning to Human Thinking

Wednesday, June 29, 2022 at 11:00am to 12:00pm

Virtual Event

While it's often easy to see what decision an artificial intelligence makes, it's much harder to understand why.

Register for this webinar to find out.

Without that understanding, it can be hard to know whether an AI is trustworthy or safe to use in a real-world situation. One way to explain an AI's decision is through saliency methods: algorithms that uncover the features most important to the decision, such as which pixels convinced the AI that it is looking at an image of a dog. However, analyzing saliency takes time and effort, and people often worry they have missed critical insights. MIT researcher Angie Boggust will join MIT Horizon to discuss a new approach that uses saliency methods to compare AI reasoning to human reasoning, examining where the two align and where they differ in identifying important information. The approach reduces the effort required to analyze saliency and surfaces global patterns in AI behavior, giving us insight into what an AI is "thinking." After the talk, Boggust will take questions from the audience.
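To make the idea of comparing AI and human attention concrete, here is a minimal sketch of one plausible alignment measure: intersection-over-union between an AI's salient pixels and a human-annotated region. This is an illustrative toy, not the specific metric from the talk; the function name, threshold, and arrays are all assumptions.

```python
import numpy as np

def alignment_score(saliency, human_mask, threshold=0.5):
    """Illustrative alignment measure (not the talk's exact method):
    intersection-over-union between the AI's most salient pixels
    and a human-annotated important region."""
    ai_mask = saliency >= threshold  # binarize the saliency map
    intersection = np.logical_and(ai_mask, human_mask).sum()
    union = np.logical_or(ai_mask, human_mask).sum()
    return intersection / union if union else 0.0

# Toy 3x3 image: the AI attends to the top-left 2x2 block,
# while the human marked the top row as important.
saliency = np.array([[0.9, 0.8, 0.1],
                     [0.7, 0.6, 0.2],
                     [0.1, 0.0, 0.0]])
human = np.array([[1, 1, 1],
                  [0, 0, 0],
                  [0, 0, 0]], dtype=bool)
print(alignment_score(saliency, human))  # 2 shared pixels / 5 total = 0.4
```

A score near 1.0 means the AI and the human agree on what matters; a score near 0.0 flags cases worth inspecting, which is the kind of global pattern the talk's approach is designed to surface.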


Event Type

Conferences/Seminars/Lectures, Meetings/Gatherings, Community Event, Career Development

Events By Interest

Academic, Career Development, General

Events By Audience

Public, MIT Community, Students, Alumni, Faculty, Staff


ai, artificial intelligence, machine learning


MIT Open Learning

