
While it's easy to see what decision an artificial intelligence has made, it's much harder to understand why it made it.

Register for this webinar to find out.

Without that understanding, it can be hard to know whether an AI is trustworthy or safe to use in a real-world situation. One way to explain an AI's decisions is through saliency methods—algorithms that uncover the features most important to a decision, such as which pixels convinced the AI that it is looking at an image of a dog. However, analyzing saliency requires time and effort, and people often fear they have missed critical insights. MIT researcher Angie Boggust will join MIT Horizon to discuss a new technique for using saliency methods that compares AI thinking to human thinking by examining where the two align and where they differ in identifying important information. This technique reduces the effort required to analyze saliency, reveals global patterns in AI behavior, and gives us insight into what an AI is "thinking." After the talk, Boggust will take questions from the audience.
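For readers curious what a saliency method looks like in practice, the sketch below shows one common variant: a simple gradient-based saliency map for an image classifier. This is only an illustrative example of the general idea, not the technique Boggust will present; it assumes a PyTorch environment with torchvision, and all names (such as saliency_map) are hypothetical.

```python
# Illustrative sketch: gradient-based saliency for an image classifier.
# Assumes PyTorch and torchvision are installed; not the webinar's method.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def saliency_map(image: torch.Tensor) -> torch.Tensor:
    """Return per-pixel importance scores for the model's top prediction.

    `image` is a (3, H, W) tensor already normalized for the model.
    """
    x = image.unsqueeze(0).requires_grad_(True)   # add batch dim, track gradients
    scores = model(x)                             # forward pass
    top_class = scores.argmax(dim=1).item()       # predicted class index
    scores[0, top_class].backward()               # gradient of that score w.r.t. pixels
    # A pixel's saliency is its largest absolute gradient across color channels.
    return x.grad.abs().max(dim=1).values.squeeze(0)

# Usage (hypothetical): highlight the pixels that most influenced the prediction.
# saliency = saliency_map(preprocessed_dog_image)
```

Brighter regions of the resulting map mark the pixels that most influenced the model's prediction, which is the kind of evidence analysts then compare against where humans say they look.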

Register for this MIT Horizon webinar: https://mit.zoom.us/webinar/register/WN_Yt70mL8ST1a4_aTdxcFX8w
