Quest Symposium on Robust, Interpretable AI

To advance further, deep learning systems need to become more transparent. They will have to prove they are reliable, can withstand malicious attacks, and can explain the reasoning behind their decisions, especially in safety-critical applications like self-driving cars.

The Quest Symposium on Robust, Interpretable AI will explore the latest techniques for making AI more trustworthy. Join us for posters by MIT students and postdocs, and talks by MIT faculty. Research topics will include attack and defense methods for deep neural networks, visualization, interpretable modeling, and other techniques for revealing deep network behavior, structure, sensitivities, and biases.

SCHEDULE
2:30 pm. Aleksander Madry, "Robustness and Interpretability."
2:55 pm. Tommi Jaakkola, "Cooperative Games of Interpretability."
3:25 pm. Stefanie Jegelka, "Robustness in GANs."
3:50 pm. Poster Session A
4:30 pm. David Sontag, "Challenges and Dangers of Machine Learning in Health Care."
4:55 pm. Luca Daniel, "Evaluating the Robustness of Neural Networks."
5:15 pm. Antonio Torralba, "Dissecting Neural Networks."
5:40 pm. Poster Session B 

All are encouraged to attend. Light refreshments will be served. This symposium is part of the Robust Intelligence Initiative at the Computer Science and Artificial Intelligence Laboratory (CSAIL), funded by Microsoft and the MIT-IBM Watson AI Lab.
