Quest Symposium on Robust, Interpretable AI
Tuesday, November 20, 2018, 2:30pm to 6:00pm
Building 46, Atrium and Auditorium
43 Vassar St, Cambridge, MA 02139
To advance further, deep learning systems need to become more transparent. They will have to prove that they are reliable, that they can withstand malicious attacks, and that they can explain the reasoning behind their decisions, especially in safety-critical applications such as self-driving cars.
The Quest Symposium on Robust, Interpretable AI will explore the latest techniques for making AI more trustworthy. Join us for posters by MIT students and postdocs and talks by MIT faculty. Research topics will include attacks and defenses for deep neural networks, visualization techniques, interpretable modeling, and other approaches to revealing deep network behavior, structure, sensitivities, and biases.
SCHEDULE
2:30 pm. Aleksander Madry, "Robustness and Interpretability."
2:55 pm. Tommi Jaakkola, "Co-operative Games of Interpretability."
3:25 pm. Stefanie Jegelka, "Robustness in GANs."
3:50 pm. Poster Session A
4:30 pm. David Sontag, "Challenges and Dangers of Machine Learning in Health Care."
4:55 pm. Luca Daniel, "Evaluating the Robustness of Neural Networks."
5:15 pm. Antonio Torralba, "Dissecting Neural Networks."
5:40 pm. Poster Session B
All are encouraged to attend. Light refreshments will be served. This symposium is part of the Robust Intelligence Initiative at the Computer Science and Artificial Intelligence Lab (CSAIL), funded by Microsoft and the MIT-IBM Watson AI Lab.