Where and When?
The Alan Turing Institute, 15 November 2017
Overview
As algorithmic systems become more complex, in many domains it is essential that we ensure interpretability, the ability of humans to understand how a system reaches its outputs. This half-day workshop at the Alan Turing Institute on 15 November will survey recent technical work in the field of interpretability. We will also hear perspectives from industry about what is needed in the real world, and will discuss priorities for future research.
Organizers
- Adrian Weller, University of Cambridge, Alan Turing Institute and Leverhulme Centre for the Future of Intelligence (CFI)
- Tameem Adel, University of Cambridge and Leverhulme CFI
Speakers/panelists from academia and industry include
- Tameem Adel, University of Cambridge and Leverhulme CFI
- Vaishak Belle, University of Edinburgh and Alan Turing Institute
- Ray Eitel-Porter, Accenture
- Brent Mittelstadt, University of Oxford and Alan Turing Institute
- John Murray, Alan Turing Institute
- Brooks Paige, Alan Turing Institute
- Adrian Weller, University of Cambridge, Alan Turing Institute and Leverhulme CFI
Tentative schedule
Start | End | Session
9:00 | 10:30 | Session 1: Technical aspects of interpretability
10:30 | 11:00 | Coffee break
11:00 | 12:30 | Session 2: Interpretability in the real world
Details
9:00-10:30 Session 1: Technical aspects of interpretability
- Opening remarks
- Tameem Adel, “Visualizing deep neural network decisions”
- Brooks Paige, “Semi-interpretable probabilistic models”
- Vaishak Belle
11:00-12:30 Session 2: Interpretability in the real world
- John Murray
- Ray Eitel-Porter
- Brent Mittelstadt, on the GDPR
- Discussion