ETeRNAL Invited Talk
2:00 PM: live stream of the presentation
3:00 PM: Q&A session with the speaker
Title: Layers, Biases, and Responsibility
Neural networks have revolutionized NLP: they have increased performance across the board and enabled a number of applications that were not possible before. All this is a great opportunity, but also a new responsibility for NLP: never before was it so easy to build a powerful NLP system, and never before did such systems have so much potential impact. They are increasingly used in applications they were not intended for, by people who treat them as interchangeable black boxes. The results range from simple performance drops to systematic biases against various user groups.
As a consequence, we as NLP practitioners suddenly have a new role, in addition to researcher and developer: considering the ethical implications of our systems, and educating the public about the possibilities and limitations of our work. The time of academic innocence is over, and we need to address this newfound responsibility as a community.
I will talk about the possibilities deep learning has opened up, the caveats that come with them, and where this might lead NLP as a field. This includes both case studies and some provocations for future directions.
Dirk Hovy is an associate professor of computer science at Bocconi University in Milan, Italy. He is interested in the interaction between language, society, and machine learning: what language can tell us about society, and what computers can tell us about language. He has authored over 50 articles on these topics, three of which received best paper awards. He has organized one conference and several workshops on ethics in NLP, abusive language, and computational social science.
Outside of work, Dirk enjoys cooking, running, and leather-crafting. For updated information, see http://www.dirkhovy.com
The links to join the discussion sessions are shared in the practical-information posts on the participants' blog.