PhD position on algorithmic transparency and the right to explanation in a joint research initiat...
Theme: transparency of complex machine learning models
The goal of the project is to study how the principle of transparency of automated decision-making and the right to an explanation should be implemented to safeguard the right to data protection, while taking into account competing legally protected rights and interests.
Transparency of Big Data algorithms is at the top of the European policy agenda. It is part of the 2017 Council of Europe Big Data Guidelines. The ‘right to an explanation' of the logic of automated decision-making is one of the key regulatory innovations of the General Data Protection Regulation and aims to strengthen the transparency of data processing in general and of automated decision-making in particular. While the reform process is over and the Regulation will apply from May 2018, a considerable lack of clarity exists both as to the actual scope of this right under the Regulation and as to how data controllers should implement it in practice.
The uncertainty concerns both the balance with other protected interests and feasibility. Specifically, the key to understanding the logic of an automated decision often lies in an algorithm protected by intellectual property rights, while the Regulation states that the right to an explanation is without prejudice to intellectual property rights. Furthermore, in addition to the technical feasibility of making autonomous, self-learning algorithms transparent, there are concerns about the natural limits of human cognitive ability to grasp the logic behind automated decision-making, all the more so when it is powered by modern artificial intelligence. Finding technical solutions to the algorithmic transparency problem is therefore secondary to, and should be guided by, a thorough legal understanding of the transparency principle and the right to an explanation.
The project is a collaboration between the Jheronimus Academy of Data Science (JADS), 's-Hertogenbosch (campus Mariënburg), and KPN. In a companion project, a PhD researcher will be appointed to study the legal aspects of the transparency of models induced by machine learning algorithms.
This job comes from a partnership with Science Magazine.