Kyle Dent, a researcher specializing in artificial intelligence at Xerox's PARC, writing in TechCrunch:
A significant risk is that we advance the use of AI technology at the cost of reducing individual human rights. We’re already seeing that happen. One important example is that the right to appeal judicial decisions is weakened when AI tools are involved. In many other cases, individuals don’t even know that a choice not to hire, promote, or extend a loan to them was informed by a statistical algorithm.
And that's assuming it really is an algorithm! Quite often, the "artificial intelligence" is just a Mechanical Turk: oppressed workers laboring to oppress other workers. Neocapitalism has finally come full circle.
Accountability and transparency are paramount to safely using AI in real-world applications. Regulations could require access to basic information about the technology. Since no solution is completely accurate, the regulation should allow adopters to understand the effects of errors. Are errors relatively minor or major? Uber’s use of AI killed a pedestrian. How bad is the worst-case scenario in other applications? How are algorithms trained? What data was used for training and how was it assessed to determine its fitness for the intended purpose? Does it truly represent the people under consideration? Does it contain biases? Only by having access to this kind of information can stakeholders make informed decisions about appropriate risks and tradeoffs.
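One concrete way to probe the "Does it contain biases?" question is to compare favorable-outcome rates across demographic groups in the decision data. Here is a minimal sketch; the group labels, the sample data, and the 0.8 "four-fifths rule" threshold are illustrative assumptions, not details from the article:

```python
from collections import defaultdict

def selection_rates(records):
    """Rate of favorable decisions (hire, promote, approve) per group.

    `records` is a list of (group, decision) pairs, with decision
    True for a favorable outcome.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        if decision:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest selection rate.

    A ratio below 0.8 is commonly flagged as potential adverse
    impact (the "four-fifths rule" used in US employment contexts).
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical loan decisions labelled by applicant group.
data = [("A", True), ("A", True), ("A", False), ("A", True),
        ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(data)
print(rates)                    # {'A': 0.75, 'B': 0.25}
print(disparate_impact(rates))  # 0.333... -> well below 0.8
```

A check like this is only a first-pass screen, but it shows the kind of basic information regulators could require adopters to produce about a model's behavior.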