On French Artificial Intelligence Policy

Emmanuel Macron, in an interview with Wired's editor-in-chief, Nicholas Thompson (original version, French translation):

And for me, if a majority of people in France understand and endorse this change it will be a success. It will be a failure if we are stuck with fears and blocked by big scares. My concern is that there is a disconnect between the speediness of innovation and some practices, and the time for digestion for a lot of people in our democracies. I have to build a sort of reciprocal or mutual trust coming from researchers, private players, startups, and my citizens. If the first category of people trust a country as being a relevant ecosystem for them, and at the same time, if I manage to build trust with my citizens for AI, I’m done. If I fail building trust with one of them, that’s a failure.

Failure would be the continued regulation of technologies, which are certain to change; success would be a return to the codification of principles, which are supposed to endure. Reading this interview, I dare to be optimistic:

I think that’s why the overall philosophy I have stuck to from the very beginning of my mandate is to say: blocking changes and being focused on protecting jobs is not the right answer. It’s the people you need to protect. You do so by giving them opportunities and by training and retraining them again to get new jobs. Don’t block the change because it’s coming and people will accept it. But try to be at the forefront of change to better understand it and deal with it. Change can destroy jobs in the very short run, but create new ones in other sectors at the same time.

The resurgence, once the appeal of novelty has faded, of the same economic dogmas and the same retrograde reflexes, the same sophisms and the same threats, the same crass stupidity and the same shameless lies, nevertheless counsels me toward caution. A caution the President of the Republic seems to maintain when it comes to artificial intelligence:

Because I think you always need responsibility and assertion of responsibility. And technically speaking, you can have in some situations, some automation which will be possible. But automation or machines put in a situation precisely to do that would create an absence of responsibility. Which, for me, is a critical issue. So that’s absolutely impossible. That’s why you always need a human check. And in certain ways, a human gateway. At a point of time, the machine can prepare everything, can reduce uncertainties, can reduce until nil the uncertainties and that’s an improvement which is impossible without it, but at a point of time, the go or no-go decision should be a human decision because you need somebody to be responsible for it.