July 29, 2021

“Faced with advances in artificial intelligence, the opinion of philosophers now counts”

Martin Gibert is a philosopher and researcher at the University of Montreal, affiliated with the Ethics Research Center and the Institute for Data Valorization. In 2015, he applied classic moral theories to veganism in Voir son steak comme un animal mort (“Seeing Your Steak as a Dead Animal”, Lux Éditeur). He has just done the same, in the more artificial field of robots, machines and other algorithms, in Faire la morale aux robots (“Lecturing Robots”, Flammarion, 168 p., €17).

Why does a philosopher deal with artificial intelligence (AI) devices such as self-driving cars, chatbots, or online recommendation systems?

The moment is really exciting for a philosopher, because these devices raise questions that are both new and very fundamental. And, in addition, it is very concrete, with immediate and urgent applications. Take the famous trolley problem, posed by Philippa Foot in 1967: should you pull a switch to divert the vehicle away from five workers if doing so kills the one person on the other track?

Until the recent advances in AI, philosophers thought hard about the answers, but those answers were of no consequence. Now their opinion counts. We have to say something to the programmers whose algorithms will make the fatal choice! AI forces us to make decisions about what is right and wrong; it is no longer a mere thought experiment.


Is that what you call “lecturing robots”?

To see clearly what to do, we need to sort out the different areas of applied ethics. In order of generality, we have first of all the ethics of technology, which covers everything from screwdrivers to nuclear power plants. At a second level, the ethics of AI raises questions such as the impact of these systems on society or the environment, or even the possible rights of robots.

The ethics of algorithms, which interests me in the book, sits at a third, more specific level. How do you program a machine, an algorithm, an artificial moral agent so that it behaves “well”? This forces us to go into the details and, of course, does not prevent us from questioning the higher levels: collectively, do we really need this or that robot?


It is all the more interesting and necessary because algorithms affect people’s lives. In the case of an autonomous car facing the trolley problem, this is obvious, but such decisions will be very rare. On the other hand, a recommendation algorithm on YouTube or Facebook can have massive consequences for the flow of information. The more new powers we develop, the more moral responsibility we bear. Even behind a chatbot, there are serious moral issues.
