
The debates on AI

Translated from French to English


The debates on AI started in the 20th century (for example, with Isaac Asimov's laws of robotics) but have intensified today due to the recent advances in the field described previously. According to the theory of technological singularity, an era in which machines dominate mankind will arise once artificial intelligence systems become super-intelligent:


“Technological singularity is a hypothetical event linked to the emergence of a true artificial intelligence. A computer, a computer network or a robot would then theoretically be capable of recursive self-improvement (a self-generated process of perfecting itself), or of designing and building computers or robots smarter than itself. Repetition of this cycle could lead to an acceleration effect, an explosion of intelligence: intelligent machines able to design successive generations of ever more powerful machines, creating an intelligence vastly superior to human intellectual capacities, and hence a risk of losing control. Since mankind would be unable to comprehend the capacities of such a super-intelligence, the technological singularity is the point beyond which human intelligence could no longer predict or even imagine the course of events”.

Proponents of the technological singularity are close to the transhumanist movement, which aims to enhance human physical and intellectual capacities through new technologies. The singularity would correspond to the moment when the nature of human beings undergoes a fundamental change, an event perceived either as desirable (by transhumanists) or as a danger to mankind (by their opponents).


The debate on the dangers of AI crossed a new threshold with the recent controversy over autonomous weapons and killer robots, sparked by an open letter published at the opening of the IJCAI conference in 2015. The letter, which calls for a ban on weapons able to operate without human intervention, was signed by thousands of people, including Stephen Hawking, Elon Musk, Steve Wozniak, and many leading AI researchers; among the signatories are some of the INRIA researchers who contributed to the writing of the present document.


Among the other dangers and threats debated within the community are: the financial consequences of high-frequency trading, which now accounts for the vast majority of orders placed on the markets (in high-frequency trading, so-called smart software executes financial transactions at high speed, which can lead to stock-market crashes such as the Flash Crash of 2010); the consequences of massive data mining for privacy, with mining systems able to disclose the personal characteristics of individuals by establishing connections between their online activities or their entries in databases; and, of course, the potential unemployment generated by the progressive replacement of the workforce by machines.


The more we develop artificial intelligence, the greater the risk of extending only certain intelligent capacities (for example, optimization and exploration through learning) to the detriment of others that are unlikely to generate an immediate return on investment or to be of any interest to the agent's creator (for example, morality, respect, ethics, etc.). Used on a large scale, artificial intelligence carries numerous risks and poses many challenges for human beings, particularly if artificial intelligences are not designed and scoped in a manner that respects and protects humans. If, for example, optimization and performance are the sole goals of their intelligence, this can lead to large-scale catastrophes in which users are exploited, abused or manipulated by tireless and reckless artificial agents.


Research in AI must be comprehensive and include everything that makes behaviour intelligent, not only its “most reasonable aspects”. Dietterich and Horvitz recently published an interesting answer to some of these questions. In their brief article, the authors argue that the AI research community should not focus on the risk of humans losing control, because it is not a critical concern in the foreseeable future, and advise instead giving more attention to five short-term risks raised by AI-based systems:


- Software bugs

- Cyberattacks

- The “sorcerer's apprentice” risk, meaning the need to give AI systems the ability to understand what users actually want rather than interpreting their orders literally

- “Shared autonomy”, namely fluid cooperation between AI systems and their users, designed so that users can always take back control if needed

- The socio-economic impacts of AI: in other words, AI must benefit society as a whole and not only a privileged few



INRIA is aware of these debates and, as a research institute dedicated to digital sciences and technology transfer, works for the well-being of all, fully conscious of its responsibilities toward society. Informing society and decision-making bodies about the potential and the risks of digital sciences and technologies is part of INRIA's missions.

In this context, for the past few years, INRIA has:


- Started a reflection on ethics long before the threats of AI sparked debate within the scientific community.

- Contributed to the creation of Allistene's CERNA, a commission that reflects on the ethical issues raised by research in digital sciences and technologies; its first recommendation report is dedicated to research in robotics.

- Set up a new committee in charge of evaluating the legal and ethical issues of research projects on a case-by-case basis: the Legal and Ethical Risks Evaluation Executive Committee (CORELE), composed of INRIA scientists and external contributors. CORELE's mission is to help identify risks and determine whether a research project needs to be taken on board for review.

- In addition, INRIA encourages its researchers to take part in societal debates when they are solicited by the media to speak on ethical questions such as those raised by robotics, deep learning, data mining and autonomous systems. The scientific and technological stakes brought to light by research in AI are leading INRIA to develop strategies to meet the many challenges raised.


Source: Bertrand Braunschweig. Artificial Intelligence: Current Challenges and Inria's Engagement. Inria white paper (Livre blanc Inria), 2016, 82 pp. <hal-01564589>
