Machines need to have emotions

The first discussion in the "The new technologies and digital humanism" series of talks, organised by Casa Amèrica Catalunya with the collaboration of DIPLOCAT, debated the ethical problems related to Artificial Intelligence

The first discussion in the "The new technologies and digital humanism" series of talks was held on Monday the 24th of April, organised by Casa Amèrica Catalunya in collaboration with DIPLOCAT. It forms part of a series of four talks held to discuss subjects such as the symbiosis between humans and Artificial Intelligence, controlling the impact of technology on our lives and finding a balance between technological innovation and human ethics.

The series of talks, which was inaugurated by the Director of Casa Amèrica Catalunya, Ms Marta Nin, and the Secretary-General of DIPLOCAT, Ms Laura Foraster i Lloret, started with a discussion of the subject "Ethics and Artificial Intelligence". In her welcoming address, Ms Marta Nin declared that "technology is a human invention and we want it to meet the requirements of society. It should be designed for people, rather than people living their lives around technology." For her part, Ms Laura Foraster agreed, commenting that "one of the challenges facing us today, in relation to the development of Artificial Intelligence, is that of finding an appropriate balance between technology and ethics. We need to use the new tools proposed by technological innovation in a manner that is not only intelligent but also moral, sensitive and plural, so that it can develop in step with the social changes that are also taking place in our societies."

Ms Marta Nin then presented the day's two speakers: Ms Sofía Trejo, a Mexican researcher at the Barcelona Supercomputing Centre (BSC), a specialist in the ethical, legal, socio-economic, cultural and environmental impact of Artificial Intelligence, and Mr Jordi Vallverdú, a lecturer sponsored by the ICREA Catalan research system who is a specialist in the Philosophy of Science and Information Technology in the Department of Philosophy at the Autonomous University of Barcelona (UAB).

Ms Trejo began her talk with the observation that there is no universal definition of human intelligence and, consequently, no corresponding definition of Artificial Intelligence (AI). "I will talk about machine learning," she declared. Machines learn through examples and patterns that are entered by humans. So, what about the gender-based, cultural, social and political biases that are inherent in human beings? Who generates the data? And from what viewpoint? That of the white male? She also stressed the urgent need to introduce an ethical element into machine learning, given that the people responsible for gathering and entering this data, "smart" data extracted from human patterns, are now starting to suffer negative effects on their health. The companies that hired them refuse to bear any responsibility for these adverse effects, having obliged their employees to sign prior confidentiality agreements acknowledging their own personal liability.

In the same vein, the speaker referred to the "material" nature of AI and its impact on our environment. Some of the materials used in the production of the computers that store the data come from conflict zones, the mining of lithium is leading to the forced displacement of indigenous communities, and the water used to cool data centres has a significant environmental impact. In Ms Trejo's opinion, we need to develop "violence-free" working environments and rectify both bias based on race, social class and gender and the environmental and social impact of technology.

Mr Vallverdú then took his turn to speak. From his perspective as a philosopher, he declared quite bluntly that at the present time "we have new technologies but we don't know what to do with them." All cognitive biological systems have emotions and, therefore, "machines need to have emotions if we want them to be creative, complex, intelligent and useful." In his opinion, human beings have many biases in terms of gender, social class and race, and consequently the analyses they conduct are produced from a specific perspective "that would have us believe that the world is as they say, whereas in fact this is only how they see it." "It is not the robots that are found wanting, it is the ethical systems applied," he affirmed. The data entered into the machines already carry a bias, which is why the replies they produce are also biased. Mr Jordi Vallverdú pointed to the contradiction with which we are confronted: "We humans are animals who function according to neuro-chemical programs that are unsuitable for programming machines, while on the other hand we also constitute the necessary element required to provide machines with emotions."

In conclusion, this academic specialist affirmed that "we are currently passing through a period of conflict, but it is also a period that is very exciting," adding that "we have no idea what the future will hold in reality because we have no idea of how we can apply an ethical system to machines." "We possess technology fit for the gods but we have the brains of apes," he judged.

Our talks on the theme "The new technologies and digital humanism" will continue with three further talks dealing with digital humanism, environmental technology and virtual communication.