The Term "Roboethics" and the Ethics of Robotics
Two decades have passed since the researcher Gianmarco Veruggio coined the term roboethics, referring to the ethics of robotics: a field of applied ethics that studies the positive and negative implications of robotics.
The goal is to prevent the misuse of robots and artificial intelligence (AI)-based products and services against humanity. “As AI is found in more and more sectors of our lives (work, education, leisure, social relations, etc.), the effects that the algorithms behind those applications of AI can have on people, society, and the whole world are becoming potentially larger,” says Joan Casas-Roma, a researcher at the UOC’s Computer Science, Multimedia and Communication Department.
An example is what happened in the British educational system during the COVID-19 pandemic, when an automated system was used to predict, from existing student data, the grades students were estimated to have obtained in exams that could not be held because of lockdown. The result? Widespread dissatisfaction.
As Joan Casas-Roma explains, in this case the decision to delegate grade prediction entirely to an AI system, one that considered only the data held about each student instead of drawing on the experience and knowledge of their teachers, came close to putting the education and academic future of many of those students at risk. After mass protests, the British government decided not to use the automated predictions. However, it is not always possible to turn back.
“Unfortunately, there are many examples of how AI can make serious mistakes if ethics are not taken into account. Some of the best-known cases are related to the existence of biases and injustices in machine learning techniques,” says the UOC researcher, citing the case of an automated system used in the selection processes of a large multinational company, which turned out to make decisions unfavorable to female candidates because the data used to train the system already reflected significant gender inequality in positions similar to those on offer.
Another case in which the data chosen to automate decisions proved problematic is that of the US judicial recommendation system which, because of the racist bias in police data on crimes and offenders, produced recommendations unfavorable to African Americans.
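The mechanism behind both cases can be illustrated with a minimal sketch. The data below is entirely hypothetical, and the "model" is deliberately naive (it just predicts the majority outcome for each group), but it shows how a system trained on historically skewed decisions reproduces that skew as if it were a rule:

```python
# Hypothetical, simplified history: 100 past hiring decisions per gender,
# skewed in favour of one group. Nothing here reflects real company data.
history = ([("m", True)] * 80 + [("m", False)] * 20 +
           [("f", True)] * 20 + [("f", False)] * 80)

def hire_rate(records, gender):
    # Fraction of past candidates of this gender who were hired.
    outcomes = [hired for g, hired in records if g == gender]
    return sum(outcomes) / len(outcomes)

def predict(gender):
    # A naive "model": predict whatever happened to the group's majority.
    return hire_rate(history, gender) >= 0.5

print(predict("m"))  # True
print(predict("f"))  # False: the historical skew has become the rule
```

Gender is causally irrelevant to a candidate's suitability, yet because it correlates with past outcomes, the model uses it anyway; real machine learning systems can absorb the same pattern in subtler, harder-to-spot ways.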
“Those of us who have to incorporate ethical codes are the people who program the machines and who make decisions with the data they provide us,” says Anna Clua, professor of Information and Communication Sciences Studies at the UOC, who adds that “machines don’t think. They execute.”
According to Clua, the use of AI must be ethical by definition wherever it is applied, “whether in the applications of our mobile phones or in the algorithms with which hospitals screen emergencies. Recognition of people’s rights, as well as compliance with laws such as data protection, is a sine qua non condition for the use of AI in all fields,” she says.
Explainable AI and Artificial Moral Agents
The list of examples of ethically adverse effects of AI systems is much longer. It was one of the reasons that led the European Union to publish ethical guidelines for trustworthy AI, based on a set of principles that the ethical design of AI systems must respect, with recommendations on how to integrate those principles into the design of methods and algorithms. More recently, the Information Council of Catalonia published a study from which a series of recommendations have also been drawn for the proper use of algorithms in media newsrooms, recommendations entirely in tune with the profession’s code of journalistic ethics.
These are not the only steps taken to try to make roboethics a reality. The field of ethical AI has become a significant research area. As Casas-Roma, who holds a doctorate in Knowledge Representation and Reasoning from the UOC, explains, one of the lines of research attracting the most effort is the treatment and processing of data to prevent an AI system based on machine learning from extracting biased and unfair correlations, for example through demographic data unrelated to the decision the AI must make.
“In this area, efforts are directed at understanding how data should be collected, processed, and used to identify, prevent, and mitigate the emergence of patterns based on characteristics that, in addition to being irrelevant from a causal point of view for the decision to be made, reproduce biased and unfair decisions towards certain social groups,” indicates the UOC researcher.
Another line being explored in ethical AI research is based on integrating ways to follow, understand and assess the decisions made by an AI system. This is the field known as explainable AI (XAI), which seeks to avoid the ‘black box’ effect in which, given specific input data, an AI system makes a particular decision without an external human being able to understand what reasoning process led the system to make that decision rather than a different one.
In this sense, the field of XAI seeks to bring the reasoning followed by an AI system closer to a form of reasoning understandable to a human user.
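One simple form this can take, sketched below with invented feature names and weights, is decomposing a model's score into per-feature contributions, so a person can see not just the decision but what pushed it one way or the other. Real XAI techniques are far more elaborate, but the idea is the same:

```python
# Hypothetical linear scoring model for, say, a loan application.
# Weights and feature names are illustrative, not from any real system.
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def explain(applicant):
    # Contribution of each feature = weight * value; the score is their sum,
    # so the explanation accounts exactly for the decision.
    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = sum(contributions.values())
    decision = "approve" if score >= 0 else "reject"
    return decision, contributions

decision, why = explain({"income": 3.0, "debt": 4.0, "years_employed": 2.0})
print(decision)  # "reject": score = 1.5 - 3.2 + 0.6 = -1.1
print(why)       # debt contributed -3.2, the decisive factor
```

Instead of an opaque "rejected", the applicant (or an auditor) can see that the debt term outweighed income and employment, which is precisely the kind of traceability the black-box effect denies.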
Furthermore, another field under investigation is so-called artificial morality: the possibility of creating artificial moral agents (AMAs).
As part of their code and the reasoning processes they follow, these agents would incorporate “a way of identifying, interpreting and evaluating the moral dimension of the decisions that are made, to ensure that those decisions are ethically acceptable.” The challenge lies in how to represent, computationally, something as complex and contextual as ethics.
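A crude sketch of that idea, with entirely hypothetical action names, is an agent that screens each candidate action against explicit ethical constraints before acting, abstaining if nothing acceptable remains. This is one of the simplest conceivable schemes (a fixed blocklist), and the article's point is exactly that real ethics resists being reduced to such a list:

```python
# Hypothetical ethical constraints: actions the agent must never take.
FORBIDDEN = {"share_private_data", "deny_service_by_ethnicity"}

def ethically_acceptable(action):
    # Evaluate the moral dimension of a candidate action (here, trivially).
    return action not in FORBIDDEN

def decide(candidates):
    # Prefer the highest-utility action that passes the ethical screen.
    for action, utility in sorted(candidates, key=lambda c: -c[1]):
        if ethically_acceptable(action):
            return action
    return None  # no acceptable action: abstain

best = decide([("share_private_data", 9.0), ("ask_consent", 5.0)])
print(best)  # "ask_consent": the higher-utility action is vetoed
```

Even this toy shows the gap the researchers describe: the hard part is not the veto mechanism but deciding, in a way that respects context, culture and individual circumstances, what belongs on the forbidden side of the line.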
Ethics, as it is lived in human society, has to do with general principles, but also with the particularities of each case, with human rights and with the space and potential for growth that every person should have, in addition to its own cultural and historical variations.
“Before we can integrate a general moral code into an AI, we should surely be able to capture and represent all that complexity and contextuality in a language as specific as computational and mathematical language. And that is, right now, a challenge more human than technological,” says Joan Casas-Roma.
To meet this challenge, experts consider the involvement of professionals from all fields essential, since data is present in every sector. “It is no use saying that this is the exclusive competence of the fields of engineering or data science. The proper use of AI is the responsibility of professional staff in public administrations and private companies in any field, whether it is used on a large or small scale,” says the UOC professor. But the involvement of consumers and citizens is also needed.