Why AI Should Be Our Partner, Not Our Overlord

AI excels at data analysis but should not replace human decision-making. Developed mostly by private companies, AI carries risks of bias and job displacement. We should use AI as a tool, prioritizing ethics, transparency, and human control.

Data overload? AI can help us make sense of the information flood.

Artificial intelligence systems (AIS) should not replace human cognition and must be supervised by and subordinated to people, argued Jorge Enrique Linares Salgado, academic and former director of the Faculty of Philosophy and Letters (FFyL) at UNAM.

In this area, individuals must always act and decide, he added at the ICT Seminar, organized by the General Directorate of Information and Communication Technologies, where he addressed the topic "The Challenges of Ethics in Artificial Intelligence."

The academic, a member of the FFyL faculty, said that AIS already surpass human intelligence in processing massive data, or big data, because they do so almost instantaneously. "No person can process information as quickly as computers already could, and now those systems do it with algorithms that can classify, order, compare and, in that sense, begin to make decisions."

An expert in the ethics of science and technology and in the philosophy of technology, he explained that AI, understood as simulated human cognition, replicates through technological and digital means the intelligent behaviors and capabilities that human beings normally possess.

These systems can be used to improve all kinds of services, studies, calculations, planning and organization with digital tools; to promote a new model of social services through databases and a public assistance network; to generate information of interest for investigations and monitoring in public or private settings; or to build databases for developing commercial systems from user information.

Linares Salgado pointed out that we must not lose sight of the fact that AI systems are commercial products, even capital goods, operating in a capitalist market. Most developments and innovations come from private companies, not public institutions.

The university researcher reiterated that technological and industrial creation has followed a logic of unstoppable growth and acceleration since the 20th century, which prevents people from carefully evaluating the risks it produces. "They become invisible and, at times, difficult to detect until they turn into harm."

Among the challenges and risks of artificial intelligence are that machines and AIS can restrict people's autonomy, affect their decision-making and reasoning capacity, or influence politics and decision-making by using discriminatory biases in their algorithms.

They can likewise obstruct fundamental rights such as privacy and intimacy; displace human workers in automatable tasks; exacerbate social and economic inequalities; and damage the climate and environment through disproportionate use of energy and water, as well as greater pollution.

But the main point, the specialist argued, is that, even in the future, AIS will not be as intelligent, adaptable, sensitive, empathetic and deliberative as the majority of human beings.

He cited the Organization for Economic Cooperation and Development's estimate that 25 percent of conventional jobs will be replaced by AI systems: new jobs will be created, but most will be subordinate to the maintenance of AIS. It seems certain that they will take over the repetitive, dangerous and basic cognitive tasks that most people perform today.

These systems have intentional or unintended ethical-political consequences: "they do not have a conscience or bad faith, as humans do, but they can fail." The errors and damage caused by AIS could be serious and far-reaching, which is why transparency, responsibility, and accountability are required of those in charge of their design, construction, and operation.

AI, Linares Salgado stressed, should never make crucial decisions (of life or death) or decisions of great social and environmental impact in place of human beings, nor should it cancel or circumvent democratic debate, deliberation, and citizen participation in decision-making.

Among the ethical principles for this tool, "all equally indispensable," are the protection of privacy and intimacy; responsibility and solidarity; evaluation and democratic participation; justice and equity; inclusion of diversity; preservation of human responsibility; and sustainable development, in accordance with the Montreal Declaration for the Responsible Development of AI.