The popularity of ChatGPT is an opportunity to discuss a problem that will mark many areas of life in the twenty-first century: the growing development of intelligent devices that autonomously carry out tasks previously performed only by human beings, intervening in contexts that are sensitive for their economic, social and cultural value.
Historically, discussing Artificial Intelligence (AI) has long ceased to be an area reserved for a certain professionalized elite. Starting from the 2000s, with the growing involvement of people in the technological and cultural spheres of the Internet, the topic began to circulate, fascinating or disturbing growing segments of the population.
Human hybridization on the web
The reason may lie in the fact that, at that time, we were beginning to wonder about the hybridization we were undergoing by grafting ourselves onto the digital technologies and practices of the web. In this merger, we sensed that an exchange was taking place between an "augmentation" of our possibilities for action and the delegation of some of our abilities to the algorithmic world.
Once we formed a single body with these artifacts, where did the ability to understand and communicate that we believed unique to human beings begin, and where did we end?
Would the growing intimacy produced by algorithms become the basis for machines capable of replacing us in more complex tasks requiring intelligence and sensitivity? Could we count on partners, robots or software, capable of helping us in physical tasks or in the analysis of highly complex phenomena?
If we look at searches for the term Artificial Intelligence on the Google search engine, with data starting from 2004, we can see a high degree of interest around those years, which then decreased over time, only to grow back and surge violently at the beginning of the current year. The timing, the first months of 2023, is certainly not accidental: the search key ChatGPT also rises to its highest peak at this juncture.
The arrival of ChatGPT to the general public
In fact ChatGPT, an NLP software (Natural Language Processing, where "natural", in the AI world, stands for human), was put on the net and launched to the general public with great fanfare by Microsoft in partnership with OpenAI, the company that created the generative model and that has now become an operational think tank of the IT giant. It has aroused universal wonder for its way of using natural language in the most varied tasks: building chatbots capable of understanding and responding to input in human language, completing and suggesting text, generating written content, translating automatically, analyzing sentiment, producing summaries, classifying texts; these functions are gradually being joined by others, such as the interpretation or creation of images and interpolations with voice (Wikipedia).
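To make the mode of use concrete, here is a minimal sketch of how one of the tasks listed above (summarization) is typically requested from the model through OpenAI's Python client as distributed in early 2023; the model name, prompt and key are illustrative placeholders, not a prescription.

```python
import openai  # OpenAI's Python client, as distributed in early 2023 (pre-1.0)

openai.api_key = "sk-..."  # placeholder: a real API key is required

# One of the tasks listed above, summarization, phrased as a chat exchange.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system", "content": "Summarize the user's text in one sentence."},
        {"role": "user", "content": "The text to be summarized goes here."},
    ],
)

print(response["choices"][0]["message"]["content"])
```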
In the wake of its popularity, we are witnessing a continuous proliferation of press articles that question this turning point and the repercussions that these abilities – to understand, weave dialogues and produce appropriate texts in various fields of knowledge and work – will have.
Wonder and criticism
Obviously, as often happens when something that was only theorized, or still seemed far away, is concretely realized, what prevails, in addition to wonder, are worries and difficulties in dealing with them, including criticism of certain ill-advised outputs found in the texts produced in response to certain questions: accusations of racism, fake news, stereotypical bias, invasion of privacy, and more general ones such as copyright violations.
In fact, as is known, current AI (machine learning) software is trained on enormous amounts of data from which to learn; in this case, 60% of it comes from web pages at large, 22% from posts on the social network Reddit, selected for their high-quality content, 16% from two different databases containing book publications, and 3% from Wikipedia texts.
The AI software therefore reuses the materials it has processed and introjects the prejudices it has somehow fed on, including the misunderstandings it authors given the intrinsic ambiguity of language. These are known and criticized phenomena, which the AI creators try to cope with by filtering "sensitive" themes in advance, or by working backwards with various artifices to correct the drifts, defined in jargon as "toxic language" or even "hallucinations".
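As a purely illustrative sketch of what the data mixture described above means in practice, the following lines sample training sources according to the article's rounded percentages; the real GPT-3 pipeline is, of course, far more elaborate.

```python
import random

# The rounded source weights reported above; random.choices normalizes them.
mixture = {
    "web pages": 60,
    "Reddit-selected posts": 22,
    "book corpora": 16,
    "Wikipedia": 3,
}

# Draw which source each of the next ten training documents comes from.
batch = random.choices(list(mixture), weights=list(mixture.values()), k=10)
print(batch)  # e.g. ['web pages', 'web pages', 'book corpora', ...]
```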
The deconstruction of the AI mythology
What is useful in this popular interest, and in the many articles published, is undoubtedly the deconstruction of the mythology that had been created around AI, understood as the strong idea of reproducing human intelligence, and even its consciousness, through computational processes capable of perfectly replicating its biological mechanisms: an idea we often find in movies.
Going into the merits of ChatGPT, it becomes clear that the AI that works is instead obtained by trying to replicate the results of an action we understand as intelligent, without worrying about how a human brain arrives at the same result: something about which, moreover, as neuroscience certifies, we know very little.
So-called weak, or reproductive, AI (reproductive of the results of an intelligent action) is therefore a work of engineering, of problem solving obtained through enormous amounts of data and large computational and storage resources. Like all engineering artifacts of a certain complexity, it is subject to scientific and commercial strategies, budget constraints, planning and operational capabilities, and social and cultural paradigms.
In short, creating complex AI algorithms requires building an organization and mobilizing enormous amounts of human work; in this sense, in its making there is little of the so-called autonomous learning and a great deal of care in guiding and monitoring the quality and quantity of the steps to accomplish.
Having said that, it is clear that a society interested in its own living conditions, as well as in its evolution, is obliged to evaluate all these design and product aspects, especially the more controversial ones, also because they shape entities that become players acting and participating in our society and culture; in the case of ChatGPT, as a dialogue partner useful for carrying out a multitude of tasks, often in very delicate fields such as knowledge and communication.
Artificial Intelligence or artificial communication?
This is basically the idea that the sociologist Elena Esposito, a scholar of complexity theories in the footsteps of Niklas Luhmann, put forward in a recent study on AI, warning us that
these programs are reproducing not intelligence but rather communicative competence. What makes algorithms socially relevant and useful is their ability to act as partners in communicative practices that produce and circulate information, independently of their intelligence. Could we say that machine-learning programs realize not an artificial intelligence but a kind of artificial communication, providing human beings with unforeseen and unpredictable information? Maybe our society as a whole becomes “smarter” not because it artificially reproduces intelligence, but because it creates a new form of communication using data in a different way. (Esposito, 2022).
In any case, the first consideration to make is that we find ourselves in social environments shaped to favor reciprocal communicative exchanges through transmissive and computational technologies, to which billions of people and things are individually harnessed, and in which we carry out more and more activities connected to the global network.
Under these conditions, the conception and design of new intelligent artifacts, i.e. machines capable of performing complex actions once conceivable only as the product of human agents, will proliferate, also because they are useful for expanding our knowledge through analyses and correlations identifiable in the ocean of generated data, something otherwise impossible to obtain given the quantity of combinations present in it.
The dissection of reality and the onslaught of AI products
From this point of view, our reality is being selectively but massively dissected by interests of every kind in order to develop various types of intelligent operator. Indeed, every intelligent machine can be represented in the abstract as constituted by an "agent" (the functional action in progress), an "environment" (the section of reality on which it acts), an "agent-environment relationship" (the system of information exchange between agent and environment), and the elaboration, on the basis of the information exchanged, of the functional responses to be implemented.
It should be clarified that these machines are able to act only in a closed context, operating rationally on numerical entities and therefore following mathematical logics that are either deterministic or, more likely given the richness of big data, inductive (statistical) in nature, with actions judged correct at the maximum probability obtainable from the number of "learned" cases.
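A minimal sketch of this agent-environment scheme may help: an agent observes a slice of its environment and chooses the action whose estimated probability of correctness is highest over its "learned" cases. All names and counts here are invented for illustration.

```python
# Toy agent-environment loop: the "environment" supplies an observation, and
# the "learned cases" stand in for statistics accumulated during training.
learned_cases = {
    "obstacle_ahead": {"turn": 42, "go_straight": 3},
    "path_clear": {"turn": 5, "go_straight": 61},
}

def act(observation: str):
    counts = learned_cases[observation]
    total = sum(counts.values())
    best = max(counts, key=counts.get)  # inductive choice: maximum probability
    return best, counts[best] / total

for obs in ("obstacle_ahead", "path_clear"):
    action, p = act(obs)
    print(f"{obs}: choose '{action}' (estimated correctness {p:.2f})")
```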
Big data (increasingly comprehensive information on many phenomena and activities) and the computational and storage power we have equipped ourselves with to support our digital lives are the now-common elements allowing these machines to become so sagacious.
They are modeled on computational architectures that recall and simulate human neural networks, but are organized in intricate (and exaggeratedly numerous) layers of nodes capable of switching on and off (1 and 0) and influencing one another; over innumerable training passes on the many peculiarities of the phenomena, these are finely weighted to obtain algorithms effective in providing "highly probable" answers.
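A toy numerical sketch of such a layer stack, far removed from any real model's scale, can make this concrete: each layer is a matrix of weights, and each node's activation is a weighted, squashed sum of the previous layer's nodes. All sizes and values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    # Squashes each node's value toward 0 ("off") or 1 ("on").
    return 1.0 / (1.0 + np.exp(-x))

# Three layers of weights: 8 inputs -> 16 hidden -> 16 hidden -> 2 outputs.
# In training these numbers would be finely adjusted pass after pass; here
# they are random, so the output is meaningless but the mechanics hold.
layers = [rng.normal(size=(8, 16)),
          rng.normal(size=(16, 16)),
          rng.normal(size=(16, 2))]

def forward(x):
    for W in layers:
        x = sigmoid(x @ W)  # each node influences every node of the next layer
    return x

print(forward(rng.normal(size=8)))  # two scores for "highly probable" answers
```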
In short,
on the basis of the data, the machine (having received from human beings a model or a set of starting logical-mathematical models) puts into play everything it can in terms of computational capacity and memory to solve specific problems and formulate decisions in a way that is increasingly independent of human intervention (D'Acquisto, 2021).
The structural complexity of AI models
The counterpoint to these powerful modeling strategies is their constructive complexity: for ChatGPT version 3 we are talking about an algorithmic model with "175 billion machine learning parameters" (Wikipedia, 2023). The models that come out of these procedures, unlike algorithms traditionally programmed in symbolic languages, are not easily understood in their actions. They derive from an approach defined as "subsymbolic" and are cryptic even to researchers in the field; in the jargon, they are said to have "explainability" problems.
Subsymbolic approaches to AI took inspiration from neuroscience and sought to capture the sometimes-unconscious thought processes underlying what some have called fast perception, such as recognizing faces or identifying spoken words. Subsymbolic AI programs do not contain the kind of human-understandable language … instead, a subsymbolic program is essentially a stack of equations — a thicket of often hard-to-interpret operations on numbers. As I’ll explain shortly, such systems are designed to learn from data how to perform a task. (Mitchell, 2020).
When AI models become highly complex, as in the case of ChatGPT, there are often problems not only in describing their functional steps (explaining to someone what is being done), but also cases in which the machines correlate aspects unknown to us, as can happen in the automatic recognition of image contents, where the algorithms frequently run into errors or can even be deliberately deceived.
This is an example of a common phenomenon seen in machine learning. The machine learns what it observes in the data rather than what you (the human) might observe. If there are statistical associations in the training data, even if irrelevant to the task at hand, the machine will happily learn those instead of what you wanted it to learn. If the machine is tested on new data with the same statistical associations, it will appear to have successfully learned to solve the task. However, the machine can fail unexpectedly, as Will’s network did on images of animals without a blurry background [Will is an AI expert engaged in an experiment of his own, ed.]. In machine-learning jargon, Will’s network “overfitted” to its specific training set, and thus can’t do a good job of applying what it learned to images that differ from those it was trained on (id.).
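The phenomenon Mitchell describes can be reproduced in a few lines with synthetic data: a classifier trained where a spurious feature (the "blurry background") tracks the label will score well in training and collapse when that association is broken. A minimal sketch, with invented numbers:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Training data: the genuine feature is weak, the spurious one tracks the label.
y = rng.integers(0, 2, n)
genuine = y + rng.normal(0, 2.0, n)   # weak, noisy signal
spurious = y + rng.normal(0, 0.1, n)  # near-perfect correlate ("blurry background")
X_train = np.column_stack([genuine, spurious])
clf = LogisticRegression().fit(X_train, y)

# Test data: the spurious association is broken, the genuine signal remains.
y_test = rng.integers(0, 2, n)
genuine_t = y_test + rng.normal(0, 2.0, n)
spurious_t = rng.normal(0.5, 0.1, n)  # no longer tracks the label
X_test = np.column_stack([genuine_t, spurious_t])

print("train accuracy:", clf.score(X_train, y))      # near 1.0: the shortcut works
print("test accuracy:", clf.score(X_test, y_test))   # falls toward chance
```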
Against the new regimes of truth
The popularity of the AI theme must therefore prompt us to critically examine the ways in which technological products materialize (strategies, techniques, actors, interests) and how they are "prejudicially" or "unknowingly" implemented; patents, for example, should not serve as excuses for denying deep analysis, and open-source logics should be the rule.
Ultimately, we need to undermine (and here social and cultural research is a fundamental help) the uncritical ideologies or beliefs into which it is easy to fall, attracted by the need to deal effectively, thanks to these tools, with the complexity of our problems.
In fact, the French philosopher Éric Sadin acutely notes the risk of falling into new "regimes of truth" (he speaks of an algorithmic aletheia, aletheia being the Greek for "unveiling", "truth") through convincing ourselves that only these AI machines have the power to grasp reality with extreme precision (Sadin, 2019).
It should be noted that, with the growing centrality of algorithmic artifacts in many social processes, various neologisms have lately been coined to grasp their impacts: for example algocracy, for their influence on the formation of political opinions, or algorethics, for the ethical issues they raise.
At this point a question arises. Faced with the many well-documented dangers, for example the possibility of systematically creating convincing, large-scale circulating news and documents that are unfortunately based on deceptive and conspiracy theories (Erler, 2023), why did Microsoft not discuss the imminent launch of ChatGPT with the authorities who have long been working on laws to regulate the release of AI devices (see the EU Artificial Intelligence Act, 2021)?
Among other things, ChatGPT is a general-purpose system that can be used for good but also for malevolent ends, infiltrating the endless streams of the delicate communication sphere; this, too, explains its current popularity.
The struggle of high-tech companies to excel in the market
Indeed, Microsoft's strategy and behavior are in line with what has been seen in the last three decades of internet history, when the winning high-tech companies, in order to gain the economies of scale and network externalities crucial to dominating the internet, have always tried to force the launch of innovative products to attract an audience.
The embedding of AI products into user practices, and, in parallel, into the business chains of other products, becomes a precious reservoir that is hard for the competition to drain.
In the case of ChatGPT there is also the chance to put a competitor in the search engine sector in difficulty: Microsoft's Bing.com has an insignificant market share, 3% against Google's 95%, and an integration functioning as a friendly interface could encourage people to invert the trend.
As can be seen, it is a tempting goal, to be pursued even at the cost of risking a few fines in some regions of the world; which is more or less what has happened, in a few cases and with negligible damage compared to the assured profits, when practices that turned out to be harmful to people and communities were punished ex post.
The echo of this battle (even Google, reluctant to take the field due to well-known reliability problems, finally showed up on the scene with its ChatGPT emulator, called Bard) is now before everyone's eyes, with all the problems of deploying devices capable of spreading disinformation on a large scale and, potentially, in a systemic way (Vincent, 2023).
In a fundamental text for understanding the strategies and tactics these companies have carried out over the years to achieve their business objectives at any cost, the scholar Shoshana Zuboff, author of The Age of Surveillance Capitalism, identifies the steps stubbornly put into practice even in the face of evidently abusive and predatory practices (Zuboff, 2019).
Incursion, habituation, adaptation, redirection
At the beginning, the company makes an incursion, exploring the realization of services in undefended fields; faced with accusations of some kind, it tries to attribute the errors to someone within the project team.
In the meantime, taking advantage of the inevitable bureaucratic delays of the public authorities, the focus shifts to habituation: continuing to bring users over to one's side. When changes are eventually forced, in the adaptation phase, they turn out to be mostly superficial.
If criticism persists, we finally arrive at the redirection phase, which provides for an encirclement of the target, approached through the work of other entities with lesser-known profiles, often small companies acquired ad hoc.
One factor that has certainly fostered this trend is the legal protection that companies born on the net have enjoyed, shielding them from the traditional industries which, to defend their businesses from the new competition, could have suffocated them at birth. The fact that Internet Service Providers do not have to answer for the content present on the net, because it is generated by the activities of the people using their services, has made it possible to create innovation, as well as to extend and stabilize processes and infrastructures in the online world.
AI in the ambiguity of responsibility
However, faced with the extent and relevance of online activities and their social effects, the various public authorities have attempted to counteract this sort of de-responsibilizing mentality, though only on a case-by-case basis.
For their part, internet companies have tried to incorporate these demands by implementing corrective measures, mostly algorithmic mechanisms that automatically mitigate the effects of reported abuses, while remaining willing to intervene manually afterwards.
It can be said that, so far, the most systematic legal action is limited to privacy law, the so-called GDPR (General Data Protection Regulation), which tries to curb abuses in the collection and use of personal data and is confined, in practice, to the European region.
At this point, let us consider the depth and extent of the information, processes and knowledge already deposited in the network, and which we continue to feed into it, as well as the dense web of correlations and informational and operational exchanges activated with the other internet, that of things.
Is it reasonable to leave this immense human and social heritage at the mercy of whatever commercial opportunism can be achieved by deploying intelligent machines or software free to move ambiguously in terms of responsibility?
As the philosopher Luciano Floridi, one of the leading experts in ethics applied to AI, suggests, these new kinds of devices are in all respects new forms of agency within our communities, and we must, increasingly and inevitably, coexist with them.
At the moment, however, theirs is action detached from intentionality (in terms of purpose and awareness), unlike what happens, and what we expect must happen, when human beings act.
The social consequences we are facing are therefore enormous. This is why, as in the medical field, where biotechnology has opened up a wide field of possibilities, not everything that technology allows becomes socially and morally permissible.
Debating and legislating on AI developments, as is done in medicine with bioethics, however complex it may be to elaborate a global governance, is therefore an unavoidable priority if we wish to guide, and not merely suffer, its processes (Floridi, 2021).
References
Wikipedia, 2023, "GPT-3".
D’Acquisto, G., 2021, Intelligenza artificiale. Elementi, Torino, Giappichelli.
Erler, D., 2023, “ChatGpt è sempre più potente. Anche nella capacità di fare disinformazione”, editorialedomani.it.
Esposito, E., 2022, Artificial Communication: How Algorithms Produce Social Intelligence, Cambridge (MA), MIT Press.
Floridi, L., 2021, Ethics, Governance, and Policies in Artificial Intelligence, Berlin, Springer.
Mitchell, M., 2020, Artificial Intelligence: A Guide for Thinking Humans, London, Pelican.
Sadin, E., 2019, Critica della ragione artificiale. Una difesa dell’umanità, Roma, Luiss University Press.
Vincent, J., 2023, “Google and Microsoft’s chatbots are already citing one another in a misinformation shitshow”, theverge.com.
Zuboff, S., 2019, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, New York, PublicAffairs.