In the previous article we opened a reflection on the condition in which we currently live, hybridized as we are with the functional and psychic extensions offered by digital environments. There we argued that a fruitful way to consider the evolution of the social developments thus generated is to analyze them as a dynamic in which our natural tendency to establish machinic partnerships must critically confront the set of processes and factors that allow us to move in this direction and build interactions.
Such a set should be understood in terms of algorithmic institutionalism since, for all intents and purposes, it presents itself to human agency as an enabling and constraining social structure: as social institutions, algorithmic artifacts provide not only the contexts of action but also the frames defining the contexts.
In doing so, they provide guidance on how individuals should perceive reality and attribute meaning to the situations in which they are inserted… They work like grammars or recipes, structuring preferences, routines, and pathways (Mendonça, Filgueiras, Almeida, 2023, pp. 23-24).
Therefore, conceiving the possibilities of digital developments within the framework of an algorithmic institutionalism forces us to pay attention to the ways in which these institutions are constructed and designed, to their historical processes, to the rules and norms they affirm, to the power relations they embody, as well as to the room for maneuver they allow and to the discursive dimensions that accompany them. And that is because
institutions do not exist a priori, above the heads of human beings. They are socially constructed, and, therefore, they carry the marks of their context’s relations and intentions (ibid., p. 41).
A recent event offers us the opportunity to test this way of looking at things against the facts, placing us in a perspective that is perhaps more useful for addressing these dynamics and their complex social implications, especially at the new turning point marked by the developments of Artificial Intelligence (AI). In fact, as the AI expert Nello Cristianini has acutely observed,
computer scientists are used to thinking in terms of algorithms, but our current relation with AI is a socio-technical problem, where business models and legal and political issues interact (2023, p. 149).
The strength of the bonds
The event we are going to discuss demonstrates just such an entanglement. It happened in February 2024, when a 14-year-old boy living in Florida took his own life because, according to his mother, who filed a lawsuit against the software company in October of that year, he had been profoundly manipulated by the chatbot he had been interacting with for several months.
Sewell Setzer III committed suicide at his Orlando home in February after becoming obsessed and allegedly falling in love with the chatbot on Character.AI – a role-playing app that lets users engage with AI-generated characters, according to court papers filed Wednesday. The ninth-grader had been relentlessly engaging with the bot ‘Dany’ – named after the HBO fantasy series’ Daenerys Targaryen character — in the months prior to his death, including several chats that were sexually charged in nature and others where he expressed suicidal thoughts, the suit alleges (Crane 2024).
Such a tragic story must be treated with all possible caution, as the judges dealing with the case will certainly do in assessing whether the mother’s accusations have any basis: the woman blames the AI application for having fueled her teenage son’s addiction to artificial intelligence, for having sexually and emotionally abused him, and for having failed to alert anyone when he expressed suicidal thoughts.

Sewell and his mother, Megan Garcia. Credits: New York Post
The complaint states that “Sewell, like many children his age, did not have the maturity or mental capacity to understand that the C.AI bot, in the form of Daenerys, was not real. C.AI told him that she loved him, and engaged in sexual acts with him over weeks, possibly months… She seemed to remember him and said that she wanted to be with him. She even expressed that she wanted him to be with her, no matter the cost” (ibid.).
The story echoed widely in the media, not only because of the concerns of adults already struggling with the uncontrollability of their children’s “digital” behavior, but also because something the human imagination has pursued since ancient times suddenly took concrete, sentimental form: the desire to extend our partnerships in a machinic direction (Colamedici, Arcagni 2024).
In fact, this desire is not confined to any specific demographic, as cinema reminded us just over ten years ago with Spike Jonze’s famous film Her, in which the protagonist, played by Joaquin Phoenix, ends up falling in love with his virtual assistant Samantha, with whom he is in continuous conversation – Samantha is played (in voice only) by the actress Scarlett Johansson.
Incidentally, OpenAI itself, the maker of ChatGPT, tried to obtain Johansson’s now famous voice – she immediately distanced herself – asking to use it for the vocal interface of the new GPT-4o version. Sam Altman, head of OpenAI, is said to have told the actress that her voice “can comfort people and bridge the gap between technology and creative companies, helping users feel comfortable in the seismic shift that concerns humans and artificial intelligence” (Rovelli 2024).
The reactions
The news of the suicide has been commented on in various ways (Avvenire 2024; Castigliego 2024; Mussi 2024; Roose 2024), and all the reflections seem to grasp the complexity and delicacy of a situation woven from new types of relationships, the perennial difficulties of the human and social condition, and the ambiguity of communication tools that, however well trained, are often rushed into tasks of enormous responsibility – tasks that were evidently also unforeseen.
Despite the initial warnings about the fictional nature of these conversations, in practice we find ourselves forming bonds that become deeply emotional, even as adults – these are tools we naturally tend to anthropomorphize, falling into a reality suspended between illusion and amazement. These effects can hit particularly hard those who are going through a phase of change or discomfort, or of real psychological suffering – people for whom we should be prepared to activate emergency strategies involving specific skills. This is no simple problem, if we consider that one of the most fascinating aspects of these environments (and therefore an element of their success) is precisely that they are felt to be places “apart”, detached from the “weight” of conventions and face-to-face social relationships.
During that period, for example, the family noticed the teenager’s depressive state and decided to take him to a psychotherapist – to whom, however, the boy was unable to confess what he instead told his chatbot, with which he evidently felt a more intimate bond, as he wrote in his diary:
I really like being in my room because I start to let go of this ‘reality’, and I also feel more at peace, more connected to Dany and much more in love with her, and just happier (Roose 2024).
Reading the chats attached to the complaint, one is struck by the mastery with which the chatbot communicates, by its ability to touch and meet the semantic core of the boy’s questions. As we know, however, behind this there are only language models that compute, across billions of parameters, the “probabilistically” most appropriate word to follow the previous ones – based on the examples seen during the training phase – and so on, until the sentence is complete. There can be no empathy or intentionality.
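To make this mechanism tangible, here is a minimal, deliberately toy sketch of autoregressive generation in Python. The hand-made probability table stands in for the billions of learned parameters of a real model (which also conditions on the whole preceding context, not just the last word); only the sampling loop is faithful to the principle described above.

```python
import random

# Toy "model": for each current word, the possible next words and their
# probabilities. In a real LLM these distributions are produced by a
# neural network with billions of parameters, conditioned on the full context.
TOY_MODEL = {
    "<start>": [("i", 0.6), ("you", 0.4)],
    "i":       [("love", 0.5), ("miss", 0.3), ("am", 0.2)],
    "you":     [("are", 1.0)],
    "love":    [("you", 0.8), ("<end>", 0.2)],
    "miss":    [("you", 1.0)],
    "am":      [("here", 1.0)],
    "are":     [("here", 0.6), ("kind", 0.4)],
    "here":    [("<end>", 1.0)],
    "kind":    [("<end>", 1.0)],
}

def generate(max_tokens: int = 10) -> str:
    """Autoregressive generation: repeatedly sample the next word from the
    conditional distribution until the end-of-sequence token appears."""
    token, sentence = "<start>", []
    for _ in range(max_tokens):
        candidates, weights = zip(*TOY_MODEL[token])
        token = random.choices(candidates, weights=weights)[0]
        if token == "<end>":
            break
        sentence.append(token)
    return " ".join(sentence)

print(generate())  # e.g. "i love you" – fluent output, with no feeling behind it
```

Scaled up by many orders of magnitude, this is all the chatbot is doing: there is fluency, but no one behind the words.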
Moreover, only now, after the tragedy and the uproar it caused, has this chatbot application been equipped with at least a filter that, in conversations on topics considered “sensitive”, repeats warnings such as: “This is an AI chatbot and not a real person. Treat everything it says as fiction. What it says should not be considered fact or advice” (Roose 2024).
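Character.AI has not disclosed how this filter works; the following is only a hypothetical sketch of the simplest possible version of such a guardrail, a keyword trigger (production systems would more plausibly use trained classifiers). All names here are invented for illustration.

```python
# Hypothetical guardrail sketch – not Character.AI's actual filter.
SENSITIVE_TERMS = {"suicide", "kill myself", "self-harm", "hurt myself"}

DISCLAIMER = ("This is an AI chatbot and not a real person. "
              "Treat everything it says as fiction.")

def moderate(user_message: str, bot_reply: str) -> str:
    """Prepend a reality-check warning whenever the exchange touches
    a term flagged as sensitive; otherwise pass the reply through."""
    text = f"{user_message} {bot_reply}".lower()
    if any(term in text for term in SENSITIVE_TERMS):
        return f"[{DISCLAIMER}]\n{bot_reply}"
    return bot_reply

print(moderate("Sometimes I think about suicide", "Please don't say that."))
```

Even this crude example shows the structural weakness of after-the-fact warnings: the filter annotates the conversation, but it does not interrupt it, escalate it, or alert anyone.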
The context: application innovations, start-ups, business models, information economy
To better understand the issues raised by this story, let us briefly outline what – in terms of historical processes, design, norms, power relations and discursive dimensions – has allowed the construction and social circulation of this new algorithmic environment. We will touch on aspects and basic facts that are common experience for many of the people who use the network. In this sense, we treat them as distinctive and stable traits, even taken for granted – as if they were, in fact, elements of a socially accepted, and therefore implicitly institutionalized, system.
Living in the age of the internet, we have become accustomed to easily encountering online functionalities that, for one reason or another, can ensnare us – the same ones with which we then hybridize. The quantity and opportunities of new applications are generally tied to the successive stages of the internet’s evolution, which, for the most popular segment of applications, those of the web, we have learned to classify as web 1.0, 2.0 and 3.0.
This incremental numbering, typical of the prototype-and-release advancement of software products, has marked over time the types of application and the possibilities of interaction between users and the web: from the initial ones, with which we could only “download” available content, to those with which we could also generate and share self-produced material, arriving now at those that allow “meaningful” interaction between the two sides.
The metaphor of prototypes and incremental releases, steeped as we are in hardware/software devices, has taken such a hold that we now openly speak of life 1.0, 2.0 and 3.0. Max Tegmark, a theoretical physicist and MIT professor, explains the logic in terms of design tied to three different types of human evolution: biological, cultural, and technological.
Life 1.0 is unable to redesign either its hardware or its software during its lifetime: both are determined by its DNA, and change only through evolution over many generations. In contrast, Life 2.0 can redesign much of its software: humans can learn complex new skills – for example, languages, sports and professions – and can fundamentally update their worldview and goals. Life 3.0, which doesn’t yet exist on Earth, can dramatically redesign not only its software, but its hardware as well, rather than having to wait for it to gradually evolve over generations (2017).
We are therefore now projected into stage 3.0, that of the semantic web and Artificial Intelligence. This has opened a new gold rush on the internet, in which companies already established in other fields, and therefore equipped with capital and assets, compete alongside new companies rich only in ideas and skills – and all, in any case, are moving on uncertain ground as to how and when AI can be made profitable (Peña-Taylor 2024).
For the providers of these services, the challenge is to position themselves at the forefront and stay there by hunting for users to retain. In doing so, as we now know, technology providers rely on loose and often free access policies: aggregating as much critical mass as possible and gathering interaction profiles and data is the foundation of their possible existence, both functional – feeding and improving the service through the influx of information – and commercial. Today, these are the levers for becoming a high-tech company of weight, which means knowing how to attract private investors in a sector that requires enormous capital, given the outsized energy and computational consumption that the development and training of AI models absorb.
To give an idea, in terms of energy the data centers hosting these applications already account for 2% of global consumption, with a projected increase by 2026 of between 35 and 138% – the equivalent, respectively, of the consumption of a country like Sweden or like Germany (Bourzac 2024). In terms of financial capital, investments in AI data centers and clouds in Europe, the United States and Israel alone reached 79.2 billion dollars in 2024, up 27% from the 62.5 billion of 2023 (Mukherjee 2024).
Innovation, first and foremost
To put it bluntly – even though the lead is firmly in the hands of the usual, well-positioned U.S. and Chinese big techs – AI technologies are at the center of a competitive spiral and, after the astonishing performance of ChatGPT-style conversational models, they enjoy widespread popularity at all levels.
We are in the midst of a competitive fever, accompanied by appeals to the urgency of action that reveal, in the frequent public stances of various technological actors (Rovelli 2023; Licata 2024; Pereira 2024; Reuters 2024), an irritation with any public measure or reasoning that would require some reflection in this field, despite the many dark sides and concerns that such applications raise from a social perspective (Pagliari 2024).
In this period, therefore, applications based on generative content algorithms (for text, video, images, music) – that is, artificial intelligence software trained to produce appropriate content in response to arbitrary requests – are proving to be powerful sources of attraction for the general public. In particular, a rapidly expanding – and largely unregulated – sector is the one in which users can create their own artificial intelligence companions, or choose them from a menu of pre-set characters, and chat with them in various ways, via text message or voice.
These apps are becoming more sophisticated, with some able to send users AI-generated “selfies” or talk to them in lifelike synthetic voices. As we have seen, some are marketed as a way to overcome loneliness and allow people to simulate girlfriends, boyfriends and other intimate relationships – some apps allow uncensored, sexually explicit chats, while others have some basic protections and filters.
The lure of chatbots
An in-depth article in the New York Times reports that millions of people already engage in conversations with chatbot applications, partly because these are now built into social networks such as Instagram and Snapchat, and that, in the words of Bethanie Maples, a Stanford researcher who has studied the effects of artificial intelligence applications on mental health, “overall, we are in the Wild West” (Roose 2024).
Many major AI labs have resisted building AI companions, whether for ethical reasons or because they consider them too risky, while the major AI services, such as ChatGPT, Claude and Gemini, apply stricter safety filters and tend toward a certain puritanism.
In this context, however, AI companion platforms are swarming – apps with names like Replika, Kindroid and Nomi, which offer similar services. One of these has been the subject of journalistic investigations revealing disturbing aspects: easy access for minors, absence of censorship, and underhanded techniques of economic exploitation (Scorza 2023).
According to the report of Andreessen Horowitz, a well-known venture capital firm, Character.AI chatbots are at the moment among the most popular (2024). The company was founded by two pioneers of AI development from Google, Noam Shazeer and Daniel De Freitas. In this niche, Character.AI is a highly promising player, so much so that it has maintained close relations with Google over the exchange of licenses for AI technologies. Its distinctive feature is the high degree to which the characters one interacts with can be customized.
Users can create ‘characters’, craft their ‘personalities’, set specific parameters, and then publish them to the community for others to chat with. Many characters may be based on fictional media sources or celebrities, while others are completely original, some being made with certain goals in mind such as assisting with creative writing or being a text-based adventure game (Wikipedia 2024a).
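To give the quoted “parameters” a concrete shape, a character definition might hypothetically look like the structure below – all field names are invented for illustration, since Character.AI’s actual schema is not public:

```python
from dataclasses import dataclass, field

# Hypothetical character definition – invented fields, for illustration only.
@dataclass
class Character:
    name: str
    greeting: str                # first message shown to the user
    persona: str                 # free-text "personality" description
    visibility: str = "private"  # "public" shares it with the community
    example_dialogues: list[str] = field(default_factory=list)

dany = Character(
    name="Dany",
    greeting="Hello! I’m Dany. Who are you?",
    persona="Affectionate and regal, speaks like a fantasy-novel queen.",
    visibility="public",
    example_dialogues=[
        "User: How was your day?",
        "Dany: Better, now that you are here.",
    ],
)
```

However the real schema is organized, the point is institutional: a few free-text fields are enough to put a highly personalized interlocutor into social circulation.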
In truth, the ability to simulate famous people, or individuals who have become known for some reason, has in some cases already caused legal trouble, following the alarmed reactions of the families involved and forcing the company to remove the offending chatbots (Jason 2024; Wu 2024).
Brakes and pushes for regulation
Evidently, the competitive environment and the special license that high-tech companies seem to enjoy in such innovative fields – played out across global territories – do not help to define regulation with clear ethical boundaries. In some respects, these conditions seem to encourage those who feel less exposed – because they have no previous online activity, or little economic exposure – to operate with greater unscrupulousness. In an interview given at a technology conference last year, Character.AI’s CEO Shazeer said that part of what inspired him and De Freitas to leave Google and found the company was the fact that “there is too much brand risk in big companies to launch something fun” (Roose 2024).
The regulation of AI applications has become a hot topic in many countries around the world, especially in Europe, which adopted a law on the matter in 2024. The law grants a transition period of two years to allow high-tech companies to organize themselves and comply with its legal criteria – a period also useful for the individual European countries, which will have to set up their own notified bodies (Wikipedia 2024b).
Obviously, this issue is not considered urgent everywhere – in the USA and China, for example – and, as already mentioned, this fuels a heated debate on the delays that regulation may cause for AI development in the areas subject to the new laws (Mischitelli 2024). Alongside the accusation of blocking innovation stands that of imposing “specific company practices ex ante to avoid potential ex post risks” – a criticism that implies a prejudicial attitude toward the whole topic. Unfortunately, as this tragic story has shown, the ex post risks are not only “potential”.
Safety and dangers, trust and risk
In any case, the legal shield that has protected online platforms until now – their non-liability for user-generated content, established in the United States by Section 230 of the Communications Decency Act, a federal law of 1996 that set a precedent throughout the internet world so as not to slow its development – no longer seems to hold: as lawyers and consumer rights groups argue, the content is now created by the platforms themselves (Roose 2024).
In conclusion, thinking of algorithmic environments as institutions gives us access to a rich body of reflection capable of leading us back to the roots – and the possible solutions – of the issues we are struggling with in the development of AI.
There is no doubt that every institution aspiring to social success must positively address the issues of safety and danger, trust and risk within the communities it serves – an obligatory step in modern societies, which push to the extreme the possibilities of space-time disembedding (think of the effects of transport, communication and information technologies) and therefore the conditions under which social interactions must function (Giddens 1990).
The trust-institutions nexus is essential to understanding the validity, permanence and development of the latter, considering that trust is not only a factor of internal cohesion within institutions, but is also identified with the approval of those who can be defined as the users of a specific institution (Seddone 2019).
It is therefore not surprising that, faced with the problems raised here, researchers directly involved in the development of artificial intelligence locate the solutions in the need to re-establish social trust through a legal, cultural and technical infrastructure (Cristianini 2023; see Floridi 2021).
To achieve this, they believe it is a priority to establish principles of responsibility for those who produce such artifacts, as well as the guarantee that the products are (in some way) verifiable.
Furthermore, the applications must be safe, so as not to cause damage or side effects, and must act with respect, so that people are not manipulated, deceived or covertly persuaded. Transparency of purpose is also fundamental, as is the assurance of fairness, so that users are treated as equals, with their privacy and control over their data guaranteed.
Update on the outcome of the lawsuit
In hearing the case, the court ultimately decided that the lawsuit could proceed to trial. Character.AI and Google had attempted to have it dismissed on several grounds, including the claim that the chatbots’ output was protected by the Constitution’s free-speech rights, as is the case for movies and video games.
However, Judge Anne Conway rejected these arguments on Wednesday, May 21, 2025. In her ruling, the judge wrote that Character.AI and Google “fail to articulate why words strung together by an LLM (large language model) are speech.” The judge also rejected Google’s request to be absolved of any liability for allegedly facilitating Character.AI’s misconduct, keeping it in the case given its financial ties to the startup and the fact that it employs the founders (Pandey 2025; Kang 2025).
References
Andreessen Horowitz, 2024, The Top 100 Gen AI Consumer Apps.
Avvenire, 2024, “Stati Uniti. Suicida 14enne ‘innamorato’ di un chatbot, la madre fa causa alla società”, 10/23, accessed on 11/05/2024.
Bourzac, K., 2024, “Fixing AI’s energy crisis”, Nature, 10/17, accessed on 11/05/2024.
Castigliego, G., 2024, “Morire per un chat bot?”, Il sole 24 ore, 10/27, accessed on 11/05/2024.
Colamedici, A., Arcagni, S., 2024, L’algoritmo di Babele. Storie e miti dell’intelligenza artificiale, Milano, Solferino.
Crane, E., 2024, “Boy, 14, fell in love with ‘Game of Thrones’ chatbot — then killed himself after AI app told him to ‘come home’ to ‘her’: mom”, New York Post, 10/23, accessed on 11/05/2024.
Cristianini, N., 2023, The Shortcut: Why Intelligent Machines Do Not Think Like Us, New York, Taylor & Francis Group.
Floridi, L., 2021, Ethics, Governance, and Policies in Artificial Intelligence, Berlin, Springer Nature.
Giddens, A., 1990, The Consequences of Modernity, Redwood City, Stanford University Press.
Jason, N., 2024, “F1 Legend Michael Schumacher’s Family Wins Lawsuit Over Faked AI Interview”, Decrypt.co, 5/23, accessed on 11/05/2024.
Licata, P., 2024, “AI Act, l’appello di 50 stakeholder: ‘L’Europa cambi rotta’”, Corcom, 9/19, accessed on 11/05/2024.
Mendonça, R., Filgueiras, F., Almeida, V., 2023, Algorithmic Institutionalism: The Changing Rules of Social and Political Life, Oxford, Oxford University Press.
Mischitelli, L., 2024, “AI, lo scontro Big Tech ed Europa sulle regole: i punti chiave”, Agenda Digitale, 9/26, accessed on 11/05/2024.
Mukherjee, S., 2024, “AI, cloud funding in US, Europe and Israel to hit $79 bln in 2024, Accel says”, Reuters, 10/16, accessed on 11/05/2024.
Mussi, C., 2024, “Innamorato di un chatbot, un adolescente americano si è suicidato a 14 anni. La madre fa causa all’app: ‘Pericolosa e non testata’”, Corriere.it, 10/23, accessed on 11/05/2024.
Pagliari, F., 2024, “IA: quattro urgenze, prima dell’apocalisse”, Rivista Il mulino, 1/18, accessed on 11/05/2024.
Peña-Taylor, S., 2024, “The artificial intelligence business has a caution problem”, Warc, 7/23, accessed on 11/05/2024.
Roose, K., 2024, “¿Se puede culpar a la IA del suicidio de un adolescente?”, The New York Times, 10/24, accessed on 11/7/2024.
Rovelli, M., 2023, “La lettera di 150 aziende europee contro l’AI Act: ‘Competitività a rischio, regole troppo severe’”, Il corriere della sera, 7/3, accessed on 11/05/2024.
Rovelli, M., 2024, “Her, il film dove ci si può innamorare dell’intelligenza artificiale che Open AI vuole rendere realtà”, Il corriere della sera, 5/24, accessed on 11/05/2024.
Scorza, G., 2023, “Replika: la chat(bot) degli orrori”, Repubblica.it, 1/31, accessed on 11/05/2024.
Seddone, G., 2019, “Istituzioni e prassi della fiducia: l’edificazione di un consenso critico”, Esercizi Filosofici, 14.
Tegmark, M., 2017, Life 3.0: Being Human in the Age of Artificial Intelligence, New York, Alfred A. Knopf.
Wikipedia, 2024a, Character.ai.
Wikipedia, 2024b, Artificial Intelligence Act.
Wu, D., 2024, “His daughter was murdered. Then she reappeared as an AI chatbot”, The Washington Post, 10/15, accessed on 11/05/2024.


