Giovanni Gardinale is an eighteen-year-old student who is about to complete a double degree in Science and Philosophy at the Sorbonne in Paris. He is investigating reality in order to define the mechanisms of falsifiability and reproducibility. He is also pursuing a poetry project and, in the future, he would like to regulate Artificial Intelligence from a philosophical point of view, that is, to give robots an ethics.
Valeria Coratella: What do you think has been robotics’ greatest achievement in today’s society?
I think that, to date, the most amazing achievement of robotics is twofold: on the one hand, artificial intelligence has managed to enter the deepest mechanisms behind the management of power in our global society; on the other hand, the vast majority of us still have an image of robotics shaped by works of fiction and the more “user friendly” aspects of the complex world of non-human intelligence.
For example, a large percentage of the buying and selling that takes place on financial markets is handled by trading algorithms, in some cases without human intervention. I should say that these are relatively simple devices, optimised to perform a single function, yet their competence and speed of action are enough to exceed the limits of human understanding. There are countless fields where something similar is happening – systems that are capable of self-perfecting without human supervision, and are so successful that we are gradually becoming dependent on their smooth functioning. Yet this is not what comes to mind when we think of the dangers or benefits of artificial intelligence, because, to quote the AI pioneer John McCarthy: “As soon as it works, no one calls it AI anymore”. AI public relations are managed in such a way that, seeing Siri or Google Assistant propose an old joke we’ve heard before, or realising with ill-concealed glee that Google Translate does not capture the nuances of human language, we confine AI to a sphere that touches us, like Netflix’s occasionally spot-on recommendations, but which also reassures us, keeping any sense of risk, for those who feel it, at an intellectual level and at a distance.
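The kind of single-function trading logic described above can be illustrated in miniature. The sketch below is a toy, not a real trading system: it implements a moving-average crossover rule, one of the simplest automated strategies, with invented prices and window sizes chosen purely for the example.

```python
# Toy illustration (not a real trading system): a moving-average
# crossover rule, an example of a simple device optimised to perform
# a single function. Prices and window sizes are invented.

def moving_average(prices, window):
    """Average of the last `window` prices."""
    return sum(prices[-window:]) / window

def crossover_signal(prices, short=3, long=5):
    """Return 'buy' when the short-term average rises above the
    long-term one, 'sell' when it falls below, else 'hold'."""
    if len(prices) < long:
        return "hold"
    short_ma = moving_average(prices, short)
    long_ma = moving_average(prices, long)
    if short_ma > long_ma:
        return "buy"
    if short_ma < long_ma:
        return "sell"
    return "hold"

# A rising price series pushes the short average above the long one.
print(crossover_signal([100, 101, 102, 105, 110]))  # buy
```

Real trading systems are vastly more elaborate, but the principle is the same: a fixed rule, applied faster and more tirelessly than any human could.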
This propagation of non-intelligent bodies, which together already form a global network, is not a new phenomenon. It’s perhaps one of the oldest processes to have taken place on Earth, as Richard Dawkins argues in “The Selfish Gene”. Animals like the chicken (there are some 50 billion of them) are an example of this: genes that “want” to maximise copies of themselves. So are “memes”, which, before being associated only with funny viral images, were understood as “cultural units” that undergo processes and pressures similar to those of evolution and, if adapted to their environment, reproduce in large quantities.
This kind of phenomenon, combined with our difficulty in conceiving exponential growth (such as that of AI), has already reached enormous proportions, and this double achievement in AI “networking” and “public relations” is incredible.
What does the word “ethics” mean to you? Do you think it’s possible to generate robots with a universal ethic independent of the socio-cultural status of each individual country?
For a definition of the word “ethics”, I’m close to the common meaning of the term when I speak of “the science that studies the moral behaviour of human beings towards themselves and towards other selves.” There are, however, two thorny concepts: “science” and “other”.
Talking about science implies a fundamental prejudice, namely thinking that there are universal and comprehensible moral values, an attitude that is known as “moral realism” (discussions about the existence of an objective and scientifically comprehensible reality are extremely interesting, but more theoretical than practical). What is the basis for universal moral values that are not dogmatically assigned by a supernatural entity? This is the basis of ethics as a philosophical discipline and, personally, I view pleasure and pain (intellectual or physical) as foundations on which to build a common moral philosophy, in particular viewing good as maximising universal pleasure and evil as minimising it. Pleasure and pain are deeply subjective yet common to all sentient beings, human and non-human, regardless of their socio-cultural background.
This basis makes it possible in principle to consider some “screenshots” of the world as better than others and, while not embracing realism and moral universalism (in an empty universe or on a planet with a different evolution there is no reason to think that these principles apply), we can consider ourselves to be on a scientific quest for an intersubjective morality. This is a term borrowed from Yuval Noah Harari, indicating abstract concepts that concretely impact our lives not because of their objective reality, but because of the broad consensus regarding their existence and usefulness, as with money or corporations.
The problem in defining the “other” to whom the moral principles one defines should be applied is as old as the world and has clearly been greatly influenced by the culture that formed us. I like to think of “social progress” as a process of expansion of the groups we consider within our moral sphere, in which principles apply, starting with the family or clan and extending to all sentient beings. Consequentialist morality, which follows the principles mentioned above, aims to eliminate the arbitrariness of the boundary of the moral sphere, taking it to its biological and objective limit. However, this progress has not led to a consensus around these morals, because if we follow them to the letter, we risk violating principles that are themselves almost universally accepted (not killing, looking after one’s family, etc.).
One of this century’s major challenges will be to reconcile “eternally valid” principles with the “moral calculation” characteristic of consequentialism – we calculate the pleasure and pain caused by an action and, if the former is greater than the latter, we consider it good. In this way artificial intelligence can be given rules that align its objectives with ours. In my own small way, I’m studying various possibilities, but describing them in detail is beyond the scope of this answer.
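The “moral calculation” just described can be sketched in a few lines. This is a deliberate oversimplification: the numeric scores and the names of the beings affected are invented for the example, and reducing pleasure and pain to single numbers is exactly the kind of assumption that moral philosophers dispute.

```python
# A minimal sketch of the consequentialist "moral calculation":
# sum the pleasure and pain an action causes across everyone affected,
# and call the action good if pleasure outweighs pain. The numeric
# scores are an invented simplification.

def evaluate_action(effects):
    """`effects` maps each affected being to a (pleasure, pain) pair.
    Returns 'good' if total pleasure exceeds total pain, else 'bad'."""
    pleasure = sum(p for p, _ in effects.values())
    pain = sum(q for _, q in effects.values())
    return "good" if pleasure > pain else "bad"

print(evaluate_action({"alice": (5, 1), "bob": (2, 3)}))  # good: 7 > 4
```

Even this toy version shows where the tension lies: an action that inflicts great pain on one being can still come out “good” if it brings enough pleasure to others, which is precisely how the calculation can collide with “eternally valid” principles like not killing.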
In conclusion, it’s theoretically possible to endow AI with “moral rules” that transcend at least most of the local features of our ethics. In practice, as a result of the large number of private bodies carrying out research in this field, the lack of consensus among moral philosophers, the need to mediate between extremely untheoretical and diverse parties and interests, together with the current limits of technology (ironically), the most we can aspire to is to try to align the “interests” of AI with those of humans so that the process is, on the whole, beneficial to sentient beings on the planet, avoiding the most extreme scenarios.
Personally, when I think of a cyber-world where robots are equipped with ethics, I imagine a semi-human made of metal, which can feel, make decisions and engage in discourse. Why would humans need to create a robot with these features?
That’s how I also imagine it to be, at a deep level. This goes back to the first question and the ability of incredibly pervasive systems to be hidden from our ideas of what true artificial intelligence consists of.
However, the ethics of robotic entities comes into play much earlier. Take, for example, self-driving vehicles equipped with racially biased facial-recognition algorithms, a result of the homogeneous background of the people who created them. There are many other cases, reaching as far as the management (self-management?) of social networks that have to “decide” which accounts and content to block or restrict.
In itself, the need for humanoid, physical robots is reduced to a few special cases, such as personal care and development aid – in other words, the care economy and e-learning. The quest for a faithful reproduction of the human body only makes practical sense in that it makes us humans more willing to interact with beings we perceive as our fellow human beings. Another interesting aspect is our incredible fascination with the creation of such beings and, therefore, how much the narration of this process awakens the passions of many people in the field, a bit like what going to the moon has done for millions and millions of minds.
Actually, there would be another purpose for such beings, known by the term “transhumanism”. The android could be the body for a mapped brain of a human being and/or a way of combining human beings and technology at a deeper level to increase our capabilities, although clearly this undertaking is not a short-term one (there are those who already consider the relationship we have with our smartphones to be the first step towards “transhumanism”, echoing Dante who was the first to use the term).
In this hypothetical “industrial revolution 2.0”, the machine could definitively replace man. Is this just a Luddite fear or is there a real risk of the species being supplanted? At this point what differences would there be between humans and robots?
The risk of many humans being replaced by machines in their jobs is real and, at the same time, this is also a Luddite sentiment. Luddites, from their point of view, were right to protest against technology, even though, on a large scale, they had no future plan but were only reacting. We must learn from history and prepare ourselves, because this technological revolution has the potential to be different from the others.
This time, adapting will be more complicated, because we will need a leap in “quality” to stay in the labour market, and the spectre of being replaceable will always haunt us. Ludd’s time was incredibly stable compared to what might happen now. I said above that this is a Luddite sentiment, albeit a real and justified one, because this technological revolution will make humanity as a whole enormously more prosperous and, if we fight the right battles in the political and economic fields, will allow us to rid ourselves of the concept of productive work as an obligation and the pivotal focus of our lives. Personally, I see huge potential in freeing humanity from the constraint of employment as a capitalist necessity and guaranteeing everyone the concrete possibility of creative, collaborative and desired work as an end in itself, not as a hated means of survival or a way of accumulating money and objects.
The real extent of the change could be a tendency, over the decades, to create a division into two social classes, one productive and exploited and one unproductive, exploitative and enjoying widespread prosperity. For the first time, there would be the possibility of putting all human beings in the second class and only forcing beings who by nature cannot suffer into extreme working conditions.
This potential does not erase the fact that the transition will be painful and extremely delicate: the relationship between the liberal professional elites and the workers who will be the first to see their jobs jeopardised, without seeing the benefits of the revolution, will be prone to fracture, reflected in insularity, political polarisation and reactionary extremism.
The differences between humans and robots, at this time, would still be marked, at least as far as the relationship between the “Spartiate” humans and the specialised “Helot” robots is concerned, although clearly human-machine fusion to enhance human capabilities and welfare could also already be taking place at this time.
From a cultural point of view, are there any disciplines that progress thanks to robotics or vice versa? What role can your “creations” play in the work sector?
I think that art could experience a period of flowering, with the development of an unprecedented cooperation between human and artificial intelligence.
It’s a well-known fact that AI can now compose music, write poetry and even paint at the level of the great masters, thanks to deep learning techniques, which allow the machine to absorb a large amount of information (e.g. all of Bach’s fugues and preludes) and gradually self-correct, improving its algorithm and generating, for example, a composition “in the style of” Bach that is indistinguishable even to an expert ear.
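The process just described (absorb a corpus, then generate “in the style of” it) can be illustrated in miniature. The sketch below uses a first-order Markov chain over words, a far simpler relative of deep learning, but one that captures the same idea of learning statistical patterns from examples; the tiny corpus is an invented placeholder, nothing like the scale of real systems.

```python
import random

# Miniature illustration of "absorb a corpus, generate in its style".
# Real systems use deep learning; this sketch uses a first-order
# Markov chain over words. The corpus is an invented placeholder.

def learn_transitions(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    transitions = {}
    for current, nxt in zip(words, words[1:]):
        transitions.setdefault(current, []).append(nxt)
    return transitions

def generate(transitions, start, length=8, seed=0):
    """Walk the chain from `start`, echoing the corpus's patterns."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length - 1):
        followers = transitions.get(word)
        if not followers:
            break
        word = rng.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the night sings the sea and the sea answers the night"
model = learn_transitions(corpus)
print(generate(model, "the"))
```

The generated line recombines only patterns present in the corpus, which is why scale matters: feed in all of Bach’s fugues and a far deeper model, and the recombinations start to fool even an expert ear.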
So I envisage a fruitful union between human sensitivity and real human presence and recursive artificial intelligence, opening the door to new kinds of creativity.
In this case, I see the artistic revolution as more like the advent of photography, which did not replace painting or diminish the value of art, but provided an additional tool to liberate artists’ creativity, a powerful means of breaking free from previous preconceptions (in today’s case, a human arrogance that claims a monopoly on creativity and finds value in the “uniqueness” of one’s own faculties).
My poetic “creations” could definitely engage in dialogue with artificial intelligence, both literally and metaphorically. My poems dealing with major philosophical themes would benefit greatly from the possibility, for example, of accessing an information database already “in my brain” or, more indirectly, from the incredible advances in human knowledge on subjects on which I have so far only instinct and some research to guide me, such as consciousness. The lyricism could be amplified by the possibility of being able to dig deep into all the languages of the world to find the words that best express what I feel. The irony of a possible poetic back-and-forth in which a line written by me is followed by one written by the AI would broaden my horizons, not to mention of course the modesty and freshness of feeling like a child, playing with instruments even more intelligent than himself and finding joy in the act itself and not in the feeling of metaphysical intellectual superiority.
You’re so young and yet you have such clear ideas about the future, how do you feel about it in comparison with your peers? Do you think culture is within everyone’s reach? When do you consider a person cultured?
Sometimes I feel that the age of my body and the age of my mind don’t match, so I wonder if around the age of thirty-five I will start to falter.
All joking aside, I try to live by clear, easy-to-follow principles to simplify most decisions and devote myself to the things I have decided matter. So, when things seem to be going wrong, I can recognise that I am still moving, however little, towards the big goals I have set for myself, and that they remain fixed while everything else changes; otherwise I would organise myself in a chaotic, Heraclitean way: everything flows and changes continuously.
However, I can’t help but acknowledge that it’s thanks to the privileges I enjoy (stability, emotional and economic support) that I’ve been able to choose what I want and not what I have to.
This ties in closely with culture, which I consider, in principle, to be within everyone’s reach in this world full of possibilities – and, at the same time, mocking. Mocking because, for many, culture remains distant, kept at bay by other, more pressing needs.
I always try to remind myself and others of Maslow’s theory of basic needs, according to which only once certain needs (physiological, security, love/belonging and the esteem of oneself and others) are met can one freely engage in what he calls “self-actualisation”, the phase in which one is creative and seeks to improve oneself (through culture as well).
Unfortunately, for many people the four necessary building blocks are not met, although I hope this will change in the future.
As for who I consider a cultured person to be, I would say it’s someone who is Socratically aware of what he or she does not know and, for that very reason, maintains a child’s outlook on reality and loves to learn about every aspect of it – in other words, a person undergoing constant “self-actualisation”, who does not allow himself or herself to be defined by what he or she knows, but only by how much he or she still expects from the outside world.