Luca Pagan, a young artist who received a special mention in the third edition of the Re:Humanism Prize in 2023, brought his body-architecture project Retraining Bodies to the Romaeuropa Festival on 8 October 2023. With this work, his own body becomes a dynamic sound space: the artist has created an exoskeleton, a network of sensors connected by thin cables that covers his torso, arms and hands and intercepts his muscular impulses.
“Through the use of body architectures to detect body movement,” the artist explains in the statement of his work, “the machine learns to associate gestural expressiveness with the production of sound forms. Neural networks process the sensor data, playing a dual role: teaching the machine expressive body-sound associations and recognising behavioural states that influence the software and sound. When it encounters new gestures, distinct from previous training, the software produces unexpected sound results, making the machine ‘creative’ in itself. In this way, the machine influences the human being to perform new movements.”
Luca Pagan’s body contracts, flexes, and extends, generating watery or magmatic sounds that vary in intensity and depth according to the isometric force produced by the tension of agonist and antagonist muscles: when he extends, he ‘plays’ a waterfall; when he flexes, the sound implodes.
In this choral interview, Luca Pagan himself recounts the new geographies that take shape in the human-machine encounter. The artist’s emotional component also emerges strongly, expressed in his choice of sound-gesture associations.
In establishing the human-machine relationship of your body machines, is it possible to speak of empathy?
In Multi-Node Shell, more than empathy, the more correct term in my opinion is ‘symbiosis’. This is because there is feedback in the learning processes between me and the machine. Let me explain. The movement-sound association system is based on artificial intelligence algorithms that perform a dual function. In a first phase, it is I who teach the neural networks my subjective expressiveness in describing a sound through body movement. In this phase, the machine learns the associations between my movement and the production of a sound. The interesting thing is when the machine receives new gestural data, different from those previously taught: since the algorithm is based on a principle of value interpolation, the software returns unexpected sound results. These new sounds influence the perception of my movement. And it is here that the machine teaches me something. In this phase, learning is reversed. Through sound I am guided and stimulated to perform new movements that lead me to new perceptions of my body. That is why I think symbiosis is the most appropriate term, because it is a relationship where each one learns something from the other; it is a continuous exchange.
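The interpolation principle Pagan describes, taught gesture-sound pairs plus blended outputs for unseen gestures, can be sketched in a few lines. This is a purely illustrative toy, assuming a two-sensor gesture vector and a two-parameter sound space; the function name and all values are hypothetical and are not the actual Multi-Node Shell software.

```python
# Hypothetical sketch: taught gesture vectors (e.g. muscle-sensor readings)
# are paired with sound parameters; an unseen gesture is mapped by
# inverse-distance interpolation between its nearest trained neighbours.
import math

def interpolate_sound(gesture, training_pairs, k=2):
    """Blend the sound parameters of the k nearest trained gestures."""
    # Pick the k trained gestures closest (Euclidean) to the new one.
    scored = sorted(
        training_pairs,
        key=lambda pair: math.dist(gesture, pair[0]),
    )[:k]
    # Closer gestures get larger weights (epsilon avoids division by zero).
    weights = [1.0 / (math.dist(gesture, g) + 1e-9) for g, _ in scored]
    total = sum(weights)
    n_params = len(scored[0][1])
    # Weighted average of the neighbours' sound parameters.
    return [
        sum(w * sound[i] for w, (_, sound) in zip(weights, scored)) / total
        for i in range(n_params)
    ]

# Two taught associations (values invented for illustration):
training = [
    ([1.0, 0.0], [0.9, 0.2]),  # extension gesture -> bright, flowing sound
    ([0.0, 1.0], [0.1, 0.8]),  # flexion gesture  -> dark, imploding sound
]

# An unseen in-between gesture yields an interpolated, "unexpected" sound.
novel = interpolate_sound([0.5, 0.5], training)
```

A gesture the machine was taught reproduces its trained sound almost exactly, while a gesture between the taught ones returns a blend neither party composed; this is the loose sense in which the interview calls the machine ‘creative’.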
Do you also refer to mirror neurons in your research?
My research is also based on a scientific principle, namely that the perception of sound, on a cognitive level, is a phenomenon that originates from a mirror model between perception and action. What happens when we hear a sound is that, through our sensory system and the processing of our memory, we link the fixed sound energy to a sound object. This object is processed in our cognitive system, where it passes through two interconnected areas: outer space, where the sensory inputs contained in the environment are present, and inner space, where the perceptual systems of our movement (also called the kinaesthetic apparatus) are present. The interesting thing is that within these two spaces we process sound as movement trajectories. The internal space, as an output, gives rise to movement, which can be inhibited at certain levels. This is how our perception of sound is linked to action or simulated action. When I started to look into this I was really happy, because I realised that there is a strong link between movement and sound, not only apparent and expressive, but one that originates in our biological nature as human beings.
What has Artificial Intelligence taught you so far? Did the moment of exchange with the machine during the performance at the Romaeuropa Festival reveal any surprises to you?
Artificial intelligence is teaching me new ways of expressing myself and communicating through sound. Sound for me becomes a kind of language. Clearly, unlike ‘traditional’ languages whose purpose is to communicate concepts, sound tries instead to communicate emotions, feelings, moods, things that traditional language does not always deliver efficiently. This communicative and expressive possibility, using many types of sound, is leading me towards a new sensorial form. I can now express certain conditions no longer with verbal language but directly with music. In performances I always try to explore the efficacy of this system with the audience, who are invited to interact with me. I often notice that many things work: aggressive sounds bring tension among the audience, while sounds rich in melodic freedom act by building a dialogue.
At Romaeuropa, it was interesting to present the research and show the learning process behind it. Clearly, where I perform gestures that I have previously taught to the machine, the predictability is almost total. Instead, when I move away from the training, improvisation happens: there is almost always something new to discover, and this is the element that fascinates me the most and pushes me to continue developing this project.
Can you tell us more about the choice of sounds?
The choice of sounds is the only totally artistic and personal area of the project. I prefer not to apply a scientific criterion to the choice of sounds, otherwise the whole project would risk becoming a ‘scientific experiment’. I choose the sounds to associate with the gestures based on the sensations that sound gives me. Sometimes they are the result of what that sound produces within my memory. For example, the sounds of the waves of the sea, which I associate with very closed gestures of the body, take me back to my childhood in Venice, as if those sounds could protect and reassure me. Other sounds, on the other hand, refer to literary works or other references that have in some way influenced me. An example here are the sounds of crumbling concrete, a very aggressive type of sound that I associate with energetic movements: the origin of the association comes from Ballard’s novel High-Rise, in which he talks about cities as concrete prisons that imprison society. So the idea of using this sound for movements with high muscular tension becomes an expressive way for me to destroy these ‘concrete cages’ and seek freedom. Generally speaking, I try to construct my own vocabulary of sounds and significations.
How is Multi-Node Shell evolving in this new phase?
Initially, Multi-Node Shell was born as an interactive musical instrument, but after an initial period of intensive use it became a sort of prosthesis, an integral part of my body. It is an evolving project in many ways. On the technological side, I am now working on the production of custom circuit boards, with the circuit printed directly on the board, so that I can also reproduce it on a large scale. This is my second big personal technological leap, after learning how to program IoT systems.
In terms of interactivity, I am very satisfied now that I have updated and improved many of the mathematical functions in relation to sound aesthetics. Whenever you design new functions, you always have to compare them with the musical intentionality those functions can provide. Now the level of musical intentionality is almost perfect: perfect but not finished. This means it will not be the definitive version, but it is certainly an advanced stage of interactivity that allows me to consider it complete as a system.
Another key point is the social impact this technology can bring. I am working a lot in the health field on rehabilitation technologies: my wish is to use these devices also for motor rehabilitation and assistance for prosthesis users or patients with disabilities. The Divergences project is based on this. I would like to think that these tools can become helpful and supportive to people, and that they somehow challenge the concept of the ‘normalised body’ towards that of an ‘expanded body’.
images (all): Luca Pagan, «Multi-Node Shell», performative lecture, Romaeuropa Festival 2023
The interview was born within the research group of the School of New Technologies of Art at the Academy of Fine Arts of Rome as part of Backstage/Onstage (2023 edition: Trame), a project born from a partnership between the Academy of Fine Arts of Rome, Romaeuropa Festival and Arshake. The occasion was Pagan’s performance at the Romaeuropa Festival within Digitalive, curated by Federica Patti: Retraining Bodies, Romaeuropa Festival, Mattatoio, Rome, 08.10.2023. Retraining Bodies was a performative lecture which explored learning methods between the human body and artificial intelligence. With Retraining Bodies, Luca Pagan received the First REF Special Mention of the Re:Humanism award, curated by Daniela Cotimbo, dedicated to the relationship between art and AI.