The regulation of urban noise is a hotly debated issue, and, depending on the country and the field, a range of proposals have been put forward or simulated.
In Europe, noise exposure intersects primarily with the preservation of artistic heritage, which must be protected from damage caused by mechanical ground vibrations, whether these arise from natural geological motion or from city traffic and possible industrial and/or construction noise sources. To protect cultural heritage and build predictive dynamic mathematical models, different parts of a monument are connected to seismographs that record time series of vibrations per minute, second or fraction of a second; these series make it possible to hypothesise what would happen if the monument were suddenly placed under different acoustic conditions, for example if a road construction site opened right next to it.
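As a rough illustration of what such monitoring involves, the sketch below extracts a dominant vibration frequency and a peak amplitude from a simulated seismograph trace. The sampling rate, the synthetic trace and the safety threshold are illustrative assumptions, not values from any real monitoring campaign.

```python
# Minimal sketch: reading a seismograph's vibration time series and
# extracting a dominant frequency and peak amplitude. All values are
# illustrative, not drawn from a real heritage-monitoring campaign.
import numpy as np

fs = 100.0                                    # samples per second
t = np.arange(0, 60, 1 / fs)                  # one minute of monitoring
trace = 0.02 * np.sin(2 * np.pi * 12 * t)     # stand-in for traffic vibration
trace += 0.005 * np.random.randn(t.size)      # sensor noise

spectrum = np.abs(np.fft.rfft(trace))
freqs = np.fft.rfftfreq(trace.size, d=1 / fs)
dominant = freqs[spectrum[1:].argmax() + 1]   # skip the DC component

peak = np.abs(trace).max()
LIMIT = 0.05                                  # hypothetical safety threshold
print(f"dominant frequency: {dominant:.1f} Hz, peak amplitude: {peak:.3f}")
if peak > LIMIT:
    print("alert: vibration exceeds the monument's safety threshold")
```

A predictive model would then be fitted to such series to simulate how the structure would respond under new acoustic conditions, such as a nearby construction site.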
To limit vehicle noise, noise speed cameras were tested in Paris for several months in 2019 near crowded bars in entertainment districts, and others were installed at motorcyclist hotspots. Equipped with microphones that measure decibel levels every tenth of a second as a particularly noisy motorbike or car passes by, they identify the vehicle and issue a fine. In New York City, according to a team of scientists, the most viable route is to entrust the regulation of city noise to an artificial intelligence, capable of bringing the right degree of innovation and fascination to the control of an undervalued aspect of urban life, and of encouraging widespread monitoring of different city areas. SONYC (Sounds of New York City) is a five-year collaboration between the City of New York, New York University and Ohio State University that is training an AI to help reduce noise pollution. How is it trained? Citizens and scientists are asked to listen to ten-second sound clips collected by sensors around the city and identify what they hear. To assist in instructing the algorithm, users are given a range of sound options to choose from (e.g. small engine, barking dog, stationary hawker's motor…). This information, together with a spectrogram of the audio, then feeds an algorithm that learns to identify noise sources on its own. A better understanding of noise pollution could result in better tools to combat it.
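The training loop SONYC describes — crowd-labelled ten-second clips whose spectrograms teach a model to recognise noise sources — can be sketched in a few lines. The label set, the tiny network and the random batch below are illustrative assumptions, not the project's actual code.

```python
# Minimal sketch of supervised urban-sound tagging from labelled clips.
# Labels, architecture and data are illustrative, not SONYC's own code.
import torch
import torch.nn as nn
import torchaudio

LABELS = ["small-engine", "barking-dog", "jackhammer"]  # illustrative subset

mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000, n_mels=64)  # spectrogram front-end

class NoiseTagger(nn.Module):
    """Tiny CNN mapping a mel spectrogram to per-class scores."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes))

    def forward(self, wave):                # wave: (batch, samples)
        spec = mel(wave).unsqueeze(1)       # (batch, 1, mels, frames)
        return self.net(spec)               # multi-label logits

model = NoiseTagger(len(LABELS))
loss_fn = nn.BCEWithLogitsLoss()            # one clip can carry several tags
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a fake batch of 10-second clips at 16 kHz,
# standing in for sensor audio plus citizen-supplied labels.
waves = torch.randn(8, 16000 * 10)
targets = torch.randint(0, 2, (8, len(LABELS))).float()
opt.zero_grad()
loss = loss_fn(model(waves), targets)
loss.backward()
opt.step()
```

Multi-label loss is the natural choice here, since a single city clip often contains several overlapping sources at once.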
We can learn a great deal about the space around us by listening to it more, in a process of collective listening. From mountain tops to the depths of the ocean, biologists are increasingly deploying audio recorders to discreetly eavesdrop on the moans, cries, whistles and songs of whales, elephants, bats and especially birds, and to gain insight into how natural disasters affect the creatures that live there. Audio data is valuable because it contains a wealth of fundamental information: on the interactions between individuals and groups within the urban cacophony, on mating patterns, on the influence of noise and light pollution on avian species, and on measures to safeguard species in threatened habitats. With thousands of recordings produced every day, the data must then be read, interpreted and analysed by an AI.
Stefan Kahl, a machine learning expert at Cornell’s Center for Conservation Bioacoustics and at Chemnitz University of Technology in Germany, built BirdNET, one of the most popular bird sound recognition systems in use today. A team led by Connor Wood, an ecologist and postdoctoral researcher at Cornell University, is using BirdNET in the Sierra Nevada to study how birds’ habits have changed as the nightly hours of silence and darkness have shortened, owing to the light and noise pollution generated by human activities. The degree and quality of learning of each system depends on the amount of pre-labelled recordings available and, of course, on the relevant study area. BirdNET can currently identify around 3,000 species found in Europe and North America, but it does not work as well in Asia, where capturing bird calls requires scattering recorders and training other, locally developed machine learning algorithms.
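As a minimal sketch of how such a system is used in the field, the snippet below runs BirdNET over a recording via the third-party birdnetlib Python wrapper; the file path, coordinates and date are illustrative assumptions.

```python
# Identify bird species in a field recording with BirdNET, through the
# third-party birdnetlib wrapper. Path, location and date are illustrative.
from datetime import datetime

from birdnetlib import Recording
from birdnetlib.analyzer import Analyzer

analyzer = Analyzer()  # loads the pretrained BirdNET model
recording = Recording(
    analyzer,
    "sierra_nevada_dawn.wav",    # hypothetical field recording
    lat=36.57, lon=-118.29,      # location narrows the candidate species
    date=datetime(2021, 6, 1),   # so does the time of year
    min_conf=0.25,               # discard low-confidence detections
)
recording.analyze()

# Each detection carries the species name, a confidence score and the
# start/end time of the call within the clip.
for d in recording.detections:
    print(d["common_name"], d["confidence"], d["start_time"], d["end_time"])
```

Passing the location and date matters: it lets the model restrict its guesses to species plausibly present at that place and season.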
As our bodies and senses become sites of surveillance, new hybrid technologies offer new physical capacities and allow us to extend or re-appropriate our sensory limits. The vulnerability of our senses, subjected to persistent pollution that weakens them, is accompanied by progress in the biomedical industry, which seeks to compensate for the sensory points of human vulnerability. The relationship between technology, implantology and art is becoming ever more promiscuous, to the point where implant surgery and innovations in biotechnology come to resemble actual performances. It should be said that the question had already been raised by some body art interventions in the 1980s.
In London in 2019, a group of cochlear implant users were involved in the creation of a sound installation by artist and composer Tom Tlalim.
Tonotopia: Listening Through Cochlear Implants (2019) is a series of works developed by Tom Tlalim, in residence at London’s Victoria & Albert Museum, through dialogues with cochlear implant users, for an exhibition at the John Lyons Charity Community Gallery. Cochlear implant users became co-authors of the final audiovisual installation composed by Tlalim, a musician-artist whose research explores technologically extended listening and, hence, the relationship that sound and technology have with subjective identity.
Cochlear implants (CIs) allow profoundly deaf people to perceive sound digitally; because they are designed primarily for speech, the implants do not accurately convey musical pitch and dynamic sounds. This condition makes it difficult for CI users to hear music and other complex sounds, but it is artistically and acoustically interesting for the co-design of sound art for CI users. The Tonotopia project benefited from and was enriched by meetings and in-depth studies on the development of medical biotechnologies for acoustic purposes. Among these, it is worth mentioning the exchange with Dr Wenhui Song, Reader in Biomaterials at the Department of Nanotechnology at UCL, about the production of custom-made human organs with 3D printing, and the research into acoustic devices made of piezoelectric nanofibres for the realisation of an artificial cochlea. In the latter case, the tissue would be crossed by conductive wires capable of accumulating current, hopefully enough to stimulate the epithelial cells.
In recent years, artificial intelligence has sought to fill the points of vulnerability left by hearing impairments and noise-polluted spaces. For example, hearing aids have been developed with sensors and AI that can filter out external noise, translate up to 27 languages and also function as health monitoring devices (Livio AI); tune sounds to the external environment (SoundSense Learn); or isolate specific signals within a mass of noise (Google hearing aid). Similarly, smart otoscopes compatible with Apple and Android devices have been developed, i.e. tools and apps that let the user examine the auditory system through photos and videos (CellScope, TYM Otoscope, TytoCare). In a future that is perhaps not too distant, we can expect biocybernetic research to produce implant devices that are no longer cochlear but cerebral, transmitting directly to the auditory areas of the cerebral cortex.
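At their core, many such noise filters rest on a classic idea: estimate the spectrum of the background noise, then subtract it from the incoming signal. The sketch below is a deliberately simplified toy version of that idea, not any vendor’s actual algorithm; the tone, noise and parameters are all illustrative.

```python
# Toy spectral subtraction, the textbook idea behind many noise filters.
# An illustrative sketch, not any hearing aid's actual algorithm.
import numpy as np
from scipy.signal import stft, istft

def denoise(signal, noise_sample, fs=16000):
    """Subtract the average noise magnitude spectrum from the signal."""
    _, _, S = stft(signal, fs=fs, nperseg=512)
    _, _, N = stft(noise_sample, fs=fs, nperseg=512)
    noise_mag = np.abs(N).mean(axis=1, keepdims=True)   # noise profile
    clean_mag = np.maximum(np.abs(S) - noise_mag, 0.0)  # floor at zero
    S_clean = clean_mag * np.exp(1j * np.angle(S))      # keep original phase
    _, out = istft(S_clean, fs=fs, nperseg=512)
    return out

# Usage: a 1 kHz tone buried in white noise, plus a noise-only sample
# standing in for the "listen to the environment" calibration step.
fs = 16000
t = np.arange(fs) / fs
noisy = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.random.randn(fs)
noise_only = 0.5 * np.random.randn(fs)
recovered = denoise(noisy, noise_only, fs)
```

Commercial devices go far beyond this, adapting the noise estimate continuously and using learned models rather than a fixed profile, but the underlying separation of signal from noise spectrum is the same.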
Images: (cover, 1) Tom Tlalim, «Tonotopia», 2019, interview with Ed Rex, still from video; (2) Tom Tlalim, «Tonotopia», 2019, at Friday Late: Sonic Boom, February 2019; (3) Tom Tlalim, «Tonotopia», 2019, still from video (image: Tom Tlalim); (4) Tom Tlalim, «Tonotopia», 2019, interview with Seohye Lee, still from video
«Can technological devices correct the effects of noise pollution and the obsolescence of our senses?» is part of “Eternal Body. Human senses as a laboratory of power, between ecological crises and transhumanism”, curated by Elena Abbiatici. This research has been organised thanks to the support of the Italian Council (IX edition, 2020), an international programme promoting Italian art under the auspices of the Directorate-General for Contemporary Creativity of the Ministry for Cultural Heritage and Activities and for Tourism.
Previous articles:
E.G. Abbiatici, Interview with Abinadi Meza, Arshake, 20.01.2022
E.G. Abbiatici, Interview with Mario Matta, Arshake, 23.11.2021
E.G. Abbiatici, The political component of noise in the artistic practices of the last century, Pt. I (14.10.2021) and Pt. II (14.10.2021)
E.G. Abbiatici, Smell as a transcendent sense. The Role of the Olfactory System in a society focused on the Eternal Body, Arshake, 02.08.2021
E.G. Abbiatici, For an Olfactory Bio-politics, Pt. I and Pt. II
E.G. Abbiatici, Excellent (artificial) noses, Arshake, 04.05.2021
E.G. Abbiatici, Right Under Your Nose, Arshake, 03.03.2021
Partners of the project: Arshake, FIM, Filosofia in Movimento (Rome), Walkin Studios (Bangalore), Re:Humanism, Unità di ricerca Tecnoculture – Università Orientale (Naples), GAD Giudecca Art District (Venice), Arebyte (London), Sciami (Rome). “Eternal Body. Human senses as a laboratory of power, between ecological crises and transhumanism” is supported by the Italian Council (9th edition, 2020), a programme to promote Italian contemporary art in the world by the Directorate-General for Contemporary Creativity of the Italian Ministry of Culture.