Improtech 2019 | 2nd day
Time & Date
Lectures 9:00-13:30
National and Kapodistrian University of Athens, Amphitheater “Ioannis Drakopoulos”
Workshops 16:00-19:00
Onassis Stegi (The Galaxy Studio, Galaxia 2)
Concert 20:00-22:30
Onassis Stegi (Upper Stage)
Tickets
Lectures: Admission is free, on a first-come, first-served basis
Workshops: Admission is free, on a first-come, first-served basis | Limited capacity
Concert: Admission is free, on a first-come, first-served basis | Entrance tickets will be available 1 hour before the event
Introduction
Improvising machine systems, augmented pipe organs and voice instruments. On Improtech Day 2, we explore the relationship between algorithms and improvisation through talks, workshops and concerts.
Algorithms, AI and Improvisation
Welcome coffee
Keynote talk: “Perception, embodiment, and expressivity in human and computer improvisation”
George Tzanetakis (University of Victoria, CA, USA)
The majority of research in computer systems for composition and improvisation has been based on symbolic representations and follows a stylistic imitation paradigm. There are some inherent limitations to these approaches that are especially apparent in the context of improvisation. After a brief overview of existing approaches, Tzanetakis will argue that in order to create more effective improvisation systems it is critical to integrate perception, embodiment, and expressivity and also consider audio representations. This integration will be motivated using specific examples from human, computer, and human-computer improvisation scenarios. This exploration will help us better understand and appreciate the complexity of music improvisation and inspire future research that considers perception, embodiment, and expressivity.
“It Ain’t Over till It’s Over: Theory of Mind, Social Intelligence and Improvising Machines”
Ian Gold and Eric Lewis (McGill University, CA, USA)
Improvising machine systems have made remarkable advances in the appropriateness of their contributions to collective improvisations. It has, however, proven intractably difficult to create an improvising system that seems aware – to the same degree that experienced human improvisers are – of when a collective improvisation is coming to an end. Gold and Lewis explore the role that theory of mind plays in collective improvisation, and suggest that the machines’ failure to manifest theory of mind may lie behind this shortcoming. They suggest that a false model of collective improvisation, together with a false model of theory of mind, has occluded the importance of theory of mind to collective improvisation. They also survey a number of experiments that they hope to undertake to help establish the connections they hypothesize, and suggest what this may mean for the future of improvising machine system design, and for the role of improvisation in assorted therapeutic contexts.
“Improvising with augmented organ and singing instruments: gesture, sound, music (Cantor digitalis)”
Christophe d’Alessandro (Sorbonne University, France)
In this talk, d’Alessandro presents a reflection on his practice of improvisation with the augmented pipe organ and with voice instruments. In the augmented organ, the pipe sounds are captured, transformed and played back in real time in the same acoustic space as the direct pipe sounds. Augmented organ projects rely on three main aesthetic principles: microphony (proximal sound capture), fusion (of acoustic and electro-acoustic sounds) and instrumentality (no fixed support or external sound source). The augmented organ can be played solo or in duo (organist + live-electronics player). Solo performance is the more challenging, as the organist must control additional interfaces when his hands and feet are already busy with keyboard, pedalboard, expression pedals, combinations and registration.
Performative vocal synthesis allows for singing or speaking with the borrowed voice of another. The relationship of embodiment between the singer’s gestures and the vocal sound produced is broken: a voice is singing, with realism, expressivity and musicality, but it is not the musician’s own voice, and no vocal apparatus controls it. These instruments allow for voice deconstruction, voice imitation and voice extension. Specific vocal gestures are replaced by hand gestures on control interfaces such as graphic tablets, MPE keyboards and even the (augmented) Theremin. D’Alessandro will argue that the augmented organ (including extended techniques and new control interfaces) stands in continuity with the organ improvisation tradition rather than breaking from it; pipe organs are complex timbral synthesizers that have always accompanied the evolution of music and technology. Improvising with performative vocal synthesis is a more disturbing experience, because linguistic meaning, vocal intimacy and personality are mixed or even confused in vocal performances, at the (possibly interesting) risk of an “uncanny valley” effect.
Coffee Break
“Creativity, blending and improvisation: a case study on harmony”
Emilios Cambouropoulos (Aristotle University of Thessaloniki, Greece)
One of the most advanced modes of creativity involves making associations between different conceptual spaces and combining seemingly unrelated constituent elements into novel meaningful wholes. Composers and improvisers often actively employ combinational and fusion strategies in producing original music creations. In this presentation Cambouropoulos focuses on issues of harmonic representation and learning from data, giving special attention to the role of conceptual blending in melodic harmonization. Models are presented for statistical learning of harmonic concepts (chord types and transitions, cadences and voice-leading) from musical pieces drawn from diverse idioms (such as tonal, modal, jazz, octatonic, atonal and traditional harmonic idioms). Then, a computational account of concept invention via conceptual blending is described that yields original blended harmonic spaces. The CHAMELEON melodic harmonization assistant (in a new online version) produces novel harmonizations of given melodies in diverse musical idioms and also blends different harmonic spaces, giving rise to new “unexpected” outcomes. Many musical examples will illustrate the creative potential of the system. Such sophisticated blending methodologies can be incorporated into interactive improvisation systems, allowing the creation and exploration of novel musical spaces (bypassing mere imitation).
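The core mechanism described above, learning chord transitions from corpora and blending the resulting harmonic spaces, can be illustrated with a toy sketch. This is not CHAMELEON’s actual algorithm: the two mini-corpora and the simple averaging blend are invented for illustration.

```python
# Toy illustration of blended harmonic spaces: learn first-order
# chord-transition statistics from two idioms, blend them, and
# harmonize by sampling. Not CHAMELEON's algorithm; the corpora
# and the averaging blend are assumptions.
import random
from collections import defaultdict

def learn_transitions(corpus):
    """Count chord bigrams and normalize into transition probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    for piece in corpus:
        for a, b in zip(piece, piece[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def blend(p, q, w=0.5):
    """Blend two harmonic spaces by mixing their transition distributions."""
    out = {}
    for a in set(p) | set(q):
        merged = defaultdict(float)
        for b, pr in p.get(a, {}).items():
            merged[b] += w * pr
        for b, pr in q.get(a, {}).items():
            merged[b] += (1 - w) * pr
        total = sum(merged.values())
        out[a] = {b: pr / total for b, pr in merged.items()}
    return out

tonal = [["C", "F", "G", "C"], ["C", "Am", "F", "G", "C"]]  # hypothetical
modal = [["Dm", "C", "Dm"], ["Dm", "Gm", "C", "Dm"]]        # hypothetical

space = blend(learn_transitions(tonal), learn_transitions(modal))
chord, progression = "C", ["C"]
for _ in range(7):  # random walk through the blended space
    nxt = space.get(chord)
    if not nxt:
        break
    chord = random.choices(list(nxt), weights=nxt.values())[0]
    progression.append(chord)
print(progression)
```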
“Do the math: Musical creativity and improvisation under the spectrum of information science”
Maximos Kaliakatsos-Papakostas (Ionian University, Greece)
Musical scores include information that is mostly sufficient to reproduce a musical work or the performance of an improvisational agent; this information can be considered “low-level” if micro-timing, performance-specific or timbre-related information is disregarded. High-level structures emerge from patterns that combine low-level attributes: cadences, harmony and rhythm, among others, are higher-level constructions that build upon fine-grained combinations of low-level elements. Humans have the ability to implicitly identify such structures and readily employ them when listening, composing or improvising music, but to what extent can such human cognitive processes be algorithmically modelled? What would such modelling be practically good for? This lecture presents the problem of algorithmically modelling music cognition and creativity through methods of information science. Particular focus is placed on pattern extraction through generalization (or information reduction), which is directly related to statistical learning. An intuitive presentation of the relations between these concepts and deep learning is given and, finally, some thoughts are openly discussed with the audience about how the latest advances in Machine Learning can be of practical use to the composer, the improviser or the music enthusiast.
“Children’s improvisations using reflexive interaction technologies – Computational music analysis in the European Project MIROR”
Christina Anagnostopoulou, Aggeliki Triantafyllaki, Antonis Alexakis (University of Athens, Greece)
While improvisation has been an essential component of music throughout history, its manifestation in children’s music-making is a debated issue (Azzara, 2002). At the same time, research has revealed that improvisation is a significant aspect of children’s musical development and an important venue of creativity (Webster, 2002; Ashley, 2009). When children are improvising, particularly at an early stage of development, they usually try to express themselves without following any particular rules. Creativity can then emerge naturally (Koutsoupidou & Hargreaves, 2009). New technologies can support this natural development and help children develop their own musical style.
The European Project MIROR (Musical Interaction Relying on Reflexion, mirorproject.eu) was based on a novel spiral design approach involving coupled interactions between computational and psycho-pedagogical issues. It introduced an AI-based improvisation system, the MIROR-IMPRO (Pachet et al. 2011), based on the original Continuator (Pachet 2003). The project integrated various psychological experiments, aiming to test cognitive hypotheses concerning the mirroring behaviour and the learning efficacy of the system, and validation studies aimed at developing the software in concrete educational settings. The philosophy behind the project was to promote the reflexive interactive paradigm not only in the field of music learning but, more generally, as a paradigm for establishing a synergy between learning and cognition in the context of child/machine interaction (Addessi et al. 2015).
This talk explores the thesis that computational analysis of the musical improvisations children produce with this technology, searching for regularities and significant patterns, can be a valuable addition, one that blends technology even more constructively into children’s musical routines. On the one hand, such analysis offers a tool to assist the teacher in providing musical direction; on the other, it provides the learner with a means of independently advancing his/her musical capabilities through playful interaction. To achieve this, specialized data-mining techniques were employed and a set of lexicon-based investigation tools was developed to analyse the musical corpus produced by the children’s improvisations. The speakers present part of their results from the analysis of children’s improvisations and discuss the general advancements that the MIROR Project offered in the area of children’s improvisation.
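The kind of pattern analysis described above can be evoked with a minimal sketch: mining the melodic n-grams that recur across a corpus of improvisations. The pitch sequences below are invented, and the MIROR project’s actual analysis tools were considerably richer than this count.

```python
# Toy sketch of pattern extraction from improvisation data: find the
# melodic n-grams that recur across a corpus of children's improvisations.
# The MIDI pitch sequences are invented; this is not the MIROR tools.
from collections import Counter

def recurring_patterns(sequences, n=3, min_count=2):
    """Count every pitch n-gram and keep those occurring min_count+ times."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - n + 1):
            counts[tuple(seq[i:i + n])] += 1
    return {p: c for p, c in counts.items() if c >= min_count}

improvisations = [                 # hypothetical MIDI pitch sequences
    [60, 62, 64, 62, 60, 62, 64],
    [64, 62, 60, 62, 64, 65, 64],
    [60, 62, 64, 65, 64, 62, 60],
]
for pattern, count in sorted(recurring_patterns(improvisations).items(),
                             key=lambda kv: -kv[1]):
    print(pattern, "x", count)
```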
Body and Drama
“Kinaesonics: crafting and being trans-dimensional (Bodycoder system)”
Mark Bokowiec, Julie Wilson-Bokowiec (University of Huddersfield, UK)
In this workshop/demo Bokowiec and Wilson-Bokowiec will unpack their particular approach to Kinaesonic composition and the multi-dimensional nature of their brand of live performance with the Bodycoder System. They will explore the critical intersections where liveness meets the programmed and the automated, consider the aesthetic as well as the socio-political implications, and discuss the role and qualities of improvisation employed in the new work they will present at the Onassis Stegi.
“Interactive Drama Tools”
George Petras (National School of Dance, Greece), Panagiotis E. Tsagkarakis (freelance sound engineer), Anastasia Georgaki (University of Athens, Greece)
The use of novel interactive technologies in the performative arts provides dynamic tools for improvisation and for the expressiveness of the actor/musician during a performance.
The speakers’ research focuses on the development of interactive tools for use in the context of ancient Greek drama and prosodic recitation. Firstly, the speakers will present theoretical and practical aspects of the use of voice in drama performance: how are individual elements of ancient Greek prosody, as well as transposed ancient music theories (such as the curve of “logodes melos” by Aristoxenus), used in the interactive process? Secondly, they will present the technical and practical aspects of the interactive platform, covering areas such as sensors, data extraction, mapping and sound design. The interactive tools were built to develop the improvisational ability of the performer in two ways: sonic improvisation and structural improvisation. Sonic improvisation is achieved by focusing on voice and sound processing: the performer manipulates the sonic outcome in order to enhance the prosodic interaction and the emotional meaning of the text. Structural improvisation allows the performer to move freely between scenes, since he or she controls the cues with gestures and key positions in space. The workshop includes a brief performance presenting the interactive platform in action, aiming to show how the theoretical, technical and performative aspects merge.
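The structural-improvisation idea lends itself to a minimal sketch: recognized gestures act as cues that move the performance between scenes, so the performer, not a fixed timeline, decides the order. The gesture names and the scene graph are hypothetical, not the speakers’ actual platform.

```python
# Minimal sketch of gesture-driven structural improvisation. The
# gesture names and scene graph below are invented for illustration.

SCENES = {
    "prologue": {"raise_arm": "parodos", "kneel": "exodus"},
    "parodos":  {"turn": "stasimon", "kneel": "exodus"},
    "stasimon": {"raise_arm": "parodos", "turn": "exodus"},
    "exodus":   {},
}

def run_performance(gesture_stream, scene="prologue"):
    """Walk the scene graph, changing scene whenever a gesture matches a cue."""
    print("scene:", scene)
    for gesture in gesture_stream:
        nxt = SCENES[scene].get(gesture)
        if nxt:                      # unmapped gestures leave the scene running
            scene = nxt
            print(f"cue {gesture!r} -> scene: {scene}")
    return scene

# e.g. a stream of recognized gestures coming from the sensor pipeline
run_performance(["raise_arm", "wave", "turn", "raise_arm", "kneel", "turn"])
```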
“Collective performance and improvisation using CoMo-Elements”
Michelle Agnes Magalhaes (composer, Ircam, France), Frédéric Bevilacqua (researcher, Ircam, France)
Using the web application CoMo-Elements, this workshop proposes an approach to collective performance with mobile phones, used as motion sensors and interactive sound systems. Each mobile can be “played” using gestures. The application allows users to design their own gestures and associate them with specific sounds. Additionally, all mobiles can be synchronized and remotely controlled, allowing musical structures to be either composed and performed or improvised collectively. The speakers will present the CoMo-Elements system, along with examples of its possible uses, allowing participants to explore various possibilities. Musical material and small musical pieces by Michelle Agnes Magalhaes will be offered to the participants for collective experimentation and discussion.
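The gesture-to-sound association at the heart of the workshop can be evoked with a toy sketch: a nearest-neighbour match of incoming accelerometer frames against user-recorded gesture templates. The features and the data are illustrative assumptions, not CoMo-Elements’ actual recognizer or web API.

```python
# Toy sketch of "design a gesture, associate it with a sound":
# nearest-neighbour matching of accelerometer recordings against
# user-recorded templates. Features and data are assumptions.
import math

def features(samples):
    """Summarize a gesture recording (list of (x, y, z)) by per-axis means."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(3))

def classify(recording, templates):
    """Return the sound whose gesture template is closest in feature space."""
    f = features(recording)
    return min(templates, key=lambda name: math.dist(f, templates[name]))

# Hypothetical user-designed gestures, each associated with a sound.
templates = {
    "shake.wav": features([(2.0, 0.1, 0.0), (-1.8, 0.2, 0.1)]),
    "tilt.wav":  features([(0.1, 1.0, 0.2), (0.0, 1.2, 0.1)]),
}
incoming = [(0.0, 1.1, 0.1), (0.1, 0.9, 0.2)]   # live sensor frames
print("play:", classify(incoming, templates))    # -> play: tilt.wav
```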
Bernard Lubat, Sylvain Luc, Marc Chemillier, Gérard Assayag
“Lubax Lux”
Piano, guitar, Omax and DJazz computer systems
Now is the time for Lubax-style improvisation between human and machine: a duo created by Bernard Lubat and the Ircam/EHESS researchers Gérard Assayag and Marc Chemillier, extended for this occasion with guest musician Sylvain Luc, one of the greatest guitar players of his generation. Computer systems (OMax, DJazz) capable of “listening” carefully capture the playing of the improvising musicians in real time, in order to produce new improvisations combining imitations and transformations. In direct contact with the human performers, and starting from nothing, the computer learns how to play, making steady progress toward musical expertise and tirelessly increasing the amount of structured information stored in its database, while human operators remain in control of the global strategies (when to choose silence, response or heterophony, scarcity or complexity, etc.).
Bernard Lubat (piano, voices), Sylvain Luc (guitar), Marc Chemillier (DJazz computer systems), Gérard Assayag (OMax computer system)
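The learn-and-recombine loop described above can be evoked with a toy sketch: a machine that starts empty, stores everything it “hears”, and improvises by following contexts it has already encountered. OMax itself navigates a factor oracle; this fixed-order context model, with invented note names, only captures the spirit of it.

```python
# Toy sketch of a learn-and-recombine improviser: grow a database of
# heard contexts in real time, then improvise by recombination.
# Not the OMax algorithm (which uses a factor oracle).
import random
from collections import defaultdict

class Improviser:
    def __init__(self, order=2):
        self.order = order
        self.memory = defaultdict(list)   # context -> observed continuations
        self.heard = []

    def listen(self, note):
        """Learn in real time: grow the database as the musician plays."""
        self.heard.append(note)
        if len(self.heard) > self.order:
            ctx = tuple(self.heard[-self.order - 1:-1])
            self.memory[ctx].append(note)

    def improvise(self, length=8):
        """Recombine: walk known contexts, jumping anywhere if stuck."""
        out = list(random.choice(list(self.memory.keys())))
        while len(out) < length:
            ctx = tuple(out[-self.order:])
            choices = self.memory.get(ctx)
            if not choices:                      # unseen context: jump
                out.extend(random.choice(list(self.memory.keys())))
            else:
                out.append(random.choice(choices))
        return out[:length]

m = Improviser()
for n in ["C", "E", "G", "E", "C", "E", "G", "B", "G", "E", "C"]:
    m.listen(n)                                  # the human "plays"
print(m.improvise())                             # the machine answers
```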
Lara Morciano
“Philiris”
For piano, motion capture, transducers and real-time electronics
In “Philiris” the research is based on the correspondence between gesture and the characteristics of the sound signal, as well as on the synchronization and interaction between the hand movements of the performer and various sound processes. The work is composed of different movements, each conceived as an individual sound universe. The name refers to a family of butterflies, which inspires some parts (such as the light movements of wings in the air, produced by the pianist’s fingers), as well as the concept of metamorphosis (from larva to butterfly), with different processes of transformation evoking the piece’s changes of color and atmosphere.
The prevailing idea of the ambiguity of the double explores the cohabitation of a tempered world with that of a completely detuned virtual resonant instrument; this virtual instrument is treated and diffused by transducers, which project the sound synthesis, manipulated through motion capture, directly into the piano. Chords played in the air are triggered by the motion tracking of both hands; the acoustic piano chords are captured for real-time synthesis that prolongs the tempered sounds of the instrument while creating beats between the neighboring microtonal frequencies produced by the virtual piano. The improvised sections, based on timbral exploration inside the soundboard, produce unusual noisy sounds, which are associated with concatenative sound synthesis driven by sensors, setting up a situation in which the instrument and the electronics are interdependent and interactive.
Lara Morciano (composer, piano), José-Miguel Fernandez (programming, live electronics)
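One mechanism described above, chords “played in the air” when motion tracking sees a hand fall through a height threshold, can be sketched as follows; the positions, threshold and chords are invented stand-ins for the piece’s actual motion-capture and electronics patch.

```python
# Sketch of air-triggered chords: fire a chord when a tracked hand
# falls through a height threshold. All values are invented.

THRESHOLD = 0.5                        # hand height, normalized 0..1
CHORDS = [[60, 64, 67], [62, 65, 69]]  # MIDI chords to cycle through

def track(hand_heights):
    """Fire a chord on each downward crossing of the threshold."""
    armed, idx = True, 0
    for h in hand_heights:
        if armed and h < THRESHOLD:    # downward crossing: trigger
            print("trigger chord:", CHORDS[idx % len(CHORDS)])
            idx += 1
            armed = False
        elif h >= THRESHOLD:           # hand raised again: re-arm
            armed = True

# e.g. a stream of hand heights from the motion-capture system
track([0.9, 0.7, 0.4, 0.3, 0.8, 0.6, 0.2, 0.9, 0.4])
```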
George Lewis, Evan Parker, Mari Kimura, Stylianos Dimou, Voyager
“Voyager: Interactive Quintet” (2007/2019)
Trombone, saxophones, violin, live electronics, Voyager interactive pianist
The performers are engaged in live, completely improvised dialogue with Voyager, a computer-driven, interactive “virtual improviser” software designed by Lewis, who has been working with musical computers, or “creative machines”, since 1979. The computer performs on a software-controllable acoustic piano, the Yamaha Disklavier™. Lewis’s program uses its analysis of the musicians’ performances to guide its generation of complex responses, while also establishing its own independent musical behavior. The system does not need real-time human input to generate music; it can also take initiatives on its own. In this performance, the improvised musical encounter is portrayed as a sonic negotiation among musicians, some of whom are people and others not.
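The behavior described here, responses guided by analysis of the musicians’ playing alongside independent initiatives, can be evoked by a toy sketch; the transposition “analysis” and the independence parameter are invented for illustration and are not Voyager’s actual design.

```python
# Toy sketch of an agent that mostly responds to analysed input but can
# also act on its own initiative. Purely illustrative, not Voyager.
import random

def respond(heard):
    """Derive a response from analysis of what was heard (here: transpose)."""
    shift = random.choice([-5, -3, 3, 5, 7])
    return [p + shift for p in heard]

def initiate():
    """Generate independent material, needing no human input at all."""
    return [random.randint(48, 84) for _ in range(4)]

def next_phrase(heard=None, independence=0.3):
    """Respond to input, or sometimes act independently of it."""
    if heard is None or random.random() < independence:
        return initiate()
    return respond(heard)

print(next_phrase([60, 64, 67, 72]))  # a human phrase, in MIDI pitches
print(next_phrase())                  # no input: the machine initiates
```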
The philosopher Arnold I. Davidson has written most cogently on the issues involved: “Listening to freely improvised music cannot be a passive experience. Neither we nor the musicians know what to expect, and our programmed habits of listening may interfere with our ability to listen, and so reveal themselves to be prejudices that narrow our space of freedom. Responsibility begins in response, and to respond to this music we have to learn to hear its intelligibility, to participate in its conversation. Such participation requires continuous awareness, attention, vigilance, and practices of self-transformation that are necessary to the creation of new forms of freedom. The ethics of self-formation and the politics of social interaction come together in these improvisations. When one of the performers is a creative machine, as is the case tonight, our habits and prejudices are still further challenged. Don’t we know how a machine must think, must sound? What we say about human/computer interaction is all too frequently dictated by an already determinate picture of the boundaries of the possible and the impossible. If we detach ourselves from this picture so that we can begin to listen, perhaps we will come to experience these creative machines as posing and provoking the challenges of self-transformation and social meaning from yet another perspective. And then we will be in a position to realize that multiplication of perspectives means multiplication of possibilities.”
George Lewis (trombone, laptop), Evan Parker (saxophones), Mari Kimura (violin/MUGIC™ sensor), Stylianos Dimou (live electronics), Voyager (interactive computer pianist)
Mari Kimura, Pierre Couprie, Gyorgy Kurtag, Hugues Genevois
“A Close Encounter of the Seventh Kind”
Violin, live electronics, HandSonic, Koncertdoboz
In musicology, a “close encounter” is an event in which a person witnesses an unidentified musical object. This terminology and the system of classification behind it were first suggested by cyber-musicologist Leonid S. Gonionski in his 1958 book “The CE Experience: A Scientific Inquiry”. Categories beyond Professor Gonionski’s original three have been added by other researchers and successors. It would be long and tedious to detail all these categories. Let’s just say that a “close encounter of the seventh kind” occurs when the audience is invited to attend the creation of a musical hybridization between real humans, by scientific methods and technological means.
Mari Kimura (violin/MUGIC™ sensor), Pierre Couprie (live electronics), György Kurtág (HandSonic), Hugues Genevois (Koncertdoboz)
Rémi Fox, Jérôme Nika
“C’est pour ça”
Saxophones and DYCI2 computer systems
In the course of a process of “digital lutherie”, artistic collaborations are inseparable from the technological aspects. Thus, Rémi Fox has been involved since the very beginning in the creation of the DYCI2 generative agents. The name of the duo, “C’est pour ça” [That’s why], is a nod to its early days, when the performances were intended to be didactic rather than purely creative, serving as tests or demonstrations. This digital lutherie has now reached a stage of maturity sufficient to serve as the basis for purely musical research, combining “meta-composition” and free improvisation: composing the “musical memories” of the improvising agents, the structures underlying their musical discourses, and their listening and reaction mechanisms, and allowing the form to be generated by pure interaction. “C’est pour ça” develops an electronic aesthetic while seeking to preserve the organic character of the summoned “memories” (traditional choirs, spoken voice, saxophone playing modes, and so on).
Rémi Fox (saxophones, electronics), Jérôme Nika (DYCI2 computer system)