Artaud Forum 2: Konnecting Gestures
Research Archive of the event (March 30, 31, April 1, 2012) // audio-visual documents
And what would a “futurist manifesto” sound like today? What is the future of attention (as Ian Winters asks), and who will be attending?
What (pre)occupies us? Interfaces and interconnections exist in manifold ways in contemporary creative practice and in the common modalities of communication in today’s digitalized and networked universe. Collaborative creation of knowledge and action (if we think of the recent uprisings and collective movements) has moved to the foreground of many discourses that politicize the engagement we choose, and the gestures we make, to produce new models of participation. These models are not necessarily based only on our new technologies, networks and protocols, but on reciprocal activities, which in our workshop we explore, to some extent, through sounding, through performing and listening together, through debate, and through sensory immersion. Antonin Artaud once proposed to “make metaphysics” out of our spoken language and our gestures (“the enthusiastic expression of the body, of kinetic, gestural behavior”), to “make language express what it usually does not express” (cf. Allen Weiss in “Phantasmic Radio” and Stephen Barber in “The Screaming Body”). Julian Henriques, in his new book “Sonic Bodies,” claims that sound systems and performance techniques are ways of “knowing,” since sound systems and techniques of sounding (fine-tuning, balancing, mixing, sampling, processing, playing, voicing, moving, articulating, etc.) operate not only at auditory but also at corporeal and socio-cultural frequencies. Every artistic gesture is, to a certain extent, a political act, and thus the narrow context of a movement or electronics workshop expands if we listen to hybridity, multiplicity and self-organization in collaboration, and approach questions about our “interconnecting gestures” with an honest effort for intellectual integrity and transdisciplinary practices, sharing aesthetic and scientific knowledge.
Ian Winters in Kinect Workshop. (c) Phil Maguire
An exploration into the choreography of attention & gestural response.
Description of the Artaud Forum workshop:
The widespread use and integration of sensor-based systems now permeates many facets of our physical and social/cultural lives. Leaving aside political questions for the moment, this now-ubiquitous ‘fabric’ offers us as artists many opportunities to explore interaction(s) between disparate parties (such as data sources, performers (corporate and corporeal), biotics, and so on) in a far more complex fashion than was ever possible before. And yet, the irony of this, to me, is that no number of sensors, data points, or interactions can replace the now rare quality of attention.
Much of my recent research and many of my projects have gone into creating simple, affordable acoustic and visual tools to choreograph ‘attention’, rather than using sensors or movement as simple ‘switches’. As a compositional framework for the 3-hour workshop, we will create an open-ended score for interaction, then work through a simple implementation of that score involving a depth-sensitive visual sensor and some type of acoustic sensor.
Using Isadora as an interface, we’ll look at:
- Examples of using a Kinect and other depth-map cameras to track qualities of movement (such as the difference between restless and calm) versus tracking specific gestures (e.g. left hand moves to head)
- Use of visual and acoustic triggers
- Use of scaling, data arrays and simple conditional logic structures to generate more complex responses
- Mapping that information to media / other interactions (sonic, visual, accumulative, etc.).
- If time permits, some discussion of the integration of mass-market consumer sensors (such as the accelerometer and location data tracked by smartphones and accessible via tools like TUIO pad).
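Isadora patches are built graphically rather than in code, but the distinction the workshop draws, tracking a quality of movement (restless vs. calm) instead of using motion as an on/off switch, can be sketched in a few lines of Python. This is an illustrative sketch only: the class name, threshold and smoothing values are invented for the example and are not taken from the workshop materials.

```python
import numpy as np

class MovementQuality:
    """Classify overall movement quality (calm vs. restless) from
    successive depth frames, instead of treating motion as a switch."""

    def __init__(self, threshold=5.0, smoothing=0.9):
        self.threshold = threshold   # energy level separating calm from restless
        self.smoothing = smoothing   # exponential moving-average factor
        self.energy = 0.0            # smoothed motion energy
        self.prev = None             # previous depth frame

    def update(self, depth_frame):
        """Feed one depth frame (a 2-D array of depth values);
        returns 'calm' or 'restless'."""
        frame = np.asarray(depth_frame, dtype=float)
        if self.prev is not None:
            # mean absolute per-pixel change = crude motion energy
            delta = np.abs(frame - self.prev).mean()
            self.energy = (self.smoothing * self.energy
                           + (1 - self.smoothing) * delta)
        self.prev = frame
        return "restless" if self.energy > self.threshold else "calm"
```

The smoothing step is what makes this a quality rather than a trigger: a single jerk decays away, while sustained agitation keeps the energy above threshold.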
Some descriptions of works and ideas presented at the Artaud Forum 2
Opening Keynote: Interactive Music: Social Considerations of Gesture & Vocality
In this presentation, I would like to critically reflect on what interactive music is about and how it has developed in recent years. Under interactive music, I include what is often called Interactive (Computer) Music, Interactive Gesture Music (IGM) or ‘gesture-controlled’ music, ‘multimodal’ (interaction) environments (MEs), and live electronics. Central are two small case studies of interactive music performances that share some features in gestural control and composition, but are also very different in many ways: Swiss interactive vocal performer, singer and sound artist Franziska Baumann’s Electric Renaissance, and German singer and composer Alex Nowitz’s Studies for Selfportrait. Both are (or were) artists in residence at STEIM, an independent live electronic music centre for the performing arts based in Amsterdam. Both are classically schooled singers (Baumann at Winterthur Conservatory and Nowitz in Berlin, Munich and at the University of Potsdam). Both use live electronics based particularly on live-controlled audio feedback, segmented into sequences, which allows them to fully explore the sonorous qualities, particularly the gestures, in their own voices. Baumann uses a Sensorglove (or cyberglove), an interface custom-designed at STEIM, as well as a sound dress with built-in speakers to manipulate the interactive system. Nowitz uses the Stimmflieger and the LiSa live sampling software (also produced by STEIM), which he manipulates by means of two Wiimotes (the low-cost game controller of the Nintendo Wii). (Let’s have a quick look at both performances first so that we know what we are talking about.) Rather than discussing these clips at great length, I would rather focus on the theoretical underpinnings of my own considerations regarding these performances, which leaves more room for discussion.
The second clip does bring up an interesting development, one which developers envisioned decades ago but which only slowly finds resonance in the commercial industries with corporate interests: the accessibility of such interactive media, interfaces and tools for public use and musical training. This has particular social consequences for how we think in terms of bodily and musical gestures and how we appreciate or depreciate such interactive music performances. For this presentation, I would therefore like to focus critically on related issues of interactive music technology which have an inherent social interest, particularly:
- the nature of gestures (instrumentalisation & overdetermination);
- how the interplay of sonic and bodily gestures can induce a sense of vocal bodies.
In previous research, I discussed the oral and literate modes of listening that these performances may elicit in the listener’s search for meaning. I also studied the difference in embodied experiences (in particular the contrast between the Cartesian eye and the embodied ear) from the perspective of performers and spectators in the related art form of interactive dance. But I thereby ignored the specific nature of visible and audible gestures in performance and the possibility of a combination of first- and third-person perspectives by the audience (as suggested by Marc Leman of the IPEM Institute in Ghent). I will focus on these issues in more detail later. Although all of these issues are interconnected, they do not immediately imply a coherent argument, for which I apologize. I do invite you to think along with me, and I hope that the issues I raise here will stimulate a critical debate on where interactive music stands today and which directions it could take further.
To download the full text of this keynote presentation, click here.
Description of the project: "DarkStar"
DarkStar is an installation which explores relationships between the permanent and the transient, and is a combination of public monument, interactive artwork and virtual gallery. The piece was developed by Simon Katan in collaboration with Martin Bricelj whilst Simon was artist in residence at Mota in Ljubljana, Slovenia, and premiered at Sonica Festival 2011. It takes the physical form of a large sphere, raised above the heads of its viewers, with constellations of stars gently floating across its surface. Its stars glow in sequence to form a band of light that rotates around the sphere once a minute, with the width of the band reflecting the current lunar phase, so that the DarkStar functions as an esoteric, cosmically concerned public clock.
However, onlookers in the vicinity of the DarkStar are also able to interact with it by pointing towards an individual star. Initially they will notice that the star that they pointed at is no longer moving with the others. If they hold their pointing pose for longer, the star will begin to grow, revealing its individual character via real time generative sound and graphics. Gradually the star’s sound and graphic will come to dominate the DarkStar, though when there are multiple users, the stars will compete for dominance. As soon as a user drops their pointing pose the star will return to its normal state.
Currently the DarkStar hosts five types of stars with contrasting sonic and visual qualities. However, it is the artist’s intention to add more stars to this collection with each showing of the DarkStar, the ultimate goal being for every star to have a unique sound and graphic. With hundreds of stars and the possibility of their inter-combination, onlookers will find a seemingly limitless world to be explored within the DarkStar. Furthermore, the existence of thriving virtual communities of artists around the popular open-source libraries upon which DarkStar’s software is based offers the potential of a rolling ‘open call for stars’, which would not only greatly increase the multitude and diversity of stars but also plunge the DarkStar into a state of ongoing flux in which new stars are continually appearing.
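The hold-to-grow and competition behaviour described above can be sketched as a simple simulation. This is a hypothetical model for illustration only: the actual installation is built with OpenFrameworks, OpenNI and SuperCollider, and the growth, decay and normalisation rules here are invented, not taken from the work.

```python
class Star:
    """One star on the sphere: size 0.0 is its normal state,
    size 1.0 means it fully dominates the DarkStar."""
    def __init__(self, name):
        self.name = name
        self.size = 0.0
        self.held = False   # is someone currently pointing at it?

def step(stars, dt, grow=0.2, decay=0.8):
    """Advance the simulation by dt seconds: stars being pointed at
    grow, released stars fall back toward their normal state, and
    when several stars are active they compete for dominance."""
    for s in stars:
        if s.held:
            s.size = min(1.0, s.size + grow * dt)
        else:
            s.size = max(0.0, s.size - decay * dt)
    total = sum(s.size for s in stars)
    if total > 1.0:
        # multiple users: stars share the available dominance
        for s in stars:
            s.size /= total
```

With one user holding a pointing pose, their star grows toward full dominance; when a second user joins, the normalisation step makes the stars compete; dropping the pose lets a star decay back to normal.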
The installation takes the physical form of a translucent hemisphere 1.4 m in diameter, raised 1 m above the ground, with computer visuals back-projected onto the surface. The hemisphere is made of a single piece of semi-opaque black plexiglass suspended by nylon strings attached to the walls and ceiling. An ideal ceiling height is around 2.8 m, though higher ceilings are also possible (very high ceilings will require extra rigging). The total required floor space is 5 m x 7.5 m.
The interaction happens via a Kinect sensor camera, mounted directly above the hemisphere (normally flown from a ceiling mount). The interaction area is marked on the floor by a triangular rubber mat which extends approximately 5 meters in front of the installation and is 3 meters wide at its furthest point. The projector is a short-throw projector and needs to be mounted 2 meters away at a height of 1.5 meters. For effective projection, the space around the installation should be as dark as possible, though some ambient lighting is fine.
The installation is powered by two computers, one for the interaction and one as a media server. The software is custom designed using OpenFrameworks, OpenNI and SuperCollider. The computers are stored underneath the hemisphere in black stage boxes. The sound is provided by a 900-watt stereo PA with sub-woofer. The speakers are mounted on stands on either side of the installation and the sub-woofer is stored discreetly underneath the sphere.
Martin Bricelj Baraga – Producer, Concept Design
Simon Katan – Sound, Graphics, Interaction
Carolien Teunisse - Texture Mapping, Project Management
Interactive installation of "DarkStar." (c) Phil Maguire
Claudia Robles performed Alvin Lucier's "Music for Solo Performer" at the Folkwang Hochschule, Essen, Germany. If there had been enough technical preparation time for the set-up, Claudia would have performed the piece at the Artaud Forum 2 on April 1.
How is it possible to re/create Lucier's brain-wave performance, first staged in 1965? It was not scored, was it? How could it be scored? (Birringer asking Robles in an email exchange in 2011).
It's not really a score; I received from Alvin Lucier a complete technical description of the piece: what kind of instruments could be used and how the sound of the instruments comes out... what one should do during the performance, and some other specifications...
This implies it is a work that can be created again by others (thus participatory in the Cagean sense): being/performing in the same configuration or arrangement, mise en scène, and allowing their own waves to weave and be, in these moments, in this weather? (JB)
And here is my paper: "Creating Interactive Multimedia Works with Bio-data." Proceedings of the International Conference on New Interfaces for Musical Expression, 30 May - 1 June 2011, Oslo, Norway.
For further reading, see "Performative Science: Biofeedback."
PLATAFORMA BOGOTA: Laboratorio Interactivo de Arte, Ciencia y Tecnología
Tuesday 24th January 2012. 6:00 pm.
Plataforma Bogotá, CL. 10 No. 4 – 28
Lecture by Claudia Robles Angel: Creating Interactive Multimedia Works with Bio-data.
Darren Vincent Tunstall
Darren is a Lecturer at the School of Art, Design & Performance, University of Central Lancashire. He teaches acting and also runs a motion capture facility for animation and performing arts research. His research centres upon two areas: non-verbal communication (specifically gesture) and Shakespeare. He pursues research that considers how our embodied sense of the psychological present - the 'moment' in which we live - conditions what we think and feel about, and what we say and do to, each other. Put simply, who we think we are is to a great extent the result of how we move. The rhythms of our movement determine the story of our life: we know the world in terms of the spatiotemporal effect of forces, and we unconsciously translate this bodily knowledge into a movement style that tells us and others who we think we are. This metaphorical translation of embodied experiential knowledge into an idea of the 'self' acts like a bully upon our behaviour, leading us into making choices that are based upon irrational ideas of control and intimacy. He looks at dramatic performance, such as the history of Shakespearean production, from this perspective. For example, he tries to show how star performers, by disrupting the rhythmic flow of interaction with an asynchronous movement style, will indicate an 'inner life' that controls how the situation of the drama is defined. Key to his research is the use of motion capture for the analysis of human movement, which extends the possibilities for research in other fields such as computer animation. The kind of movement he captures is the movement of actors undergoing exercises that are used as the basis of self-feedback: they witness their own movement, try to change it, watch it again, and so on.
With colleagues he has just developed an extension of the motor-feedback process to include aural response: tones generated in real time from the motion-capture data, which the performer can then react to. Last summer, with colleagues and two actors, he developed a piece of work based on Hamlet using live motion capture as a kind of 'moving point-of-light backdrop' to the physical score of the actors. The actors researched the arrhythmic properties of movement presented in Asperger's syndrome, and the work was developed out of their discoveries. In addition, he has worked with dyspraxic performers, looking at the intersection of physiological and cultural definitions of the condition as it is revealed in movement. Again, this work was primarily concerned with rhythm and the psychological present.
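A sonification of motion-capture data of the kind described above, turning movement into tones the performer can hear and react to, might map joint speed onto pitch. The sketch below is a hypothetical illustration: the function names, base frequency and speed range are invented for the example and do not describe the actual software used in the project.

```python
import math

def joint_speed(p_prev, p_curr, dt):
    """Euclidean speed (units/s) of a 3-D joint position between two frames."""
    return math.dist(p_prev, p_curr) / dt

def speed_to_pitch(speed, base_hz=220.0, max_speed=2.0, octaves=2.0):
    """Map a joint's speed onto a pitch: slow movement stays near the
    base tone, fast movement rises by up to `octaves` octaves.
    Speeds above max_speed are clamped so the pitch has a ceiling."""
    s = max(0.0, min(speed, max_speed)) / max_speed  # normalise to 0..1
    return base_hz * 2.0 ** (octaves * s)
```

A stationary joint holds the base tone (220 Hz here), while rapid movement sweeps the tone upward, giving the performer an immediate aural mirror of their own rhythm.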
(c) 2012 Artaud Forum / Center for Contemporary and Digital Performance