FraMESHift 2012

After a preliminary research phase in 2011, project FraMESHift came to fruition as a live performance, premiering on July 12th, 2012 in Turin, Italy during the “Teatro a Corte” festival, produced by Beppe Navello.
Directed and choreographed by Renata Sheppard, produced by VR&MMP and performed by three Taiwanese dancers (Lu-Kai Huang, Tzu-Ying Ho, Yin-Guan Huang) and the Italian Vanessa Michielon, it was truly a multinational collaboration.

The 5.1 sound for the performance was designed as a continuous, real-time conversation between Pure Data (the sound engine) and MESH (the CG engine).
To clarify the structure of the performance audio content, here are the components involved:

  • Sounds procedurally generated from the movement of three performers on the stage.
  • A CG robot rendered in real time, whose movements were controlled by a fourth, off-stage performer.
  • 40 pre-rendered stop motion videos, whose playback was triggered by the off-stage performer.
  • 13 CG real-time rendered animations, triggered by specific movement gestures of the on-stage performers.
  • A videogame-like environment, interactive and completely controlled by the three on-stage performers.
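
To make the plumbing concrete, here is a minimal Python sketch of how such an OSC conversation can be wired up. The python-osc library, the addresses and the port numbers are assumptions for illustration; in the production system the two engines talked to each other directly.

```python
# Hypothetical OSC bridge between the CG engine (MESH) and the sound
# engine (Pure Data). Addresses and ports are illustrative only.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

pd = SimpleUDPClient("127.0.0.1", 9001)   # Pure Data listening on 9001

def on_gesture(address, dancer_id, animation_id):
    # An on-stage gesture triggered one of the 13 real-time animations:
    # tell Pure Data to start the matching sound cue.
    pd.send_message("/cue/animation", [int(dancer_id), int(animation_id)])

def on_video(address, clip_id):
    # The off-stage performer triggered one of the 40 stop-motion clips.
    pd.send_message("/cue/video", int(clip_id))

dispatcher = Dispatcher()
dispatcher.map("/mesh/gesture", on_gesture)
dispatcher.map("/mesh/video", on_video)

# Listen for messages coming from MESH on port 9000.
BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher).serve_forever()
```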

The Sound of the Performers
These sounds were created from a research thread I began with Renata Sheppard in 2010, based on ideas of sound and movement mapping that we presented during the Dance for the Camera 2011 Workshop at Arts Enter Cape Charles in Cape Charles, Virginia.

John Toenjes from the University of Illinois at Urbana-Champaign created the prototype of the accelerometer/gyroscope set worn by the performers. He also provided the Max/MSP patch that processes the data arriving from the hardware and forwards it to the Pure Data engine through the OSC protocol.

Accelerometer worn by one of the performers during rehearsals.

Our system wasn’t intended as gesture-recognition software per se. Instead, we wanted to create a correspondence between the performers’ movements and the generated sound, matching Laban-based concepts of Effort Life and the dynamic, expressive qualities of the dancers’ movements with specific sound properties.
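
As a toy illustration of this kind of mapping, here is a Python sketch that derives two sound parameters from raw accelerometer data. The feature choices, ranges and constants are all assumptions, not the actual Max/MSP mapping used in the show.

```python
import math

def movement_to_sound(accel, prev_accel, dt):
    """Map raw 3-axis accelerometer data to sound parameters, in the
    spirit of Laban Effort qualities. Constants are illustrative."""
    # Weight: overall energy of the movement -> loudness.
    magnitude = math.sqrt(sum(a * a for a in accel))
    amplitude = min(magnitude / 20.0, 1.0)          # normalise to 0..1

    # Time: sudden vs. sustained -> brightness (filter cutoff).
    jerk = math.sqrt(sum((a - p) ** 2 for a, p in zip(accel, prev_accel))) / dt
    cutoff_hz = 200.0 + min(jerk / 50.0, 1.0) * 4800.0

    return amplitude, cutoff_hz
```

A sudden, strong gesture thus comes out loud and bright, while a sustained, light one stays quiet and dark.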

The Robot
The Robot, whose visual design was developed by the ASALab CG team, played a fundamental role in the performance. Controlled by a fourth, off-stage performer through a Kinect and two Wii-motes, it interacts with the three on-stage performers during the show.

Movement capture engine on the left screen – Rendered robot on the right screen.

The sound is procedurally generated, translating the angular velocity of the robot’s joints into RPMs. The resulting value is used to control synthesizers that generate the sound of the robot’s motors.
For the sound design, I started from Andy Farnell’s famous physical model, presented in his excellent book Designing Sound.
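
In that spirit, here is a minimal sketch of the control side, with an assumed gear ratio and scaling; the synthesis itself ran in Pure Data.

```python
import math

def joint_to_motor_params(angular_velocity_rad_s, gear_ratio=50.0):
    """Convert a joint's angular velocity into motor-like control values.
    The gear ratio and scaling constants are assumptions for the sketch."""
    # rad/s -> revolutions per minute, scaled up by an imaginary gearbox.
    rpm = abs(angular_velocity_rad_s) * 60.0 / (2.0 * math.pi) * gear_ratio

    # Drive the synth: rotation frequency and a speed-dependent amplitude.
    rotation_hz = rpm / 60.0
    amplitude = min(rpm / 3000.0, 1.0)
    return rotation_hz, amplitude
```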
The robot moves within a 360° panorama in the theatre (visually, it was limited to just one of the on-stage screens that were part of the scenic design). The “surround-sound” effect was created with a 5.1 pan-pot, also developed in Pure Data.
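
A sketch of what such a pan-pot computes: given the source azimuth, pick the pair of adjacent speakers and crossfade between them with constant power. The speaker angles below follow the usual ITU 5.1 layout; the actual pan-pot was a Pure Data patch.

```python
import math

# ITU-style 5.1 speaker azimuths in degrees (the LFE is not panned).
SPEAKERS = {"L": -30.0, "C": 0.0, "R": 30.0, "SR": 110.0, "SL": -110.0}

def pan_5_1(azimuth_deg):
    """Constant-power pan over the five full-range channels."""
    ordered = sorted(SPEAKERS.items(), key=lambda kv: kv[1])
    az = (azimuth_deg + 180.0) % 360.0 - 180.0    # wrap to -180..180

    gains = {name: 0.0 for name in SPEAKERS}
    for (n1, a1), (n2, a2) in zip(ordered, ordered[1:] + ordered[:1]):
        span = (a2 - a1) % 360.0
        offset = (az - a1) % 360.0
        if offset <= span:                         # source sits in this pair
            frac = offset / span
            gains[n1] = math.cos(frac * math.pi / 2.0)
            gains[n2] = math.sin(frac * math.pi / 2.0)
            break
    return gains
```

For example, pan_5_1(0.0) puts the source entirely in the centre channel, while pan_5_1(180.0) splits it equally between the two surrounds.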

Physical model realized in Pure Data for the procedural sound of the robot

Stop Motion Videos
Six international artists were involved in the production of 40 stop-motion videos representing the evolution of the traditional telephone, an icon of the history of technology as a communicative device. The Robot “draws” these telephones throughout the performance, almost as if it were revisiting that history chronologically, model by model.
Although a typical post-production workflow was followed for this section in terms of editing the video content and generating the accompanying sound, all the materials still needed to be prepared for the interactive live performance.
After a series of Foley sessions, we synced the sound to the videos but weren’t completely satisfied. We then decided to give the off-stage performer (who controlled the robot) the freedom to introduce variations, providing a real-time connection to the live performance that would make these sounds a little more dynamic and “real”. The Wii-motes’ gyroscopes were used to control a combination of parameters such as the Doppler effect, the frequency modulation and the volume of the clip.
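
The flavour of that mapping, with made-up ranges (the real one was a Pure Data patch fed by OSC):

```python
def gyro_to_variation(yaw_rate_dps):
    """Map a Wii-mote gyroscope reading (degrees/second) to playback
    variations for a stop-motion clip's soundtrack. The parameter
    ranges here are assumptions for illustration."""
    turn = max(-1.0, min(1.0, yaw_rate_dps / 300.0))   # normalise to -1..1

    doppler_shift = 1.0 + 0.05 * turn     # playback-rate pitch bend
    fm_depth = abs(turn) * 40.0           # frequency-modulation depth (Hz)
    volume = 0.8 + 0.2 * abs(turn)        # faster motion -> slightly louder
    return doppler_shift, fm_depth, volume
```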

3D Phone
The 3D phone was rendered in real time by the MESH software. In this case, although the three performers interacted live with the content, the audio stream was pre-rendered. The 3D telephone represents a sort of door to a fantastic world (the Videogame world), where physics behaves in an unusual way (think of the world of “Alice in Wonderland”). We recorded some sounds and processed them heavily to emphasize the distorted reality.

The 3D Telephone

The Videogame World
The real-time, videogame-like world drew upon the iconic video game environment of a tunnel, for which we chose a first-person perspective.
We defined three different states based on the velocity of the camera: slow speed, normal speed and high speed. For each state we established a different set of sounds and, of course, a different mix of the elements.
The sound’s purpose was to support the visuals and emphasize changes in speed, so we controlled parameters such as the distortion of the air (depending on the velocity) and the pitch and Doppler-effect duration of some objects “met” inside the tunnel.
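
A minimal sketch of this state logic. The speed thresholds, layer names and levels are hypothetical; the actual engine was a Pure Data patch.

```python
SLOW, NORMAL, FAST = "slow", "normal", "fast"

MIX = {   # per-state levels for the tunnel's sound layers (illustrative)
    SLOW:   {"drone": 0.9, "wind": 0.1, "objects": 0.4},
    NORMAL: {"drone": 0.6, "wind": 0.4, "objects": 0.7},
    FAST:   {"drone": 0.3, "wind": 1.0, "objects": 1.0},
}

def camera_state(speed):
    # Hypothetical thresholds, in game-world units per second.
    if speed < 5.0:
        return SLOW
    return NORMAL if speed < 15.0 else FAST

def tunnel_sound(speed):
    """Pick the mix for the current state, plus continuously varying
    parameters: air distortion grows with speed, and passing objects
    get shorter Doppler sweeps the faster the camera moves."""
    state = camera_state(speed)
    air_distortion = min(speed / 30.0, 1.0)
    doppler_duration_s = max(0.2, 2.0 - speed / 10.0)
    return MIX[state], air_distortion, doppler_duration_s
```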

While the musical tempo remained the same in each state, some parameters of the musical composition were also tied to the speed of the camera, such as the note velocity or the amplitude modulation, which changed dynamically for some instruments.
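
And a sketch of the music-side mapping, again with made-up ranges: the tempo stays fixed while note velocity and amplitude-modulation depth track the camera speed.

```python
def music_params(speed, base_velocity=80):
    """Keep the musical tempo fixed but scale expressive parameters
    with camera speed: MIDI-style note velocity and an amplitude-
    modulation depth for some instruments. Ranges are assumptions."""
    s = min(speed / 30.0, 1.0)                 # normalise speed to 0..1
    note_velocity = int(base_velocity + s * (127 - base_velocity))
    am_depth = 0.1 + 0.6 * s                   # tremolo depth 0.1..0.7
    return note_velocity, am_depth
```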

Pure Data patch for the Videogame world

5.1 Loudspeaker Setup
Audio was delivered through 2 Meyersound UPM (L, R), 1 Meyersound UPJ (C), 2 Meyersound UPM-1P (SL, SR) and a pair of Meyersound USW-1P subwoofers. This already capable setup was enhanced by the invaluable work of sound engineer Piero Raucci, who meticulously prepared the show by studying the acoustic response of the theatre during pre-production.

Project FraMESHift was realized with the support of the Chin Lin Foundation, Taipei National University of the Arts, Camera di Commercio Industria Artigianato e Agricoltura di Torino. Special thanks to the US-Italy Fulbright Commission and the United States Embassy for supporting the residency of Renata Sheppard.

