[Mar 2] Mapping Movements to Video Effects and Sound with Kinect

The Kinect camera provides three-dimensional data about a person’s movement in physical space. How can this movement be translated into video effects and sound? This class focuses on mapping movement to the parameters of video and audio effects. Using Jitter and the cv.jit library, Kinect data is scaled to create interactive video and sound.

Sofia Paraskeva
Date: March 2nd, 2013, 12 noon to 6pm
Cost: $150 (regular), $125 (student/member)

Pay with PayPal or Credit Card on our Payment Page

Location:
Harvestworks – www.harvestworks.org
596 Broadway, #602 | New York, NY 10012 | Phone: 212-431-1130
Subway: F/M/D/B Broadway/Lafayette, R Prince, 6 Bleecker

WHO IS THIS CLASS FOR?

This class is for video artists, musicians and others who are interested in creating video or audio effects driven by the movement of the body. You will learn how to capture data from the Kinect and use motion tracking methods to map movement onto parameters that can affect video and audio.

Do you want your movement to interact with two video screens? Have you thought of mixing two videos as you move in space? Perhaps you have imagined modulating the speed of a movie as you move from left to right while panning its audio, or creating an interactive orchestra where different locations in space trigger different instruments to play. Or maybe you are a musician who wants to create an interactive dance that generates music through movement. If so, this class is for you.

Some experience with Max/MSP/Jitter is recommended, although not required.

WHAT WILL BE COVERED IN CLASS?

This class will be hands-on, with students following along with the instructor.
Currently, two ways to capture data from the Kinect are the OpenNI framework, an open-source SDK used to develop 3D sensing middleware libraries and applications, and libfreenect, developed by the OpenKinect community.

We will use Jean-Marc Pelletier’s jit.freenect.grab object, based on libfreenect, to capture depth data from the Kinect camera, and the cv.jit library to refine our motion tracking. Once our motion tracking is complete, we will monitor motion in 3D space and map the spatial coordinates of our movement to different video and audio parameters.
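
For readers who want to see the same pipeline outside of a Jitter patch, here is a minimal Python sketch, assuming the libfreenect Python wrapper (freenect) and numpy are installed. It grabs depth frames and finds a crude motion bounding box by frame differencing; in class these roles are played by jit.freenect.grab and the cv.jit objects, and the threshold value here is an arbitrary illustration.

```python
# Minimal sketch: grab Kinect depth frames and track motion with frame differencing.
# Stands in for jit.freenect.grab (capture) and cv.jit (tracking) from the class patch.
# Assumes the libfreenect Python wrapper ("freenect") and numpy are installed.
import freenect
import numpy as np

def grab_depth():
    """Return one 640x480 depth frame from the Kinect as a uint16 numpy array."""
    depth, _timestamp = freenect.sync_get_depth()
    return depth

def motion_bounding_box(prev, curr, threshold=40):
    """Return (x_min, y_min, x_max, y_max) of pixels that changed between two
    frames, or None if nothing moved. A crude stand-in for cv.jit blob tracking;
    the threshold is an arbitrary illustration."""
    diff = np.abs(curr.astype(np.int32) - prev.astype(np.int32)) > threshold
    ys, xs = np.nonzero(diff)
    if xs.size == 0:
        return None
    return xs.min(), ys.min(), xs.max(), ys.max()

if __name__ == "__main__":
    prev = grab_depth()
    while True:
        curr = grab_depth()
        box = motion_bounding_box(prev, curr)
        if box is not None:
            x_min, y_min, x_max, y_max = box
            cx = (x_min + x_max) / 2.0 / curr.shape[1]  # normalized 0..1 horizontal centre
            cy = (y_min + y_max) / 2.0 / curr.shape[0]  # normalized 0..1 vertical centre
            print(f"motion centre: x={cx:.2f}  y={cy:.2f}")
        prev = curr
```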

This will involve using our movement to control video effects such as speed, looping and mixing with the jit.qt.movie object and its attributes, and manipulating objects such as jit.gl.videoplane to apply transformations and other effects in 3D space. Our movement will also trigger audio samples and affect parameters such as volume and panning.
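
As a sketch of the mapping step (not the class patch itself), the snippet below scales a normalized X/Y position into a playback rate, a stereo pan and a level, and sends them out as OSC messages. It assumes the python-osc package; the port number, the /movie/rate, /audio/pan and /audio/level addresses, and the 0.25 to 2.0 rate range are illustrative choices that a udpreceive object in a Max patch would need to match.

```python
# Minimal sketch: map normalized X/Y tracking data to video and audio parameters
# and send them as OSC messages that a Max patch could pick up with udpreceive.
# Assumes the python-osc package; the port, addresses and ranges are illustrative.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)  # hypothetical host/port for the Max patch

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale value from [in_lo, in_hi] to [out_lo, out_hi]."""
    t = (value - in_lo) / float(in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

def send_mapping(cx, cy):
    """cx, cy are 0..1 centroid coordinates from the motion tracker."""
    rate = scale(cx, 0.0, 1.0, 0.25, 2.0)        # left-to-right motion speeds the movie up
    pan = scale(cx, 0.0, 1.0, -1.0, 1.0)         # ...and pans its audio across the stereo field
    level = scale(1.0 - cy, 0.0, 1.0, 0.0, 1.0)  # raising the body raises the volume
    client.send_message("/movie/rate", rate)
    client.send_message("/audio/pan", pan)
    client.send_message("/audio/level", level)

send_mapping(0.8, 0.3)  # example: performer to the right and fairly high in the frame
```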

Topics to be covered in class:

  1. Capture data with the 3D Kinect motion controller: Capture depth and camera data using the jit.freenect.grab object.
  2. Use the cv.jit library for motion tracking: Apply motion tracking methods to extract X and Y data and a bounding box to track motion along the X and Y axes.
  3. Calibrate motion tracking data: Use a calibration patch to scale motion tracking data ranges to match the input required for video and audio effects (see the sketch after this list).
  4. Map movements to effects: Map motion tracking data to parameters of video and audio effects.

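The calibration step in topic 3 can be summarized in a few lines of Python, shown here as a minimal sketch under the assumption that raw tracking values arrive as plain numbers: observe the range a performer actually covers, then normalize incoming values into the 0..1 range the effect parameters expect. The class does this with a Jitter calibration patch; the example numbers below are made up.

```python
# Minimal sketch of the calibration idea: learn the range a performer actually
# covers, then normalize raw tracking values into 0..1 before mapping them to
# effect parameters. The class does this with a Jitter calibration patch; the
# example numbers below are made up.
class RangeCalibrator:
    def __init__(self):
        self.lo = float("inf")
        self.hi = float("-inf")

    def observe(self, value):
        """Widen the known range with a new raw tracking value."""
        self.lo = min(self.lo, value)
        self.hi = max(self.hi, value)

    def normalize(self, value):
        """Map a raw value into 0..1 using the range observed so far."""
        if self.hi <= self.lo:
            return 0.0
        return min(1.0, max(0.0, (value - self.lo) / (self.hi - self.lo)))

cal_x = RangeCalibrator()
for raw in (112, 480, 530, 95, 610):  # raw pixel positions seen during a calibration walk
    cal_x.observe(raw)
print(cal_x.normalize(300))           # -> about 0.40
```
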
REQUIREMENTS FOR CLASS:

This class will use Jean-Marc Pelletier’s jit.freenect.grab object to capture depth data from the Kinect camera and the cv.jit library for motion tracking.
It is recommended that you download and install the cv.jit library and the jit.freenect.grab object prior to class.
Students will need to bring their own Kinect camera.


BIO

Sofia Paraskeva is an artist who explores sound and visuals in the context of leading-edge technology. Using tools such as Max/MSP/Jitter she designs visual and aural narratives for interactive installations and performance. She is a concept creator, a visual effects designer and a natural video editor. Her work spans interactive art and design, visual design, video production, motion graphics and experimental sound. She is currently involved in projects that use computer vision and custom-made wearable interfaces such as gloves and bodysuits to create sound textures and visuals.

Her interactive installation Rainbow Resonance, a computer vision musical interface, has been presented at the Mind’s Eye program at the Solomon R. Guggenheim Museum, the New York Hall of Science, Harvestworks, the Brooklyn Conservatory of Music and the Queens Museum of Art, among other venues. She recently collaborated with composer Joshua B. Mailman to create “The Sound and Touch of Ether’s Flux”, an interactive musical interface that spontaneously generates algorithmic music and visuals through motion detected by the 3D Kinect motion controller and her custom-designed motion gloves. Sofia has a BA in Visual Studies and a BA in Media Communication. She is a graduate of the Interactive Telecommunications Program at New York University.
