VDMX + Cinder + Syphon guide

The following is a guide to a setup using Syphon and OSC to combine the flexibility and graphics performance of a C++ framework like Cinder with the live-performance-oriented user interface of VDMX.

VDMX already supports GLSL shaders through its Quartz Composer interface and, most recently, the ISF protocol. But the latter is focused on fragment shaders, and Quartz Composer poses its own limitations when it comes to particular tasks, such as the following: I wanted to use a Microsoft Kinect to generate a depth image that would in turn control a particle system, creating a dynamic abstraction of people's silhouettes. The whole thing should also be audio-reactive and tunable in a live setting.

Since VDMX supports Syphon out of the box, it's possible to use any texture data generated in another application as long as it also supports Syphon. I decided to use the great C++ creative coding framework Cinder along with a library called Cinder-Syphon, since this would give me much more flexibility than a building-block system like Quartz Composer.

This guide assumes that you know how to program in C++/OpenGL and have worked with Cinder, openFrameworks or something similar before. You should also have a basic understanding of Xcode.

The Cinder application code for this can be found here. It's best to keep the central source file "ShdrPartsApp.cpp" open to follow along.

Setting up Cinder and required libraries

  1. Open Terminal and clone the Cinder repository:

    git clone git@github.com:cinder/Cinder.git
  2. Open the Cinder Xcode project in the Cinder/xcode directory. Build the project. (I like to build everything in 64-bit mode; to do that, before you build click on the project root in the navigation bar to the left, and under Build Settings > Architecture choose 64-bit Intel.)

  3. Get these two "Cinderblocks" (additional libraries to use with Cinder):
    a. https://github.com/astellato/Cinder-Syphon
    b. https://github.com/cinder/Cinder-Kinect
    Git-clone them (in the same way you cloned Cinder itself) and put them in the Cinder/blocks directory

  4. Take a look at the Kinect-Basic example in the Cinder/blocks/Cinder-Kinect/samples directory. If you build and run it (check that it's in 64-bit mode), you should see something like this:
    (The window continues to the right with an RGB picture.)

  5. Go to Cinder/tools/TinderBox-Mac and launch the TinderBox application. This is a project creator that will organize all the required libraries etc. for you. In the first screen, choose the "Kinect Basic" template, then select the Cinder-Syphon and Cinder-Kinect blocks (the latter should already be selected) as well as the OSC block, and set the reference settings to "relative" for each. Give your new project a name (such as "AwesomeParticles") and a location in your file system.

  6. Once your project is created, open it in Xcode. You may also want to change it to 64-bit mode if you built Cinder that way. At this point you can connect your Kinect, build and run the application. If it produces an error at startup complaining about a missing Syphon library, you might have to go to your project settings and in Build Phases > Copy Files set Destination to "Frameworks" instead of "Executables". If there is a Kinect-related error, look at the Kinect-Basic example and troubleshoot from there.

At this point we have a project that is taking input from the Kinect and has all the necessary libraries to publish textures via Syphon. Here, I'd strongly recommend you read the 1 Million Particles tutorial. It's a great approach to running a particle system by doing most of the calculations on the GPU (rather than the CPU) and thereby greatly improving performance. A lot of my code is based on this.
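The core trick of that tutorial - keeping particle state in textures and "ping-ponging" between two render targets each frame - can be illustrated on the CPU. The following is a minimal sketch with made-up names, not code from the actual app; on the GPU, the two buffers are FBO color attachments and the update function is a fragment shader:

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Minimal CPU illustration of the ping-pong scheme behind GPU particle
// systems: two state buffers stand in for two FBO color attachments.
// Each frame we read from one buffer, write the updated state into the
// other, then swap their roles.
struct PingPong {
    std::array<std::vector<float>, 2> buffers; // the two "textures"
    int readIdx = 0;

    explicit PingPong(std::size_t numParticles) {
        buffers[0].assign(numParticles, 0.0f);
        buffers[1].assign(numParticles, 0.0f);
    }

    // One "frame": the update function plays the role of the fragment shader.
    template <typename UpdateFn>
    void step(UpdateFn update) {
        const std::vector<float>& src = buffers[readIdx];
        std::vector<float>&       dst = buffers[1 - readIdx];
        for (std::size_t i = 0; i < src.size(); ++i)
            dst[i] = update(src[i]);
        readIdx = 1 - readIdx; // last write target becomes next read source
    }

    const std::vector<float>& current() const { return buffers[readIdx]; }
};
```

Because a frame only swaps an index instead of copying data back, the whole state update costs a single draw call on the GPU.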

How many particles?

If you take a look at the code of ShdrPartsApp.cpp, you can see that I used the 1 Million Particles tutorial as a base, but changed quite a few things around. The basic program flow is this:

  1. In setup(): Create textures that hold information about the particles (position, velocity, age). Since each pixel of these textures refers to one particle in the system, be sure to set the interpolation to GL_NEAREST. We also create a noise texture to introduce some randomness that is independent of the depth image.

  2. In update(): Here we draw our textures into a quad - the trick is that by drawing with a special shader we don't draw anything "visible", but use the shader to calculate the new particle positions.
    First we grab the latest depth image from the Kinect. Then we use an FBO (Framebuffer Object) to update the particles' positions - we bind the FBO and send in the textures with the current positions and velocities. Our vertex shader does a simple pass-through, and in the fragment shader we calculate each particle's new position from its old position and velocity, each sent to the shader as the color of a particular pixel in the corresponding input texture.
    The same goes for the age calculation.
    The velocity of a particle is updated using some other input: In the original 1 Million Particles tutorial, this is a constant noise we generated in setup(), but here we use the depth image we get from the Kinect to make the shapes "seen" by the Kinect act as disturbances in the particle system:

    // in ShdrVelF.glsl
    vec2 coords = vec2(pos.x, 1.0-pos.y);
    // texNoise2 is our depth image from the Kinect
    vec2 depthNoise = texture2D(texNoise2, coords).rg - 0.5;
    // direction is a parameter we can change at runtime, 
    // a sort of "wind-direction"
    depthNoise.r *= direction;
    vec2 noise = 0.001 * depthNoise;
    // speed is another parameter we can change at runtime
    vel += vec3(noise.x,noise.y,0.0) * speed;

    Because we draw into an FBO, the output of the draw command (and therefore the output of all these particle update calculations) is stored in another texture.

  3. In draw(): We use a different GLSL shader to actually draw the particles. This shader only needs the textures that store the particles' positions and ages. We use gl_PointSize to draw billboards instead of points, so we can draw a particle as a little quad with any texture we like on it:

    // in ShdrPosF.glsl
    // we rotate the texture coordinates -> "spinning" particles
    float phi = PI * age * rotSpeed;
    mat2 rotmat = mat2(cos(phi), sin(phi), -sin(phi), cos(phi));
    vec2 coords = rotmat * (gl_PointCoord.st - vec2(0.5)) + vec2(0.5);
    // look up the sprite texture we passed - I'm using a cross
    vec4 colFac = texture2D(texSprite, coords).aaaa;
    // do some colour / alpha modulation based on age
    colFac.a *= 2.0 * age*(1.0-age) * doDiscard;
    colFac.g *= 1.0-age;
    colFac.r *= sin(age*12.56)*0.5 + 0.5;
    colFac.b *= sqrt(1.0-age);
  4. Also in draw(): Instead of drawing to the screen, we draw everything described in the last step into another FBO, because we don't want to show anything in the Cinder app - we want to publish the texture we get from this FBO via Syphon.
    With all this going on (especially the billboard rendering) and the CPU-intensive VDMX running, we actually want to scale all of this down from 1 million particles to something like 256x256 particles - which is still more than enough.

The particle system, with my silhouette creating structure. Here we draw the particle system into an FBO and then render that FBO as a simple quad to see the result. In the final version we won't render the FBO to the screen, but publish it to Syphon.


The Syphon Cinderblock allows you to publish either the screen contents or the contents of a Cinder Texture object. I have found drawing into an FBO and publishing the texture I can get from that FBO to result in better performance than publishing the screen contents.

First we create a server instance in our application class, along with an FBO instance to which we draw everything and a texture to which we copy the FBO data:

syphonServer m_srvSyphon;
gl::Fbo m_fboSy;
gl::TextureRef m_texSyRef;

In setup(), initialize the Syphon instance with a meaningful server name and prepare the FBO as well as the texture:


gl::Fbo::Format format2;
format2.enableColorBuffer(true, 1);
format2.setWrap(GL_CLAMP, GL_CLAMP);
m_fboSy = gl::Fbo(SYWIDTH, SYHEIGHT, format2);

m_texSyRef = gl::Texture::create(SYWIDTH, SYHEIGHT);

I have set SYWIDTH to 800 and SYHEIGHT to 600, a good compromise between performance and quality.

In draw(), before we draw the particle system, we bind our FBO and set its draw buffer:

m_fboSy.bindFramebuffer();
GLenum bfrs[1] = {GL_COLOR_ATTACHMENT0_EXT};
glDrawBuffers(1, bfrs);

Then we prepare the particle system shader, send in all parameters and draw the actual particles. After we are done, we unbind the FBO with m_fboSy.unbindFramebuffer().


Finally, we copy the FBO contents to our texture object and make that texture available using the Cinder-Syphon library:

*m_texSyRef = m_fboSy.getTexture();
// depending on your Cinder-Syphon version, publishTexture takes a
// gl::Texture* or a gl::TextureRef
m_srvSyphon.publishTexture(m_texSyRef.get());

Now all that's left to do is to pull the Syphon source into VDMX. First, run the particle system application. If all is working, there should be a window that shows nothing (because we're not drawing our FBO onto the screen, we only publish it to Syphon). Minimize the window and open VDMX. In the source dropdown menu of any layer, select Syphon and then the source named after your application.

You should see a smooth livestream of your particle system in VDMX.

The back channel

One of the great things about VDMX is how easy it is to map parameters generated from an audio signal, a time-based function or a midi controller to parameters influencing the visuals. Of course we want to harness this for our particle system. We do this by sending OSC messages from a control surface in VDMX to our AwesomeParticles application, and then sending these values into the shaders as uniform parameters.

The Cinder side

This is why we included the OSC block in our project: We simply query for new messages every time update() is called and store the values in member variables of our AwesomeParticles class, so they can be bound to the shaders:

// in declaration
osc::Listener m_listener;

// in setup()
m_listener.setup(5991); // whatever port you like

// in update()
while( m_listener.hasWaitingMessages() ) {
    osc::Message message;
    m_listener.getNextMessage( &message );

    for (int i = 0; i < message.getNumArgs(); i++) {
        if (message.getAddress().compare("/FromVDMX/parts_speed") == 0)
            m_parts_speed = message.getArgAsFloat(i);
        else if (message.getAddress().compare("/FromVDMX/parts_size") == 0)
            m_parts_size = message.getArgAsFloat(i);
        // etc...
        else if (message.getAddress().compare("/FromVDMX/reload_shaders") == 0)
            loadShaders();
    }
}

// in update()
m_shdrVel.uniform("speed", m_parts_speed);
m_shdrVel.uniform("direction", m_parts_direction);

// in draw()
m_shdrPos.uniform("partSize", m_parts_size);
m_shdrPos.uniform("numParts", m_parts_numparts);
m_shdrPos.uniform("rotSpeed", m_parts_rotspeed);

You'll notice that there's also an int message that we can send to reload all shaders. This is useful because shaders can be reloaded at runtime, so we can edit the .glsl files at will during a VDMX performance and then simply send a reload message to our Cinder app, effectively doing a little bit of "live coding".
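As a side note on the if/else chain above: once you have more than a handful of parameters, a lookup table from OSC addresses to the floats they control keeps things tidy. This is a sketch of that alternative, not code from the app, and the names are made up:

```cpp
#include <map>
#include <string>

// Table-driven alternative to the if/else chain: each OSC address maps
// to a pointer to the member variable it controls.
struct ParamTable {
    std::map<std::string, float*> params;

    // Returns true if the address was known and the value stored.
    bool apply(const std::string& address, float value) {
        auto it = params.find(address);
        if (it == params.end())
            return false;
        *it->second = value;
        return true;
    }
};
```

In setup() you would register each address once, e.g. table.params["/FromVDMX/parts_speed"] = &m_parts_speed;, and the message loop in update() shrinks to a single apply() call per argument.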

The VDMX Side

In VDMX, it's best to group the five parameters we send back to the Cinder app via OSC in a "Control Surface".

  1. Go to Preferences and make sure you provide an outgoing OSC port that matches the one in the Cinder app (5991 in this case).

  2. In Workspace Inspector > Plugins create a new Control Surface plugin and use Control Surface Options to create a slider for each of the five parameters. Adjust the ranges as follows:

    parts_speed         [-1, 10]   
    parts_size          [0.1, 4]    
    parts_numparts      [0, 1]      (a fraction)     
    parts_direction     [-1, 1]     (fraction that maps to the angle)   
    parts_rotspeed      [-10, 10]   (rotation speed)

    If the range is not [0, 1], make sure to uncheck "send normalized values". Of course, you can now link the sliders to various sources, such as the Clock or the Audio Processing plugin. I also made a button that simply sends a 1 to the reload_shaders address when it's pressed. This calls our loadShaders() method, re-reading the .glsl files at runtime. Below are two screencasts that illustrate the process:
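On the Cinder side, the fractional parameters still need to be mapped to usable values. The helpers below are hypothetical - the names, the 256x256 maximum and the angle mapping are my assumptions, not taken from the app - and just show one plausible denormalization:

```cpp
#include <cmath>

// Hypothetical helpers mapping normalized slider fractions to the values
// actually used by the shaders.

// parts_numparts in [0,1] -> number of particles actually drawn,
// as a fraction of the full 256x256 grid (clamped for safety).
int numParticlesFromFraction(float fraction, int maxParts = 256 * 256) {
    if (fraction < 0.0f) fraction = 0.0f;
    if (fraction > 1.0f) fraction = 1.0f;
    return static_cast<int>(fraction * maxParts);
}

// parts_direction in [-1,1] -> a wind angle in radians over a full circle.
float directionAngleFromFraction(float fraction) {
    const float pi = 3.14159265358979f;
    return fraction * pi; // -1 -> -pi, 1 -> pi
}
```

Note that in the ShdrVelF.glsl excerpt earlier, direction is used as a plain multiplier on the noise; mapping the fraction to an actual angle as above is just one option.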



The result

Here's a sample performance on vimeo:

VDMX + Syphon + Cinder + Kinect on Vimeo.

And a screencast showing how the parameters are tuned via VDMX (the screen recording is rather choppy, but you get the idea):