On the way to massively distributed instruments

October 06, 2010


So I've got this idea: using a web service, we could distribute the controls for an audio instrument to many performers. In fact, it could scale to thousands of performers across the entire world.

Of course, the closer the performer is to the server, the more responsive the instrument will be. However, by distributing appropriate controls over sound, the performers could have an incredible collaborative musical experience.

Although this has all sorts of technical issues, the main obstacle at the moment seems to be distributing the collaborative performance back to a distributed audience in real time, or as close to it as possible. An audience on location could hear the performance quite satisfactorily, but audio streaming, the de facto standard for real-time sound distribution, is still pretty laggy. I'm afraid a performer would feel very disconnected from the sound by the time it reached them.

There is a very large push in the programming realm towards the Model/View/Controller (MVC) style of separating design structures. With the development of HTML5 and the ubiquity of web applications, Ruby on Rails in particular, it makes sense to me to push this idea into a browser to distribute controls for audio performance. The cross-platform support would give the most distribution bang for the buck across operating systems, mobile devices, touch pads, and the like.

Most audio applications already do something like the MVC structure. All of the displayed controls are simply GUI elements that, when used, pass commands to modules/objects that change the way they process sound. In other words, the signal processing is separate from the control elements; or, more appropriately in this instance, the controller is separate from the view. Speed aside, it doesn't matter whether we present these controls in the same application as the audio processing, or even on the same computer. If we place sliders, triggers, toggles, waveform region selection, and so on into an HTML page, we can use AJAX to pass parameter changes back to the server to change the state of the instrument. And once the controls are in an HTML form, we can serve them to any number of users and use session information to let everyone interact with the signal processing engine in a uniquely individual way. Very exciting stuff (at least to me).
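To make that concrete, here is a rough sketch of what the server side of such a page might look like in Rails. The controller name, the Instrument.set call, and the parameter names are all hypothetical; the point is just that each AJAX request carries a parameter name and value, and the session tells us which performer sent it.

```ruby
# Sketch only: a Rails controller receiving AJAX parameter changes.
# Each UI element (slider, toggle, trigger) posts its name and value here.
class InstrumentController < ApplicationController
  def update_param
    # The session distinguishes performers, so each user can be mapped
    # to their own slice of the instrument's controls.
    performer = session[:performer_id] ||= SecureRandom.hex(8)

    name  = params[:name]        # e.g. "filter/cutoff" (hypothetical)
    value = params[:value].to_f  # normalized slider value, 0.0 - 1.0

    # Hand the change to whatever drives the signal processing: an audio
    # graph embedded in the server, or a bridge to an external patch.
    Instrument.set(performer, name, value)

    head :ok
  end
end
```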

Working with Tim Place, we were able to get the Jamoma Audio Graph actually running inside a Rails application and receiving commands from an HTML form. This is very exciting, as it holds the promise of running audio processing directly inside the server, which will help contain the instrument as well as keep latency down. As the Jamoma Audio Graph develops, this will be fun to experiment with.

Until then, however, I've prototyped a number of UI elements in a Rails application that passes commands via OSC to a MaxMSP patch (using the Jamoma Framework, although it isn't necessary) to control the audio processing. This solution is working quite well, and it will be interesting to see how it scales and performs over much larger networks.
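The OSC hop itself can be only a few lines of Ruby. The sketch below assumes the osc-ruby gem (my actual prototype may use something different), and the port and address pattern are placeholders; they just need to match a [udpreceive] object in the Max patch.

```ruby
# Minimal Rails-to-MaxMSP bridge using the osc-ruby gem (one of several
# ways to send OSC from Ruby). Port and address are placeholders.
require 'osc-ruby'

class OscBridge
  def initialize(host = 'localhost', port = 7400)
    @client = OSC::Client.new(host, port)
  end

  # Forward a single parameter change, e.g. forward("/filter/cutoff", 0.5)
  def forward(address, value)
    @client.send(OSC::Message.new(address, value.to_f))
  end
end
```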

I'll post back as things develop. Here's looking forward.

