[ardour-dev] quick note re: MTC

Thomas Vecchione seablaede at gmail.com
Tue Jan 2 18:15:51 PST 2007


> 
> Ambisonics - in particular higher order - is the best technology for
> creating a 'you are there' effect, e.g. for natural sounds or for 
> classical music where it can capture the entire acoustics of a concert
> hall and allow this to be reproduced exactly. It is not necessarily the
> best technology for theatre sound, but it can easily be combined with
> other methods.

There is rarely a single best technology for theater sound.  What works 
best there is typically called innovation and gaff tape ;)

At any rate, on to the post.  I wasn't expecting a full-blown analysis of 
Ambisonics; I have read some of those, but it has been a while.  Your 
post covered most of what I needed quite well.


> - The idea to represent the direction of a sound in the way it is done
> in Ambisonics (i.e. using an Ambisonic panner) has nothing at all to do
> with the rest of Ambisonics technology. It is just the 'mathematically
> obvious' way to encode directional information, more or less in the
> same way as complex numbers map naturally to points in a plane, or HSV 
> is a 'natural' way to represent colors. The amount of 'detail' increases
> with the order of the system, which translates directly to the number
> of bus signals needed.



So what you are describing would essentially be Ambisonic encoding right 
up until the final stage, and the speaker layout would only be 
determined at the point of bouncing to file, or playback, or whatever, 
correct?  Just trying to understand what may become the end goal for a 
panning system in Ardour, especially since I think you hit on (and 
responded to) a previous post of mine with suggestions about 
multi-channel panning in Ardour.

So in effect it would work like this: each channel gets its panning and 
automation done in the Ambisonic 'style', and the location of the 
speakers means nothing at this point.  On playback, that audio and its 
positional data are sent through an Ambisonic decoder, which provides 
the correct number of channels to be mixed together post-decoder; the 
channel fader modifies the level of that group of channels sent to the 
main mix as normal, and the master fader modifies the overall level of 
the main mix as normal.  Correct?
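
If that picture is right, the first-order math is small enough to sketch 
out.  Something like this (my own rough sketch, not anything from 
Ardour's code; horizontal-only, azimuth measured from the front, and a 
naive square decode purely for illustration):

#include <cmath>

// First-order (B-format) encode: one mono sample plus an azimuth
// becomes three horizontal bus signals W/X/Y.  Speaker positions
// play no part at this stage.
struct BFormat { float w, x, y; };

BFormat encode (float sample, float azimuth)
{
    BFormat b;
    b.w = sample * 0.7071f;             // omnidirectional component
    b.x = sample * std::cos (azimuth);  // front/back
    b.y = sample * std::sin (azimuth);  // left/right
    return b;
}

// Decode for an assumed square of four speakers at +/-45 and +/-135
// degrees; only here does the speaker layout matter.
void decode_square (const BFormat& b, float out[4])
{
    static const float az[4] = { 0.7854f, 2.3562f, -2.3562f, -0.7854f };
    for (int i = 0; i < 4; ++i) {
        out[i] = 0.5f * (1.4142f * b.w
                         + b.x * std::cos (az[i])
                         + b.y * std::sin (az[i]));
    }
}

Higher orders just add more bus signals to the struct and more terms to 
the decode, as far as I understand it.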

In that case, does each channel have to have its own decoder running 
that is simply fed information from the overall setup of the project?  
How would that affect CPU usage per track on average?  (Obviously I'm 
not expecting an exact number, as that is impossible with so many 
possible setups, but perhaps a comparison against the stereo panner 
implemented now?)
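
For what it's worth, my own back-of-envelope guess (assuming a 
horizontal-only first-order panner and a single decode for the whole 
mix, not anything measured in Ardour) looks like this:

#include <cstdio>

// Very rough multiply counts per sample -- my assumptions, not
// measured Ardour figures, and ignoring gain interpolation etc.
int main ()
{
    const int stereo_pan_per_track = 2;  // L and R gain multiplies
    const int foa_encode_per_track = 3;  // W, X and Y gain multiplies
    const int speakers             = 8;  // assumed playback rig
    const int decode_whole_mix     = 3 * speakers;

    std::printf ("stereo pan, per track      : %d\n", stereo_pan_per_track);
    std::printf ("1st-order encode, per track: %d\n", foa_encode_per_track);
    std::printf ("decode, once for the mix   : %d\n", decode_whole_mix);
    return 0;
}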

Also, how would this affect plugin usage?  At first glance it would seem 
that for a plugin to be useful, particularly multichannel reverbs and 
similar plugins, the decoding would have to have already been done, 
correct?  Would that affect how they respond to the audio data passed to 
them, or would they get the original data and be run through the decoder 
just the same, using the same distance and direction parameters as the 
original audio?
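
Writing the two possibilities out as a chain makes them clearer to me, 
at least.  Again, this is just my sketch, with hypothetical stand-ins 
for real plugins:

#include <vector>

// Hypothetical buffer types, just to make the two orderings concrete.
struct BFormat      { std::vector<float> w, x, y; };
struct SpeakerFeeds { std::vector<std::vector<float> > chan; };

// Placeholder processors -- stand-ins for a real decoder and reverbs.
void decode              (const BFormat&, SpeakerFeeds&) {}
void bformat_reverb      (BFormat&)                      {}
void multichannel_reverb (SpeakerFeeds&)                 {}

// Option A: a B-format-aware plugin runs before the decoder, so the
// direction/distance information is still intact when it processes.
void option_a (BFormat& b, SpeakerFeeds& out)
{
    bformat_reverb (b);   // works on W/X/Y directly
    decode (b, out);      // speaker layout applied last
}

// Option B: a conventional multichannel plugin only understands
// ordinary speaker feeds, so the decode has to happen first.
void option_b (BFormat& b, SpeakerFeeds& out)
{
    decode (b, out);            // layout is fixed at this point
    multichannel_reverb (out);  // plugin sees N discrete channels
}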

Very curious about this; I may be dusting off some of the reading I did 
on this a while back to refresh my memory.

> There should also be the option of having no panner at all, for
> any number of outputs.


Yep, that is exactly what I was referring to; the ability to direct-patch 
audio into an output channel is something I would strongly want to see 
kept, of course.

           Seablade


