[Ardour-Users] Automation from analog mix?
nettings at stackingdwarves.net
Sun Dec 19 10:15:30 PST 2010
On 12/19/2010 10:13 AM, Giso Grimm wrote:
> On 12/18/2010 02:56 PM, Jörn Nettingsmeier wrote:
>> i don't quite grasp what decorrelation or echo compensation
>> techniques have to do with it.
> It is simply an adaptive filter technique, used in echo compensation
> (in nearly any modern phone or VoIP software), used for feedback
> cancellation in hearing aids or PA systems, used for noise
> cancellation in cars and active noise cancellation headphones for
> planes. Adaptive filters are even used to control heating systems and
> many more simple tasks; they are designed to estimate quasi-linear
> black-box systems. They can involve estimation of time-dependent
> complex filters (e.g., feedback cancellation, echo compensation), but
> they can also be used to estimate a single gain. The more constraints
> that can be applied to the resulting filter, the more stable the
> estimate is, and the faster it can adapt, with fewer pre-conditions
> on the input signal (e.g., low auto-correlation). This is everyday
> technology.
well, i know where it's being used, and all these are distinctly low-fi
applications, where it's about speech intelligibility, not ten octaves
of 120dB dynamics.
this discussion reminds me a bit of the last ambisonics symposium, where
some people conjectured incredibly powerful systems that looked good in
matlab but had no relevance whatsoever to the real audio world.
one guy seriously proposed an active noise cancelling technique using
two omni-directional higher-order anti-sound emitters. the intended
usage was to quieten a waiting area in a busy train station by 10dB.
it really looked good in matlab, but he was using insanely intense
sound beams that would blow your ears if something went just slightly
wrong, and he only looked at a very limited spectral range. the whole
thing was based on simulation alone, and a very optimistic one at that.
believe me, i've been toiling for 2 years now to deal with things dsp
engineers tell me are no problem. turns out they either only look at
steady-state systems, or just 2-3 octaves, or they are using beam
steering with 120dB side lobes. there is always a catch.
granted, in this case we talk about obtaining metadata, so the prospect
is not as dire as if we were to deal with the signal itself, but wrong
controller values can wreck the sound just as effectively as intrusive
processing would.
> And the stereo mix would be sufficient (in combination with the
> inputs): If you see your mixing console as a time-dependent matrix
> operation, X(t) is your input signal (with many channels, let's say
> N), H(t) is your time-dependent mixing matrix of size 2xN, then
> your stereo mix Y is Y=H*X, and adaptive filtering is nothing more
> than estimating H. The method provides an error estimate. And if it
> does not have to be real-time, then the estimation can definitely be
> improved, especially to find the initial states.
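the offline version of that estimation can be sketched as blockwise
least squares. this is a hedged sketch under my own assumptions (window
length, channel count, a single slowly moving fader, noise-free mix),
not anything anyone in the thread has implemented:

```python
# Sketch: recover a time-dependent 2xN mixing matrix H from the
# multitrack inputs X and the stereo mix Y = H*X, offline (acausally),
# by solving a least-squares problem per analysis window.
import numpy as np

rng = np.random.default_rng(1)
n_ch, n_samp, win = 4, 48000, 4800
X = rng.standard_normal((n_ch, n_samp))        # N input channels
H_static = rng.standard_normal((2, n_ch))      # static part of the mixing matrix
fader = np.linspace(1.0, 0.25, n_samp)         # channel 0 is slowly faded down
Xf = X.copy()
Xf[0] *= fader                                 # apply the fade to channel 0
Y = H_static @ Xf                              # the stereo mix we observe

gains = []                                     # recovered channel-0 level per window
for start in range(0, n_samp, win):
    Xw, Yw = X[:, start:start + win], Y[:, start:start + win]
    # solve Xw.T @ H.T = Yw.T in the least-squares sense
    H_est = np.linalg.lstsq(Xw.T, Yw.T, rcond=None)[0].T
    gains.append(np.linalg.norm(H_est[:, 0]) / np.linalg.norm(H_static[:, 0]))
```

each window's estimate recovers the fader position averaged over that
window, and the least-squares residual doubles as the error estimate
mentioned above. everything here is linear and quasi-stationary by
construction, which is precisely the assumption being questioned.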
ok, if you are doing it with an acausal system, it might even work. but
you definitely _have_ to do it acausally, otherwise you will only
notice changes once a sufficiently strong signal becomes present and
then chase them, which for faders would cause compressor-like pumping,
and for eqs would yield weird sweeping sounds or band-limited pumping.
>> in any case, i really doubt this is feasible. even less so if you
>> start using interesting spatialisation techniques - just imagine
>> the amount of
> I am sure that this method is definitely feasible, as long as the
> applied filters are linear and quasi-stationary, and if we have the
> input signal.
i don't buy this at all, at least not in general. the initial proposal
talked about just faders, then later eq, and following your argument,
yes, for those it might be possible.
> If the mixing is ambiguous, then it doesn't matter if the estimated
> mixer is not correct - the result will be the same. Estimating
> non-linear operations is another topic, not impossible either, but
> far more complex.
but it just doesn't make sense to try to turn an analog desk into a
controller: the tracking will be either too slow or too imprecise, you
need a very long look-ahead, and you do care about ambiguities: it
would be irritating as hell if eq bands jumped around because the
algorithm guessed it was the low-mid band when in fact it was the
overlapping high-mid one. there goes the controller.
next use case was to reverse-engineer analog mixes. totally different
story. everything you described is quite feasible in theory. what i
doubt is that you get the error low enough that mixing engineers will
accept it. and that's just dealing with levels and simple linear
filters. add compression, reverb and correlated groups of signals such
as an m/s piano or drum overhead, and the whole idea goes poof, because
there are so many different ways of arriving at a particular result
that your algorithm will guess wrong - and since you are trying to
create controller data, arriving at an equivalent set of parameters
will not necessarily yield a usable reconstruction of what went on.
these non-linear or very blurry linear things such as reverb are what
makes a mix interesting. manual reproduction of just the fader
automation won't take a remixing engineer that long,
and it won't tell even half of the story.
besides, the point of a remix is, well, that you remix. i.e. you try a
fresh approach, because obviously you (or the marketing department)
found flaws with the original mix.
Lortzingstr. 11, 45128 Essen, Tel. +49 177 7937487
Meister für Veranstaltungstechnik (Bühne/Studio), Elektrofachkraft
Audio and event engineer - Ambisonic surround recordings