[ecasound] Re: [Jackit-devel] some maybe good news about timeouts


Subject: [ecasound] Re: [Jackit-devel] some maybe good news about timeouts
From: Jeremy Hall (jhall_AT_uu.net)
Date: Tue Apr 09 2002 - 14:03:32 EEST


note: cross-posted because of the unanticipated comments

In the new year, Paul Davis wrote:
> i would also like to point out that the latency question really only
> matters during monitoring-during-capture. on all serious audio
> interfaces, zero-latency monitoring (either digital or analog) is
> available, making the period size irrelevant. during playback/editing,
> it's nicer to have better "feel" from the controls (gain, ladspa
> parameters, etc.), but 64 frames is way below what you could detect, i
> believe. in this sense, the problem is (a) really only there for some
> consumer audio cards and (b) not really that critical when viewed
> through a realistic lens. however, pride and a few related ideas
> compel me to get to the bottom of it.
>
um, please excuse my ignorance, but I'd like to understand this a little
better. In the case where 0-latency monitoring is provided by the card,
you're really monitoring the rec-enabled tracks on the output channels
corresponding to those tracks.

I would like to understand, from a musician's point of view, how listening
to his vocal track, and only his vocal track, actually benefits him. If
others are singing, he won't hear them; if other tracks are playing, he
won't hear them. I just don't get it.

I can see the possibility of, say, a tape machine sitting behind the DAW,
where the DAW is kind of an invisible device at any point in the signal
path; monitor-inputting with 0 latency is then a win, so that the tape
machine (if digital) isn't out of sync.

If you have the DAW sitting between the preamps and the mixing console,
you lose the benefits of automation, plugin processing, and mixing from
within ardour. You would need an additional recording agent behind the
mixer to record the mixed signal, so the signal path might look like:

microphone -->preamp -->A2D -->DAW -->D2A -->mixing console
-->headphones|monitors|tape

Each route&diskstream would be configured with passthru settings, with
channel 12 going through to channel 12, etc.
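
Something like the following is what I mean by passthru (an untested
sketch, written from memory of the jack headers; the client name and
port names are made up). Picture one of these per route&diskstream:

    /* untested sketch of an N-channel passthru jack client */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <jack/jack.h>

    #define NCHAN 2   /* one pair; scale up to taste */

    static jack_port_t *in_port[NCHAN];
    static jack_port_t *out_port[NCHAN];

    /* runs once per frames_per_cycle; channel i goes straight
       through to channel i */
    static int process (jack_nframes_t nframes, void *arg)
    {
        int i;
        for (i = 0; i < NCHAN; i++) {
            jack_default_audio_sample_t *in =
                jack_port_get_buffer (in_port[i], nframes);
            jack_default_audio_sample_t *out =
                jack_port_get_buffer (out_port[i], nframes);
            memcpy (out, in,
                    nframes * sizeof (jack_default_audio_sample_t));
        }
        return 0;
    }

    int main (void)
    {
        jack_client_t *client;
        char name[32];
        int i;

        if ((client = jack_client_open ("passthru", JackNullOption,
                                        NULL)) == NULL) {
            fprintf (stderr, "jack server not running?\n");
            return 1;
        }
        for (i = 0; i < NCHAN; i++) {
            snprintf (name, sizeof (name), "in_%d", i + 1);
            in_port[i] = jack_port_register (client, name,
                    JACK_DEFAULT_AUDIO_TYPE, JackPortIsInput, 0);
            snprintf (name, sizeof (name), "out_%d", i + 1);
            out_port[i] = jack_port_register (client, name,
                    JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);
        }
        jack_set_process_callback (client, process, NULL);
        jack_activate (client);
        for (;;)
            sleep (1);  /* the process callback does the work */
    }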

now consider removing some of the potentially redundant pieces:

microphone -->preamp -->A2D -->DAW_&_MIXER -->D2A -->headphones|monitors

In this case, you have the drawback that all your effects are digitally
controlled and you must live with the LADSPA ones, however useful or
useless they are. You might be able to use an analogue effects rack, but
ports, channels, and limitations would get real exciting as returns eat
up physical channels and additional A2D and D2A conversions take place.

In this second approach, where ardour is doing the mixing as well as the
recording, the smaller the frames_per_cycle the better. As I have
mentioned before, the old audioengine gave me weird echoing and ghosting
at 64 frames_per_cycle, so I used 128 and got used to the phasing and
latency. If I could get the signal loud enough, the end user would hear
_THAT_ instead of their own internal input monitoring, and the phasing
would be less of an issue. You can see this in the following test:

Mel and Jen are singing, John is playing his electric, Christi might be
playing the piano, and I am at the controls trying to field the whining,
which usually goes like this:

me: How did you like that mix during the take?

Mel: All I could hear was Jen and the piano, I need to hear more of me.

Jen: WHAT? I heard you FINE! I couldn't hear ME

Mel: oh I heard you FINE!

Christi and John shake their heads, proclaiming:

I need the metronome more; if we lose sync again, I think I'm going to
have to get real ill, because it just falls apart.

me: But if we raise the metronome, it bleeds over onto the vocals from
their headphones. Let's try it again and I'll raise the vocals a bit.
John and Christi, you'll have to live with their voices a bit more or
they won't sing out much. But that's ok, we'll get used to it louder anyway.

Sighing, I now must readjust the whole mix so that the resulting output
does not exceed 1.0F, or I have to listen to Mel whining about her tender
ears as clipping occurs. In fairness, the metronome usually bleeds over
onto the microphones and should be avoided. The net effect of all that is
that the instruments are quieter, the metronome stays the same, and the
voices may or may not be raised. It becomes quite clear to me that we need
the ability to load a template session, so that we don't have to manually
copy pan, gain, and ladspa settings between sessions as the same group
does multiple songs. They seem to sing the same way from day to day, so
associating compressor settings with a person leaves only minor
adjustments each day.

I have drifted away from the initial topic, which was latency. In closing,
I'd like to hear (from the ardour community) what kinds of uses you see
ardour playing and what kind of signal flow you anticipate. Please try to
describe it in text rather than drawings.

And from the other multitrack recording people, like those using ecasound:
do you feel these issues apply to you? Do you see your app used in mixing,
storage and retrieval, or both? Taibin, I think these observations should
somehow be reported in the general usability of ardour section (is there
such a thing?) in the FAQ. Are the subtleties associated with latency
entirely obvious to everyone but me? Is what I am doing such a corner
case that it doesn't matter?

Putting the mixing agent in the computer saved me thousands, so maybe it's
not quite just a corner case.

> when you trace the execution path and see that we almost always have
> more than 50% of the period to spare, it's deeply frustrating to find
> that this is not enough headroom.
>
NO KIDDING!

As an aside, do _ALL_ jack clients need to run within the frames_per_cycle
time? So if there are two clients running, say an ardour and an ecasound,
then to be fair each can only have half the frames_per_cycle time: ardour
one half, ecasound the other. If they both can run in under 50% of the
frames_per_cycle, then all is lovely. Finding the most aggressive setting
that will not result in the jack monster having lunch, or in alsa_pcm
reporting XRUNs, requires experience and luck. If ardour takes 60% of the
frames_per_cycle and ecasound takes 39%, or whatever the allowable time
is, does jack whine? Collectively they both finished in time, but ardour
was extra piggy.
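
To put numbers on the budget (assuming a 48000Hz sample rate for the
sake of argument):

    one period at  64 frames =  64 / 48000 ~= 1.33 ms
    one period at 128 frames = 128 / 48000 ~= 2.67 ms

Since clients in a chain run one after another inside a single period,
ardour's slice plus ecasound's slice (plus jack's own overhead) all have
to fit inside that 1.33 ms, which is why the percentages above matter
at all.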

If you're allowed to fake this, then the end user who wishes to capture
the whole session, including dead time, may wish to use a simple client
that connects up to all of ardour's outputs, optionally dithers, and gets
rid of the data through a pipe, a file, multicast, whatever; it doesn't
even matter if we finish late, because a small XRUN isn't important when
you just want this for archival purposes or a remote listener. In my
current setup, remote listeners can grab the mixed signal from the control
room if they wish. Experiments proved useful: a remote listener was able
to observe what was going on from a few hundred miles away. I find it
quite humourous to have it rolling a live signal into, say, a 128k mp3
stream that goes to a file. It doesn't cost much in space, and quality
isn't as big a deal as quantity. I have found the biggest time wasters
are cable reconfigs at the patch bay, copying xml data by hand, and
repositioning microphones. Another big waster is teaching ardour concepts
to a random user who happens to be near the computer and is helping in
an "emergency" situation.
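
The sort of client I'm imagining really is tiny; an untested sketch (the
ringbuffer bits are from memory of jack/ringbuffer.h, and "archiver" and
the port names are invented):

    /* untested sketch: tap a stereo signal, push samples into a
       ringbuffer, and drain it lazily to stdout for piping into a
       file, an encoder, or the network.  an overrun here only drops
       samples from the archive copy, never from the session. */
    #include <stdio.h>
    #include <unistd.h>
    #include <jack/jack.h>
    #include <jack/ringbuffer.h>

    static jack_port_t *tap[2];
    static jack_ringbuffer_t *rb;

    static int process (jack_nframes_t nframes, void *arg)
    {
        int i;
        for (i = 0; i < 2; i++) {
            jack_default_audio_sample_t *buf =
                jack_port_get_buffer (tap[i], nframes);
            /* note: this lands block-interleaved, one cycle of L
               then one cycle of R; a real tool would interleave
               per sample before writing */
            jack_ringbuffer_write (rb, (char *) buf,
                    nframes * sizeof (jack_default_audio_sample_t));
        }
        return 0;
    }

    int main (void)
    {
        jack_client_t *client;
        char chunk[4096];
        size_t n;

        if ((client = jack_client_open ("archiver", JackNullOption,
                                        NULL)) == NULL)
            return 1;
        rb = jack_ringbuffer_create (1 << 20);  /* ~1MB of slack */
        tap[0] = jack_port_register (client, "in_L",
                JACK_DEFAULT_AUDIO_TYPE, JackPortIsInput, 0);
        tap[1] = jack_port_register (client, "in_R",
                JACK_DEFAULT_AUDIO_TYPE, JackPortIsInput, 0);
        jack_set_process_callback (client, process, NULL);
        jack_activate (client);
        /* connect in_L/in_R to ardour's outs, then drain forever */
        for (;;) {
            while ((n = jack_ringbuffer_read (rb, chunk,
                                              sizeof (chunk))) > 0)
                fwrite (chunk, 1, n, stdout);
            usleep (100000);
        }
    }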

Most of the difficulties come when I need to play an instrument and am
unable to mix. I need the metronome loud enough that I can hear it, but
not so loud that it bleeds over onto the instrument mics. This requires:
a person watching me for visual notifications (a pained look or some
signal that I need something); that person must then signal the
controller, also visually; and the controller must make the requested
change as quickly as possible. When the visual exception disappears, the
spotter instructs the controller to cease whatever it was told to do and
wait for more instruction. If the controller can't intuitively figure out
what the musician wants, this signaling must repeat.

Most of the time, the controller is whoever got instructed to press some
key or move the mouse (in the case of gtk). By the way, they seem to like
those movable squares that we originally thought were useless, so maybe
we could turn those into peak metres somehow, or make a display flag
settable so they turn into peak metres instead of the level metres they
are now. In fact, one controller became fascinated that she could make
the little square move from left to right on the screen by pressing the
inc or dec key and holding it down. She shrieked, however, when it went
off the screen. Interrupting a talkative person with the mute button also
seems to be a favourite.
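
(For the curious, the difference between the two metre flavours is just
max(|x|) versus RMS over the cycle; another untested sketch:)

    #include <math.h>

    /* one cycle's worth of metering over a buffer of nframes floats */
    static void meter (const float *buf, unsigned nframes,
                       float *peak, float *rms)
    {
        float p = 0.0f, sum = 0.0f;
        unsigned n;
        for (n = 0; n < nframes; n++) {
            float s = fabsf (buf[n]);
            if (s > p)
                p = s;                 /* peak metre: worst sample wins */
            sum += buf[n] * buf[n];
        }
        *peak = p;
        *rms = sqrtf (sum / nframes);  /* level metre: average power */
    }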

Failing to activate the master record button also generates confusion, as
the singer is singing but nothing is being recorded.

These seem like silly, natural observations, but maybe they help some. I
know I've drifted FAR from the original topic of latency, but I'd like to
see at least some of this explanation show up in the manual, the FAQ, or
simply in common-sense notes. Do others have suggestions on this? Am I
simply out in left field somewhere?

_J

> --p



