[ardour-users] Re: [ardour-dev] Latency: AGP vs PCI video card and XFS journalling vs not

Eric ej at ir.iit.edu
Mon Feb 23 10:39:15 PST 2004

I will try 2.4.  My box is APIC, I imagine, since it's a dual Athlon.
Can I still do anything to help along interrupt-wise?  Unfortunately,
reading some tracks while writing others to disk at the same time is
usually a reality for me.  I have been wondering whether moving all my
completed tracks to one disk and writing to the other would be faster,
although it doesn't seem like one disk is enough to read 12 tracks
concurrently with low latency.  Does anyone have experience with
whether it's better to read everything from one disk and write to the
other, or to read some from each and write concurrently?


On Mon, Feb 23, 2004 at 11:10:33AM +0200, Tommi Sakari Uimonen wrote:
> > beta9+3) with a 2.6.3 kernel and have played around w/ all the
> Try the 2.4 series with the low-latency & preemptive patches. AIUI,
> 2.4+ll+pe is still better than any 2.6.
> > standard latency issues (no extra stuff running, swapped PCI cards
> > around for interrupts, etc.).  However, I'm not able to go lower than
> IRQ 9 would be best for the soundcard.  The IRQ order for a non-APIC
> system is, from highest priority to lowest: 1, 8, 9, 10, 11, 12, 13,
> 14, 15, 3, 4, 5, 6, 7.
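[A quick way to see how your interrupts are currently laid out is to read
/proc/interrupts; the driver names in the right-hand column are entirely
hardware-specific, so yours will differ, but a line carrying several driver
names means those devices are sharing that IRQ:]

```shell
# List current IRQ assignments per CPU; the right-hand column names the
# driver(s) attached to each interrupt line. A line with more than one
# driver name indicates a shared IRQ, which is worth avoiding for the
# soundcard. Device names depend entirely on your hardware.
cat /proc/interrupts
```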
> > 2.  Could it be my XFS filesystems?  If I have 8 or so tracks on one
> > drive, it can't even handle playback w/o giving disk errors.  I've
> > tweaked all the XFS options, and currently have a large (64MB) log for
> > each filesystem that resides on the opposite disk.  Would turning the
> > log size up or down help?  Is there a way to just turn off journalling
> > in XFS, or would ext2 be better?  I did notice that exporting to my
> > default param (i.e. small log) ext3 root partition seems to be faster
> > than to my XFS.
> For recording purposes ("write as much as possible, as fast as
> possible", so no reading of other tracks at the same time, just
> writing) I found ext2 to be fastest (ext3 and reiserfs were included
> in the test), since it doesn't spend time journaling.
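[For what it's worth: as far as I know XFS has no way to switch the journal
off entirely, but you can size the log at mkfs time or move it to an external
log device, which takes the journal traffic off the data disk. A sketch --
the device names /dev/hda5, /dev/hdb1 and the mount point are placeholders
for illustration only, and mkfs destroys whatever is on the target:]

```shell
# Placeholder device names -- adjust before running; mkfs.xfs wipes the
# target partition.

# Internal log, explicitly sized to 64MB:
mkfs.xfs -l size=64m /dev/hda5

# Or put the whole log on a separate disk (external log device):
mkfs.xfs -l logdev=/dev/hdb1 /dev/hda5

# An external log must also be named at mount time:
mount -o logdev=/dev/hdb1 /dev/hda5 /mnt/audio
```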


More information about the Ardour-Users mailing list