> > > No details about the protocols used. Gigabit ethernet is recommended for
> > > "the best performance".
> > >
> > Mmm ... It says:
> >
> > The protocol is independent of the network hardware, thanks
> > to the use of the standard TCP.
> >
> >
> > So, tcp/ip?
>
> More likely is mLAN ove
> > > > It's not unusable, but IIRC it can get to several ms of jitter.
> > >
> > > Why is that? The USB iso clock is every ms IIRC, so naively you
> > > would expect the maximum jitter to be just under 1ms (if the bus was
> > > saturated by audio transfers), and less in proportion to the degree of
> > [...]
> > > > > > The problem here is that class compliant devices suffer bad timing
> > > > > > because they use bulk transfers for MIDI data. The standard for
> > > > > > MIDI over FireWire is much better.
> > [...]
> > > > > Is the timing really that bad? I don't even think a firewire 8x8
>
[...]
> > MIDI streams need a reliable transport with guaranteed bandwidth. If
> > USB can't provide this, then it is not really suitable for MIDI, but I'm not
> > saying it is unusable, just that it may perform worse than traditional
> > serial multiport MIDI interfaces.
>
> USB can provide this
> > The problem here is that class compliant devices suffer bad timing
> > because they use bulk transfers for MIDI data. The standard for
> > MIDI over FireWire is much better.
>
> I don't agree that USB bulk transfers cause bad MIDI timing.
> Of course, you can't use the same US
[...]
> > > > The problem here is that class compliant devices suffer bad timing
> > > > because they use bulk transfers for MIDI data. The standard for
> > > > MIDI over FireWire is much better.
[...]
> > > Is the timing really that bad? I don't even think a firewire 8x8
> > > rackmount MIDI inte
> On Fri, 2004-09-10 at 07:49, Martijn Sipkema wrote:
>> [...] the USB specification. And it even appears like some vendors
>> are (finally!) starting to follow suit:
>>
>>http://midiman.com/products/en_us/KeystationPro88-main.html
>>
>>- "USB class compliant-no drivers required for
>> Windows XP or Mac OS X"
>
> M-Audio started following
From: "Steve Harris" <[EMAIL PROTECTED]>
> On Sat, Aug 14, 2004 at 10:07:06PM +0200, Benno Senoner wrote:
> > >UDP also has unbounded transit time. In practice it's OK if you don't want
> > >low latencies (just use RTP), but for low latency you really need one of
> > >the non-IP ethernet protocols th
From: "Bill Huey (hui)" <[EMAIL PROTECTED]>
> On Tue, Jul 13, 2004 at 11:44:59PM +0100, Martijn Sipkema wrote:
> [...]
> > The worst case latency is the one that counts and that is the contended case. If
> > you could guarantee no contention then the worst case
From: "Bill Huey (hui)" <[EMAIL PROTECTED]>
> On Tue, Jul 13, 2004 at 01:09:28PM +0100, Martijn Sipkema wrote:
> > [...]
> > > Please double-check that there are no priority inversion problems and that
> > > the application is correctly setting the sche
From: "Paul Davis" <[EMAIL PROTECTED]>
> >Hmm, I've just recently learned about the Priority Ceiling Protocol,
> >an extension to the Priority Inheritance Protocol, which explicitly prevents
> >deadlocks. And I've learned about both in a RTOS course, so I'm a little
> >surprised by your statement about t
From: "Christian Henz" <[EMAIL PROTECTED]>
> On Tue, Jul 13, 2004 at 10:55:48AM -0400, Paul Davis wrote:
> > >Thus, the fact that Linux does not support protocols to prevent priority
> > >inversion (please correct me if I am wrong) kind of suggests that supporting
> > >realtime applications is not
From: "Paul Davis" <[EMAIL PROTECTED]>
> >Thus, the fact that Linux does not support protocols to prevent priority
> >inversion (please correct me if I am wrong) kind of suggests that supporting
> >realtime applications is not considered very important.
>
> we went through this (you and i in parti
Benno Senoner wrote:
> Martijn Sipkema wrote:
> >It is often heard in the Linux audio community that mutexes are not realtime
> >safe and a lock-free ringbuffer should be used instead. Using such a lock-free
> >ringbuffer requires non-standard atomic integer operations and
[...]
> Please double-check that there are no priority inversion problems and that
> the application is correctly setting the scheduling policy and that it is
> mlocking everything appropriately.
I don't think it is currently possible to have cooperating threads with
different priorities without p
[...]
> interestingly, the design of ASIO only allows 2 interrupts per
> hardware buffer. ALSA is much more flexible in handling this kind of
> thing.
A huge mistake of ASIO IMHO. On the Audiowerk8 for example,
running 3 interrupts per buffer allows using the input DMA interrupt
only; this interr
[...]
> > Sorry Fons, but define acceptable! Please!
>
> I will define as non-acceptable the implication:
>
> Paul uses a text based mail client
> =>
> this explains why his GUI designs are cluttered.
>
> It would be acceptable and in this context even funny with a :-),
> but I didn't s
> [...]
> > > Right, but resolution is just a matter of RAMDAC parameters. All
> > > I want is a 3856x1536 framebuffer with one RAMDAC displaying a
> > > 2048x1536 window and the other displaying a 1808x1356 window. I
> > > don't care about one tiny MB of VRAM being invisible.
> >
> > True, this sh
[...]
> > > There is another mode, where a single buffer forms a big desktop,
> > > of which each RAMDAC displays a part. Seems like stupid driver
> > > limitations restrict this mode to using the same resolution for
> > > both heads, but I'm not sure.
> >
> > It is to be expected that a single ren
[...]
> I just remembered; the Matrox G100/200/400 drivers and/or hardware has
> problems with multiple OpenGL contexts. Matrox have known about it
> for ages, but seem to ignore the problem. This is part of the reason
> why I gave up on my G400.
Most likely a driver problem (or elsewhere in the s
[...]
> > IIRC Matrox cards have a way of making a single framebuffer (with
> > xinerama hints) that appears on two monitors. That way you should
> > get 3d accel on both displays.
>
> ATI has something similar, but their drivers don't seem to work with
> Xinerama the normal way. It (sort of) wor
> > [...]
> > > Xinerama _does_ support open GL, at least with my matrox card, I can have
> > > openGL on one monitor of the two. That is a limitation of the card
> > > hardware, AFAIK, not of X.
> >
> > I doubt this is a hardware limitation. The hardware just renders to AGP or
> > local memory. I
[...]
> Xinerama _does_ support open GL, at least with my matrox card, I can have
> openGL on one monitor of the two. That is a limitation of the card
> hardware, AFAIK, not of X.
I doubt this is a hardware limitation. The hardware just renders to AGP or
local memory. I may be wrong though...
--
[...]
> What is 'ALport' and 'ALconfig', and where are they
> defined?
Those are part of the SGI audio library and I wouldn't expect them
to be available under Linux.
--ms
[...]
> Conventional PCM techniques are unable to reproduce high frequencies
> correctly. And the explanation is very simple.
Actually a correct explanation isn't that simple. Yours is much _too_
simple.
Theoretically a 20 kHz bandlimited signal can be represented _exactly_ as a
40 kHz PCM stream.
> On Thursday 24 July 2003 13:46, Michael Ost wrote:
> > Is there SCHED_FIFO style priority available in the new kernel, with its
> > new threading model? Realtime audio processing doesn't share the CPU
> > very well. The ear can pick out even the slightest glitches or delays.
> > So for Linux to b
> so i've tried to make a new scheduling policy for linux. i've
> called it SCHED_USERFIFO. the intent is basically to allow a process
> ask for x amount of processor time out of every y jiffies. (the user
> part is in the hope that the administrator can set rlimits on the
> amount of perce
> In a system/application, that receives external midi data of any kind,
> is there anything one can assume about _when_ some midi data is received?
>
> i mean, with audio data, you have the buffer size of the dac/adc, which
> (together with sampling rate) enforces some kind of "global clock"
>
> > Because I think ALSA does too much in the kernel (and it is not
> > well documented either).
>
> Wait a minute, why do you say that?
Because:
- I think ALSA is not that well documented.
- I'd rather see a combination of a device specific kernel driver and
a user-space driver than a common kern
[...]
> >I don't think an application should ask for a certain number of frames
> >to wakeup. The driver should dictate when to wake up. This is the way
> >my Audiowerk8 JACK driver works and it would get a lot more
> >complicated if I had to add support for user-space wake up at
> >arbitrary inter
> > Well, I'll shut up about it. I still think it is a mistake. I haven't heard
> > any
> > convincing (to me) arguments why an application should not handle variable
> > sized callbacks.
>
> Because it makes certain types of processing viable, which they are not
> really in variable block systems
> >any
> >convincing (to me) arguments why an application should not handle variable
> >sized callbacks. VST process() is variable size I think as are EASI xfer
> >callbacks, but clearly JACK needs constant callbacks and there is nothing
> >I can do about that...
>
> as i understand it, VST is only
> > > According to the mLAN spec you need a buffer of around ~250us (depending
> > > on format) to collate the packets.
> >
> > Still there is no guarantee that 10 packets always have exactly the same
> > number of samples. You say the mLAN spec says you need a buffer of
> > around ~250us. No
> On Wed, Feb 26, 2003 at 12:38:38 +0100, Martijn Sipkema wrote:
> > Still there is no guarantee that 10 packets always have exactly the same
> > number of samples. You say the mLAN spec says you need a buffer of
> > around ~250us. Note that it doesn't say a buffer of a
[...]
> The bottom level packets are sent at fixed time intervals (obviously,
> corresponding to the frame clock of the bus), but these packets are tiny
> and you get millions of them per second. A useful packet of audio data
> will be made up of a bunch of these.
>
> According to the mLAN spec yo
> > I'm not sure, but it seems the audio transport over FireWire does not
> > deliver a constant number of frames per packet. Does this mean that
> > JACK cannot support FireWire audio without extra buffering?
>
> ISO packets are a fixed size, so there will be a constant number of
> frames per pac
> Some folks here might find this of interest. I do...
>
> http://www.cs.ru.ac.za/research/g99s2711/thesis/thesis-final.pdf
I'm not sure, but it seems the audio transport over FireWire does not
deliver a constant number of frames per packet. Does this mean that
JACK cannot support FireWire audio
> >IMHO the hardware should dictate the blocksize. For most audio
> >interfaces this will be constant. For some it is not.
>
> the claim is that designs in which it is not constant are at least
> less than optimal and at worst, just stupid.
Well, I disagree. I don't think it is a stupid design.
[...]
> I'm a newbie to LAD, but I have some years of experience of developing
> and using a system similar to JACK for routing blocks of samples to
> DSP modules of a digital satellite control receiver and transmitter
> system running on Solaris (we are talking about some megasamples per
> second
[...]
> Instead I would suggest a built in poll mode in JACK for audio hardware
> with strange period sizes. Although not the most beautiful solution, it
> will work reliably and will only be needed for the type of hardware
> that cannot provide interrupt rates which are a power of two.
I'm not a J
> > [...]
> > > > Perhaps you would reconsider having JACK use constant (frames)
> > > > callbacks?
> > >
> > > I think a better solution might be to buffer up enough samples so that
> > > jackd can provide a constant number of frames.
> >
> > I don't think that is a better solution. JACK should be
[...]
> > Perhaps you would reconsider having JACK use constant (frames)
> > callbacks?
>
> I think a better solution might be to buffer up enough samples so that
> jackd can provide a constant number of frames.
I don't think that is a better solution. JACK should be close to the
hardware and del
[...]
> many USB audio interfaces work in a fundamentally different way than
> other audio interfaces. rather than sending an "interrupt" to the host
> after processing 2^N frames, they send an interrupt every N
> msecs.
And JACK doesn't support this because it needs a constant size
(frames) peri
> I'm trying to run a low latency kernel and audio applications on a
> crusoe processor laptop.
>
> Yes, I'm crazy.
You might want to take a look at the following:
http://www.mindcontrol.org/~hplus/aes-2001
In slide 5/6 three different processors are compared and the 400 MHz
Transmeta Crusoe is
[...]
> Isn't the pulse of MIDI clock defined by the BPM, that's how all
> instruments synced to MIDI clock work...
24 MIDI clock messages are to be sent per quarter note.
--ms
[...]
> If you don't know *every* detail of
> a struct, you can't create an instance of one, because you don't know
> it's *size*.
And the offset of its members.
[...]
> So, the basic problem is that it's not the constructor that allocates
> memory for the instance; it's the code generated by the
[...]
> >No, it requires a pure virtual class per distinct interface (abstract
> >class). And I don't see why this would not scale.
>
> you should try writing ardour :)
It might be me who won't scale :) I know writing large applications is
not easy.
> >A friend is just like a member function, i.e
[...]
> i love C++. i think its one of the best things ever. but i happen to
> agree with Erik. the solution proposed by martijn doesn't scale well,
> and doesn't really address the issue in a comprehensive way. it
> requires one pure virtual class per distinct set of private members,
> for a start
> > You are not forced to define the private data members and functions at the
> > same time as the public ones in C++. The way to handle this is to put the
> > public interface in a pure virtual class:
>
> In my opinion (please note that this IS an opinion) the method you propose
> is at least as
[...]
> for an on-topic rant, see Erik de Castro Lopo's interview on mstation.org:
>
> http://mstation.org/erikdecl.php
>
> where he discusses the OO design of libsndfile and libsamplerate
> (surely two of the most rock-solid audio libraries ever!)
From this article:
"I also think that there i
[...]
> :-) I have exactly the same problem with templates, it pretends to be
> dynamic while it's just statically generated (=similar to preprocessor,
> which I guess is your point)
I think the C++ STL is great and a perfect example of the power of
templates. Much better than GLib, and it's sta
[...]
> hold it, guys. (i know, i sometimes can't resist, too.)
I was just able to resist this time...
> please stop this thread and respect anybody's choice of "license" or
> whatever conditions they might offer their software under. if you don't
> like it, don't use it.
I think the question a
Hi,
I've written a low level, i.e. it is not ALSA/OSS but a device
specific interface, driver for the Emagic Audiowerk8 audio card.
I still need to implement I2C (for setting the sample rate) and
switching the input (analog/digital) and the buffer size is currently
fixed (set at compile time) at
> Why is it important to keep the API simple, shouldn't it be functional in the
> first place and make the API usage simple?
Who says a simple API can't be functional?
> Anyway (IMHO), there should really be an API which combines audio and MIDI
playback, recording and timing of events and makes it pos
> > I don't want to support tempo (MIDI clock) scheduling in my MIDI API. This
> > could be better handled in the application itself. Also, when slaved to MIDI
> > clock it is no longer possible to send messages ahead of time, and not
> > supporting this in the API makes that clear to the
[...]
> Within ALSA we have two priority queues, one for tick (bar,beat) scheduled
> events, and one for clock (ns) scheduled events.
As MIDI uses MIDI tick messages for time based sync and MIDI clock messages
for tempo based sync I kind of feel the ALSA sequencer naming is a little
confusing :)
> >MIDI through and any other 'immediate' type MIDI messages do
> >not need to be scheduled, they can be written to the interface immediately.
>
> Yes, they could. It would however necessitate different input routes
> for 'immediate' and 'queued' events to the MIDI output handler.
The MIDI I/O AP
[...]
> >User space MIDI scheduling should run at high rt priority. If scheduling
> >MIDI events is not done at a higher priority than the audio processing
> >then it will in general suffer jitter at the size of the audio interrupt
> >period.
>
> Jitter amounting to the length of time the audio cy
[...]
> Is there already a commonly available UST on linux? To my knowledge the only
> thing that comes close is the (cpu specific) cycle counter.
No, not yet. I think we should try to get hard- or firm-timers and POSIX
CLOCK_MONOTONIC into the Linux kernel.
--martijn
> Hi! I wanted to ask, how about forcing
> an absolute timestamp for _every_ midi event?
> I think this would be great for softsynths,
> so they don't need to work with root/schedfifo/lowlatency
> to have a decent timing. You are not always willing
> to process midi at the lowest latency possible.
[...]
> i just want to note my happiness at reading a post from martijn with
> which i agree 100% !! who says there is no such thing as progress ? :))
Indeed Paul, I'd agree you've made some real progress here :)
--martijn
> This is an idea I had some time ago and simply have not had the time to
> explore.
>
> Nowadays few people would want to do Midi without doing audio at the same
> time. This potentially leads to a great simplification in the handling of
> Midi.
>
> Why not lock the Midi processing to the audio p
> So we need something which handles the timing like the DirectMusic(tm) in
> the Linux kernel.
I would prefer not to have this in the kernel. If the kernel provides accurate
scheduling and CLOCK_MONOTONIC then I think this can and should
be done from user-space. A driver should be able to read
C
> I find that for sending MIDI to an external device, "resolution = RTC
> Hz" works very well. It is a problem that a realtime audio thread
> 'suffocates' a RTC thread if low-latency is required, and only one
> processor available. It's very hard to find a clean solution in this
> case, but firm
> >Haven't written anything using MIDI and JACK (or LADSPA), but would it be
> >possible to have such a system as with Cubase where the softsynths are
> >plugins which receive time-stamped MIDI events (time-stamp is an offset
> >from the block beginning in samples).
Either this (use audio samp
> How does the pull model work with block-based algorithms that cannot
> provide any output until it has read a block on the input, and thus
> inherently has a lower bound on delay?
>
> I'm considering a redesign of I/O handling in BruteFIR to add Jack
> support (I/O is currently select()-based),
> >on every callback. If a node does internal buffering that should not affect
> >the MSCs.
>
> right, because there isn't really an MSC anywhere. as you noted, the
> global jack transport time isn't really equivalent. nothing in the
> main JACK API says anything about doing anything except han
[...]
>
> consider:          node B
>                   /      \
> ALSA PCM -> node A        node D -> ALSA PCM
>                   \      /
>                    node C
>
> what is the latency for output of data from node A ? it depends on
> what happens at n
> >If I use an absolute sleep there is basically no difference. The drift
> >will be the same, but instead of scheduling events from 'now' I can
> >specify the exact time. So a callback would then be like:
> >
> >- get the UST and MSC for the first frame of the current buffer for input
>
> MSC
> nanosleep isn't based on time-of-day, which is what is subject to
> adjustment. nanosleep uses the schedule_timeout, which is based on
> jiffies, which i believe are monotonic.
I'm not sure how nanosleep() is supposed to handle clock adjustment
but I agree it would probably not change its beh
> >[...]
> >> UST can be used for timestamping, but thats sort of useless, since the
> >> timestamps need to reflect audio time (see below).
> >
> >I'd like to have both a frame count (MSC) and a corresponding system time
> >(UST) for each buffer (the first frame). That way I can predict when (
[...]
> if you find the link for the
> ex-SGI video API developer's comment on the dmSDK, i think you may
> also see some serious grounds for concern about using this API. i'm
> sorry i don't have it around right now.
is it http://www.lurkertech.com/linuxvideoio.html ?
this is about the older
> their stuff has never been widely (if at all) used for low latency
> real time processing of audio.
[...]
> ...it doesn't get used this way
> because (1) their hardware costs too much (2) their API for audio
> doesn't encourage it in any way (3) their API for digital media in
> general is conf
[...]
> Its worth noting that SGI's "DM" API has never really taken
> off, and there are lots of reasons why, some technical, some
> political.
Perhaps. See http://www.khronos.org/ for where SGI's dmSDK might
still be going. I think this API might be good for video. So maybe
it is not that go
[...]
> UST can be used for timestamping, but thats sort of useless, since the
> timestamps need to reflect audio time (see below).
I'd like to have both a frame count (MSC) and a corresponding system time
(UST) for each buffer (the first frame). That way I can predict when (UST)
a certain perfo
[...]
> UST = Unadjusted System Time
I believe this is a good introduction to UST/MSC:
http://www.lurkertech.com/lg/time/intro.html
--martijn
[...]
> UST = Unadjusted System Time
>
> I haven't seen any implementations of UST where you could specify a
> different source of the clock tick than the system clock/cycle timer.
Well, no. Is this needed? The UST should just be an accurate unadjusted
clock that can be used for timestamping/sc
> >Does that mean that MIDI output can only be done from a callback?
>
> No, it would mean that MIDI is only actually delivered to a timing
> layer during the callback. Just as with the ALSA sequencer and with
> audio under JACK, the application can queue up MIDI at any time, but
> its only deli
On 23.07.2002 at 15:35:58, Paul Davis <[EMAIL PROTECTED]> wrote:
> >On Tue, Jul 23, 2002 at 07:48:45 -0400, Paul Davis wrote:
> >> the question is, however, to what extent is it worth it. the reason
> >> JACK exists is because there was nothing like it available for moving
> >> audio data around.
> Yep, think of 0-127 ranges for controller data :(
> That is too coarse;
MIDI provides 14bit controller resolution by having controller
pairs. That should be enough for controllers since most sliders/knobs
on hardware have much less than that.
Pitch bend is 14bit also, although there is a lot of
> >Well, it's not _that_ important, but there are a few good reasons...
> >
> >1) The LADSPA API was not designed for ABI changes (most notably the
> > interface version is not exported by plugins). This means that
> > old plugins that you didn't remember to delete/recompile can
> > caus
> Let's say we have threads A (SCHED_FIFO/50) and B (SCHED_FIFO/90). If both
> threads are runnable, B will always be selected and it will run until
> (quoting the man page) "... either it is blocked by an I/O request, it is
> preempted by a higher priority process, or it calls sched_yield".
>
> >> For instance if you have
> >> a mixer element in the signal graph, it is just easier if all the inputs
> >> deliver the same amount of data at every iteration.
> > Hmm, why? I can see that it is a requirement that at every iteration there
> > is the same data available at the input(s) as is
> >> The two threads must run with SCHED_FIFO as they both need to complete
> >> their cycle before the next soundcard interrupt.
> > And even if they both run SCHED_FIFO, they should then also run at the same
> > priority.
>
> Not needed. The SCHED_FIFO protects from other tasks taking t
> On Thu, Jul 11, 2002 at 04:31:18PM +0200, Martijn Sipkema wrote:
> > When implementing a FIFO that is read by a low priority thread and written
> > by a higher priority thread (SCHED_FIFO) that is not allowed to block when
> > writing the FIFO for a short, bounded time. Then
> I must have explained things quite poorly in the article you said you
> read. Having a live scheduler allows you to _not_ understand all the
> complex interactions between blocking operations in your system because
> the liveness means that eventually whatever thread you are waiting for
> will
> [constant nframes]
> > But why is this needed? The only valid argument I heard for this is
> > optimization of frequency domain algorithm latency. I suggested a
> > capability interface for JACK as in EASI where it is possible to ask
> > whether nframes is constant. The application must still ha
> One simple reason, whether a valid design or not, is that there's a lot of
> code that handles audio in constant size blocks.
Ok. I give up. It's clear that most find this more important than the
issue with hardware.
> For instance if you have
> a mixer element in the signal graph, it is jus
so, to summarize:
nframes != constant:
- (better) support for various (stupid) hardware.
nframes == constant:
1- Easier porting of read/write applications.
2- Optimized latency of FFT; FFT done on period size. This would
also require a power of 2 sized period.
So not having nframes constant
> >Again, I agree with you. That's also why I am against a constant nframes,
> >because there is hardware that really doesn't want nframes constant.
>
> such as?
How should I know? :)
I heard some yamaha soundcards generate interrupts at a constant rate not
depending on the sample rate. Perhap
> There're two separate problems here. Constant nframes might be required
> even if the application supports engine iteration from outside sources
> (ie. all audio processing happens inside the process() callback).
> Ecasound is one example of this.
But why is this needed? The only valid argum
> >An example situation for using priority inheritance
> >might be porting a read/write audio i/o application to a
> >callback based interface without too much effort. This can't in the
> >general case be done without adding latency, if there is no blocking
> >allowed in the callback funct
> Linux is not real-time, it has a scheduler that, generally, makes sure
> eventually everyone gets to run. I think people often underestimate
> how useful a "live" scheduler is and how limited a real-time priority
> scheduler is.
I agree. That's why it is needed. And for realtime threads
> victor yodaiken wrote a nice article on why priority inheritance as a
> way of handling priority inversion isn't a particularly attractive
> solution.
I read this article, but I am not convinced. The only argument against
using priority inheritance is that it is supposed to have poor
perform
>Apple acquired Emagic.
>
>no Windows version of any Logic after Sept 30
>
>What implies that to us? Any guess?
>
> haven't seen any other news about it yet.
I saw it on http://www.heise.de
I sure hope that Emagic will now be willing to give the specifications
on their AMT protocol
> Going back to the issue of latency, it should be pointed out that while
> it might not be a big deal if your softsynth takes 25 ms to trigger,
It is unless you only use it with a sequencer.
> latency on the PCI bus is a big problem. If you can't get data from
> your HD (or RAM)
From memo
> if i read this correctly, it's about latency wrt _another_player_. all
> trained ensemble musicians are easily able to compensate for the rather
> long delays that occur on normal stages. not *hearing_oneself_in_time*
> is a completely different thing. if i try to groove on a softsynth, 10
> ms
> How about the 1.0-1.5 ms latencies that everybody tries to obtain (or already
> has) in both Linux/Win world? That always made me wonder if this isn't just
> hype like the 192 kHz issue.
>
> I'm not a professional musician, but a 25 ms latency makes me more than
> happy.
I would say that for pla
> You certainly can't play an instrument with 10ms
> latency.
in 10ms sound travels somewhat more than 3 meters.
that's why i use nearfield monitors :)
--martijn