On Thu, 22 Nov 2001, Paul Davis wrote:

> >I still don't understand the difference. Seeing the jack code, there is
> >nothing new. You are using threads, shm for multiprocess communication
> >and callbacks for the final communication between the application audio
> >code and the arbiter client. Can I see the global JACK scheme which beats
> >this scheme?
>
> synchronous execution of the processing graph. read on.
>
> >Here is our scheme for the share and future mix plugins / aserver:
> >
> ><audio stream producer> -> <shm sender & poll> -> <arbiter & poll> ->
> ><final point - audio device or other consumer & poll>
> >
> >Note that alsa-lib covers the <shm sender & poll>.
>
> aserver doesn't provide a frame count to the client telling it
> how much to do. when the client returns from poll(2), it can find out
> how much space/data is available, but at this time, that has no direct
> correlation with how much it should actually process. i can't see that
> it ever would unless you come up with a whole new set of fake ALSA PCM
> devices in which the relevant alsa-lib calls meet this requirement.

I don't agree here. If an application doesn't send a sufficient count of
frames, then an xrun situation occurs and the arbiter removes the client
from the execution list.
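To be sure we are talking about the same thing, here is how I read the
two models (just a sketch with invented names - ring_avail() and
process_frames() are not real aserver or JACK calls):

    #include <poll.h>

    struct ring;                                    /* opaque shm ring buffer */
    unsigned int ring_avail(struct ring *rb);       /* invented helper */
    void process_frames(void *buf, unsigned int n); /* invented helper */

    /* Model 1 - poll driven (our share/aserver scheme): the client
     * wakes up and sizes its own work from whatever the shared-memory
     * ring buffer happens to hold at that moment. */
    void poll_client(int ready_fd, struct ring *rb)
    {
            struct pollfd pfd = { .fd = ready_fd, .events = POLLIN };

            while (poll(&pfd, 1, -1) > 0)
                    process_frames(rb, ring_avail(rb)); /* client picks amount */
    }

    /* Model 2 - callback driven (your JACK scheme): the engine hands
     * every client the same frame count, so the whole graph steps in
     * lockstep. */
    int process_callback(unsigned int nframes, void *arg)
    {
            process_frames(arg, nframes);           /* engine picks amount */
            return 0;
    }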
> in general, aserver is just not written in a way conducive to low
> latency operation. it's not that it's badly written - abramo and/or
> yourself just clearly did not have this idea in mind.

Well, I'm only trying to determine whether your code is really a better
solution. To me, it seems that it will work with the same overhead as
our idea. You have distributed arbiter code; we work with a compact
arbiter.

> let's take the scheme above and make it have 2 producers and one
> destination:
>
> producer1 -> [ whatever ] -+
>                            |-> consumer
> producer2 -> [ whatever ] -+
>
> producer1 and producer2 will not run in sample sync unless something
> causes them to be executed synchronously. i've seen nothing in aserver
> that provides for synchronous execution. having them both return from

Yes, aserver takes care of shm only. The real kernel is the share
plugin or the (non-existent) mix plugin.

> poll isn't adequate unless upon return from poll they are guaranteed
> to process the same number of frames *and* it's also guaranteed that
> the consumer will do nothing until both producers are finished.

Because the share plugin works with one destination ring buffer, these
requirements are met. The synchronization time frame is given by the
size of the destination ring buffer.

> aserver doesn't, as far as i can tell, guarantee either of
> these things.
>
> [ note: i am assuming that all the participants are "well behaved"; a
> different set of actions becomes necessary when some are not, and JACK
> handles that by removing them from the processing graph ]
>
> we went over and over this on LAD, even with Richard from GLAME.
> eventually, it became clear to everyone, i believe, that if operating
> the way that dedicated h/w works is the goal (which for almost
> everyone, it is), asynchronous execution of graph nodes and blocking
> on data-ready conditions is not acceptable. any other design can lead
> to stalls in the graph and dropouts.
>
> further, there was widespread agreement on LAD that most people don't
> want "arbiters". everyone on other OSes (including some Unix systems
> like IRIX) has gotten along fine with a standard sample format.
>
> >I have a strong suspicion that the JACK engine is only some pre-cache
> >tool which could be replaced by a bigger ring buffer.
>
> if this means what i think it means (i don't really understand the
> sentence), then no. more buffering is precisely what's unacceptable to
> those of us who want to use linux for realtime work. if buffering was
> acceptable, then none of this stuff would be up for discussion: just
> buffer the hell out of everything, and it will work.
>
> >I think that the
> >global serializing and parallelizing scheme can't be avoided or changed
> >in the audio dataflow.
>
> the JACK engine orders all clients in the correct execution order (and
> it picks an order if there is no correct execution order). it
> dynamically reorders the execution chain whenever the processing graph
> is changed. note that it also does not involve a context switch back
> to the server every time a client is finished - clients are chained so

Really? How can you do it?
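The only way I can imagine it is something like this (my guess with
invented names, not your actual code) - each client sleeps on a FIFO,
and the predecessor's write() hands control directly to the next client
in the chain:

    #include <unistd.h>

    int process_callback(unsigned int nframes, void *arg); /* sketch above */

    /* Each client blocks on its own FIFO; the predecessor's write()
     * wakes it, it runs its callback for the engine-fixed frame count,
     * then it wakes its successor.  Control walks the chain
     * client -> client and returns to the server only at the end of
     * the cycle. */
    void client_loop(int fifo_in, int fifo_out,
                     unsigned int nframes, void *ctx)
    {
            char token;

            for (;;) {
                    if (read(fifo_in, &token, 1) != 1)  /* wait for predecessor */
                            break;
                    process_callback(nframes, ctx);
                    write(fifo_out, &token, 1);         /* wake the next client */
            }
    }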
I think that the scenario for the mentioned 2 producers and one consumer
would be like:

1st csw: consumer
2nd csw: producer1 callback
3rd csw: consumer
4th csw: producer2 callback

ALSA scenario (if you imagine the mmaped hardware and application
pointers shared via shm - the same behaviour that the ALSA kernel code
allows for context-switch-free communication with applications using
mmap):

1st csw: producer1
2nd csw: producer2
3rd csw: consumer, woken up at regular time intervals

Where is the difference in the count of context switches?

> that they directly cause the next one in the chain to execute its
> callback.

Well, I think that you described two opposite approaches now. You don't
want to have an arbiter, yet you are using arbiter code to remove bad
clients. It's really the same situation as when we mark the slower
producers as being in the underrun state. Nothing else.

From my point of view: you simplified the communication, using one
stream format and fixed data amounts, but the final result is and will
be the same.

					Jaroslav

-----
Jaroslav Kysela <[EMAIL PROTECTED]>
SuSE Linux        http://www.suse.com
ALSA Project      http://www.alsa-project.org