On Wed, 14 Dec 2005, Martin Konold wrote:
> 
> Using SIGIO is often a suitable solution if you want to avoid the complexity 
> of threads. I use it rather often in order to make legacy software more 
> responsive and in order to add some asynchronous behaviour.

Sure. And it's a perfectly fine capability. I'm not saying that SIGIO 
should be replaced by threading in general. 

I just wanted to point out what the _implications_ of SIGIO are. The main 
one is that the process that uses it is still very much single-threaded, 
and can only do one thing at a time. And in particular, system calls are 
NOT pre-empted by signals - even "interruptible" system calls.

And an interruptible system call does not mean that signals will 
interrupt it at any random point in the system call. It really only 
means that it will check whether signals are pending at certain 
well-defined points.

For example, something as simple as reading from a pipe is "interruptible" 
in the sense that a reader won't wait around if a signal happens and the 
pipe is empty. BUT, it will still wait for other events: it will block on 
page faults (signals or no signals), and it will block on getting the lock 
that protects the pipe (ie it may block on the _writer_ taking a page 
fault, because the writer is about to fill the pipe and already took the 
lock).

The same is very much true of sockets etc. 
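
(Purely as an illustrative aside, not from the original mail: here is 
roughly what that looks like from userspace, with made-up fd and signal 
choices. With SA_RESTART left off, a blocking read() on an empty pipe 
comes back with -1 and errno == EINTR when the signal arrives - at that 
well-defined point, not in the middle of a page fault or while the pipe 
lock is held.)

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void handler(int sig)
{
        (void)sig;              /* nothing to do: we only need the EINTR */
}

int main(void)
{
        int fds[2];
        char buf[128];
        ssize_t n;
        struct sigaction sa;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = handler;    /* no SA_RESTART: read() may return EINTR */
        sigaction(SIGALRM, &sa, NULL);

        pipe(fds);
        alarm(1);                   /* signal arrives while we block below */

        n = read(fds[0], buf, sizeof(buf));
        if (n < 0 && errno == EINTR)
                printf("read() was interrupted at a signal check point\n");
        return 0;
}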

So SIGIO is a good method of getting notified of IO events, and it works 
really well (and is generally much simpler - and can often be more 
efficient too - than doing the same with threads). But you just have to 
accept that signals happen in the same context as all other processing, 
and while that is what makes them simpler (and more efficient) to use, it 
is also what makes them inherently more serialized: they will only be as 
"real time" as the process that uses them is.

Normally that is fine. Signals are seldom delayed very much (unless the 
process explicitly blocks them, of course), but people notice certain 
things more easily. A tenth of a second is not very long at all for most 
things, but people _do_ notice small breaks in video (and especially 
audio).

In video, you usually notice it only if some continuous smooth movement 
suddenly isn't. In audio, you notice it every time ;)
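
(Again only a sketch, with invented names, of the kind of SIGIO setup 
being talked about here: F_SETOWN picks who gets the signal, O_ASYNC 
turns on signal-driven IO, and the handler does as little as possible 
and leaves the real work to the main loop - precisely because it runs in 
the same single-threaded context described above.)

#define _GNU_SOURCE
#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t io_pending;

static void on_sigio(int sig)
{
        (void)sig;
        io_pending = 1;         /* just flag it; the main loop does the work */
}

static int enable_sigio(int fd)
{
        struct sigaction sa;
        int flags;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = on_sigio;
        if (sigaction(SIGIO, &sa, NULL) < 0)
                return -1;

        if (fcntl(fd, F_SETOWN, getpid()) < 0)  /* deliver SIGIO to this process */
                return -1;

        flags = fcntl(fd, F_GETFL);
        return fcntl(fd, F_SETFL, flags | O_ASYNC | O_NONBLOCK);
}

The main loop then checks io_pending and drains the descriptor - which is 
also why one slow page fault in that loop delays everything, since there 
is no second thread to fall back on.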

> > The problem is
> > the memory allocations inherent in both the mallocs and internally in the
> > kernel in the writev() itself.
> 
> Can these problems with memory allocation be somewhat dampened by using some 
> smarter memory pooling algorithm which can claim memory from preallocated 
> space?

The normal malloc() is trivial to dampen, and probably is effectively 
dampened by just about _any_ userspace malloc library. And Keith indicates 
that X does some of that on its own in addition to all the normal malloc 
stuff.

So the malloc will probably hurt much less often, unless the X server is 
doing something else (ie it's repainting the screen and does a lot more 
stuff than just send out a couple of mouse events).
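
(For what Martin's "pooling from preallocated space" usually looks like 
in userspace - a sketch only, every name and size here is invented - the 
common trick is a fixed-size arena plus a free list, so the hot small 
allocations never have to go back to the system allocator at all:)

#include <stddef.h>

#define BLOCK_SIZE   256
#define BLOCK_COUNT  1024

union block {
        union block *next;              /* free-list link while the block is unused */
        char payload[BLOCK_SIZE];
};

static union block arena[BLOCK_COUNT];  /* preallocated up front */
static union block *free_list;
static size_t used;

static void *pool_alloc(void)
{
        union block *b = free_list;

        if (b) {                        /* reuse a freed block first */
                free_list = b->next;
                return b->payload;
        }
        if (used < BLOCK_COUNT)         /* otherwise carve from the arena */
                return arena[used++].payload;
        return NULL;                    /* exhausted: caller falls back to malloc() */
}

static void pool_free(void *p)
{
        union block *b = p;             /* payload sits at offset 0 of the union */

        b->next = free_list;            /* push it back onto the free list */
        free_list = b;
}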

> Linus: Can the kernel help so that small memory allocations which might be 
> reclaimed soon can be made fast even when large file IO does happen? This 
> would basically boil down to doing some pooling of memory per process.

Oh, we do. You'd see a lot more of this if we didn't. The kernel also 
tries to make the process that does the dirtying do the writeback most of 
the time, although that's really just a heuristic and it depends on what 
the ratio of dirty to other pages is.

But in the end, the kernel tries even harder to use memory efficiently, 
and that does mean that disk caches in particular are allowed to grow 
aggressively. And that the kernel very much doesn't try to keep lots of 
memory free - free memory is wasted memory. 

And when it comes to actually sending the message, the thing is that the 
process that allocates the buffers for sending is _not_ the same as the 
one that frees them. We obviously pass the allocations around over the 
socket when the data is sent, so the sender continually allocates memory 
and the receiver frees it.

Things are seldom so balanced that you'd see a smooth "allocate as often 
as free". There's a lot of "burstiness" in these things, and the 
sender/receiver situations are seldom mirror images either (ie you often 
have a situation where one process sends a lot more than it receives).

And that's when you'll occasionally block and wait for a writeout to 
complete (or start a few write-outs). 

> Are the SIGIOs queued or aggregated by the kernel in case they can not be 
> immediately delivered? 

SIGIO isn't an RT signal, so it's aggregated.

> Sofar in all my programming I always assumed that the Signals are aggregated 
> and that I have to loop in userspace in order to get all data which might 
> have accumulated in the meantime.

Yes. Although you can ask for discrete signals by using the proper 
realtime extensions (in this case you obviously don't even -want- to do 
that, but in some other situations you do: you can send a "siginfo" block 
along with the signal).
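
(To make that concrete - both halves are sketches with invented names, 
not code from this thread. With plain, aggregated SIGIO one signal may 
stand for many events, so the handler path has to drain a non-blocking 
descriptor until EAGAIN. With the Linux realtime extension, 
fcntl(F_SETSIG) queues discrete realtime signals that carry a siginfo 
block, so the handler is told which descriptor fired via si_fd.)

#define _GNU_SOURCE             /* for F_SETSIG */
#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

static void drain(int fd)
{
        char buf[4096];
        ssize_t n;

        /* fd must be O_NONBLOCK: keep reading until nothing is left,
         * because one aggregated SIGIO can stand for many IO events. */
        do {
                n = read(fd, buf, sizeof(buf));
        } while (n > 0);
        /* n == 0 is EOF; n < 0 with errno == EAGAIN means drained for now */
}

static void rt_io_handler(int sig, siginfo_t *info, void *ctx)
{
        (void)sig;
        (void)ctx;
        /* a real handler would save and restore errno around this */
        drain(info->si_fd);     /* the queued signal says which fd fired */
}

static int enable_rt_sigio(int fd)
{
        struct sigaction sa;

        memset(&sa, 0, sizeof(sa));
        sa.sa_sigaction = rt_io_handler;
        sa.sa_flags = SA_SIGINFO;
        if (sigaction(SIGRTMIN + 1, &sa, NULL) < 0)
                return -1;

        if (fcntl(fd, F_SETOWN, getpid()) < 0)
                return -1;
        if (fcntl(fd, F_SETSIG, SIGRTMIN + 1) < 0)      /* queue discrete signals */
                return -1;
        return fcntl(fd, F_SETFL,
                     fcntl(fd, F_GETFL) | O_ASYNC | O_NONBLOCK);
}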

                        Linus