On Mar 21, 2010, at 8:56 AM, Andriy Gapon wrote:

> on 21/03/2010 16:05 Alexander Motin said the following:
>> Ivan Voras wrote:
>>> Hmm, it looks like it could be easy to spawn more g_* threads (and,
>>> barring specific class behaviour, it has a fair chance of working out of
>>> the box) but the incoming queue will need to also be broken up for
>>> greater effect.
>> 
>> According to "notes", looks there is a good chance to obtain races, as
>> some places expect only one up and one down thread.
> 
> I haven't given any deep thought to this issue, but I remember us discussing
> them over beer :-)
> I think one idea was making sure (somehow) that requests traveling over the same
> edge of a geom graph (in the same direction) do it using the same queue/thread.
> Another idea was to bring some netgraph-like optimization where some (carefully
> chosen) geom vertices pass requests by a direct call instead of requeuing.
> 

Ah, I see that we were thinking about similar things.  Another tactic, and one
that is easier to prototype and implement than moving GEOM to a graph, is to
allow separate but related bio's to be chained.  If a caller, like maybe physio
or the bufdaemon or even a middle geom transform, knows that it's going to send
multiple bio's at once, it chains them together into a single request, and that
request gets pipelined through the stack.  Each layer operates on the entire
chain before requeueing to the next layer.  Layers/classes that can't operate
this way will get the bios serialized automatically for them, breaking the
chain, but those won't be the common cases.  This will bring cache locality
benefits, and is something that is known to benefit high-transaction-load
network applications.

Scott

_______________________________________________
[email protected] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "[email protected]"