> I wonder if anybody can say what the advantages/disadvantages of the
> Linux networking kernel layer are compared to the STREAMS network layer
> (of Solaris, for example).
hi John,
I'll take a stab at this question, since I'm working on a stream-like
framework in/on top of Linux anyway. First of all, though, to everyone else:
hi. I don't normally post here because I don't develop mainline kernel code;
instead I write code for a network project outside the kernel tree. If this
question doesn't belong on this list, let me know and we can take it offline.
First, some background: the original STREAMS by Dennis Ritchie stacks
processing elements in the kernel. More recent derivatives are Click and
ScoutOS. You can find a Linux port of the latter online; search for 'Scout in
the Linux Kernel', aka SILK.
The major architectural difference between stream-like architectures and
functional architectures (like the Linux kernel) is that in the former, paths
are transient, while in the latter they are hardcoded (as functions). In
general, you set up a streams environment by calling
create_processor("ip"),
create_processor("tcp"),
link_processors("ip", "tcp")
In a functional design, on the other hand, you have to know the exact function
name to call at compile time, e.g.:
int tcp(char *pkt, int plen)
{
        if (ip(pkt, plen) < 0)
                return -1;
        /* [... process tcp stuff ...] */
        return 0;
}
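To make the contrast concrete, here's a toy userspace sketch of the
transient-path idea: the ip -> tcp chain is built as data at run time rather
than hardcoded as a call. All names here (struct processor, run_path, the stub
elements) are made up for illustration; this is not the STREAMS, Click or ffpf
API.

#include <stdio.h>

/* a processing element with a run-time-settable downstream link */
struct processor {
        const char *name;
        int (*process)(char *pkt, int plen);
        struct processor *next;
};

/* stub elements standing in for real protocol processing */
static int ip_process(char *pkt, int plen)
{
        printf("ip: %d bytes\n", plen);
        return 0;
}

static int tcp_process(char *pkt, int plen)
{
        printf("tcp: %d bytes\n", plen);
        return 0;
}

/* walk whatever chain was linked together at run time */
static int run_path(struct processor *p, char *pkt, int plen)
{
        for (; p != NULL; p = p->next)
                if (p->process(pkt, plen) < 0)
                        return -1;
        return 0;
}

int main(void)
{
        /* the equivalent of create_processor()/link_processors():
         * the ip -> tcp path is built as data, not as a fixed call */
        struct processor tcp_el = { "tcp", tcp_process, NULL };
        struct processor ip_el  = { "ip",  ip_process,  &tcp_el };
        char pkt[64] = { 0 };

        return run_path(&ip_el, pkt, sizeof(pkt));
}

The point is only that the topology lives in data structures that can be
rebuilt on the fly, at the cost of an indirect call per element.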
Now then, which should you prefer? STREAMS (which can still be found in all
Solaris versions, IIRC) has been found to be slower than sockets, and since
most kernel paths are relatively simple and static, sockets have basically won
for mainline kernels.
Why do people keep developing stream-like architectures, then? First of all,
streams are not intrinsically slower than sockets. <plug_work>Indeed, the
system I'm currently finishing (ffpf) has been shown to outperform Linux
kernel code for filtering (LSF), and we're hoping to show positive results for
general processing as well.</plug_work>
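For reference, LSF is the classic BPF-style filter you attach to a socket with
SO_ATTACH_FILTER. A minimal userspace example (the filter program here, which
accepts IPv4 frames and drops everything else, is just an illustration and
needs root to open the packet socket):

#include <stdio.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/filter.h>

int main(void)
{
        /* classic BPF: load the ethertype, accept if IPv4, else drop */
        struct sock_filter code[] = {
                { 0x28, 0, 0, 0x0000000c },     /* ldh [12]           */
                { 0x15, 0, 1, 0x00000800 },     /* jeq #0x800, A, D   */
                { 0x06, 0, 0, 0x0000ffff },     /* A: ret #65535      */
                { 0x06, 0, 0, 0x00000000 },     /* D: ret #0          */
        };
        struct sock_fprog prog = {
                .len    = sizeof(code) / sizeof(code[0]),
                .filter = code,
        };

        int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
        if (fd < 0) {
                perror("socket");
                return 1;
        }
        if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER,
                       &prog, sizeof(prog)) < 0) {
                perror("setsockopt");
                return 1;
        }
        /* ... read() only matching packets from fd ... */
        return 0;
}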
Second, some of us believe that with more modular, flexible paths we can
reduce code complexity and increase reuse. I'm not too familiar with
netfilter, but I think it uses a stream-like framework itself. One advantage
is that a new netfilter function can be inserted into the path when its module
is loaded (see the sketch below). In a purely functional framework this would
be impossible, as all functions have to be known in advance.
NB: there is a large gray area here. Do plug-in functions that correspond to
switch cases count as stream objects, or are they just a regular functional
block of code?
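To make the netfilter comparison concrete, here's roughly what splicing a new
element into the IPv4 path looks like on a 2.6-era kernel; the hook signature
has changed across versions, the hook itself does nothing, and the hook point
and priority are only examples:

#include <linux/module.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/skbuff.h>

/* 2.6-era nf_hookfn signature */
static unsigned int noop_hook(unsigned int hooknum,
                              struct sk_buff **skb,
                              const struct net_device *in,
                              const struct net_device *out,
                              int (*okfn)(struct sk_buff *))
{
        /* do nothing, just let the packet continue down the path */
        return NF_ACCEPT;
}

static struct nf_hook_ops noop_ops = {
        .hook     = noop_hook,
        .pf       = PF_INET,
        .hooknum  = NF_IP_PRE_ROUTING,
        .priority = NF_IP_PRI_FIRST,
};

static int __init noop_init(void)
{
        /* loading the module inserts a new element into the path */
        return nf_register_hook(&noop_ops);
}

static void __exit noop_exit(void)
{
        nf_unregister_hook(&noop_ops);
}

module_init(noop_init);
module_exit(noop_exit);
MODULE_LICENSE("GPL");

The registered hooks are kept in per-hook-point lists and called in priority
order, which is exactly the "path built from data" pattern rather than a fixed
chain of function calls.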
Lastly, there are uses where reconfigurability is essential. Click had routing
as its application domain, where the stream topology depends heavily on
administrator policy. There it was a better fit than a static functional code
block with many if/then statements. Perhaps a comparison between an FPGA and a
CPU is in order here: one is more flexible, the other more optimized for a
fixed task, and which wins in terms of speed depends on many other details.
anyway, that's my two cents. And I'm certainly biased. Perhaps someone else
will chip in.
cheers,
Willem de Bruijn
http://ffpf.sf.net for more (slightly outdated) info on my project