On Wed, Feb 28, 2001 at 11:47:26AM +0100, Sander Striker wrote:
> Hi,
> 
> Before I start throwing stuff at you, I'll introduce myself.
> I'm Sander Striker, one of the Samba TNG team members. We have
> been looking at the APR a bit to find out if we can use it in
> our code.

Hi! :-)

>...
> Below is a simple draft of the NAL. The goal is to be able to
> add transports easily, even when not present in the kernel.
> Also this allows an easy way of protocol/transport stacking.
> A 'transport' can also do buffering, authentication, filtering.

It might be interesting to examine the filters that we have in Apache right
now. They provide for the protocol-stacking, buffering, and (hey!)
filtering.

Apache handles authentication outside of the filter stacks (there are two
stacks: one for input, one for output).

That said: what you outlined is reasonably close to a concept we had a while
back called IOLs (Input/Output Layers). They were tossed out about six months
ago in favor of the filter stacks.

Filters are actually very little code. The heavy lifting is all in the
buckets and the brigades (see APRUTIL's "buckets" directory). Implementing a
filter stack is mostly design policy rather than code. What I'm trying to
say :-), is that almost everything is already there. If there *is* something
missing, then we can shove it down from Apache.
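To give a feel for just how little code a filter is, here is a sketch of a
pass-through output filter against the Apache 2.0 filter API. The names
"EXAMPLE" and example_filter are illustrative, and the module boilerplate
(hook registration, etc.) is elided:

```c
/* Sketch of a trivial output filter for Apache 2.0. "EXAMPLE" and
 * example_filter are illustrative names; a real module would also
 * register hooks to insert the filter into the stack. */
#include "httpd.h"
#include "util_filter.h"

static apr_status_t example_filter(ap_filter_t *f, apr_bucket_brigade *bb)
{
    apr_bucket *b;

    /* Walk the brigade; a real filter would rewrite bucket
     * contents here. */
    for (b = APR_BRIGADE_FIRST(bb);
         b != APR_BRIGADE_SENTINEL(bb);
         b = APR_BUCKET_NEXT(b)) {
        /* inspect or transform 'b' */
    }

    /* Hand the (possibly modified) brigade to the next filter. */
    return ap_pass_brigade(f->next, bb);
}

/* Registered once at module init, e.g.:
 *   ap_register_output_filter("EXAMPLE", example_filter, NULL,
 *                             AP_FTYPE_RESOURCE);
 */
```

The filter itself is just that one function; the brigade machinery does the
rest.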

The essence is that you create a brigade of some data. Then you pass that
brigade through N functions which each manipulate the brigade in interesting
ways. Let's take a simple case of generating a file request over an
encrypted pipe:

*) Samba creates a brigade containing two buckets: a FILE bucket for the
   request pipe, and an EOS bucket to signify "end of stream"
*) the brigade is sent into the "output stack" associated with the current
   request
*) the output stack happens to have two filters in it: an encryption filter,
   and a network filter (the latter places the brigade contents onto the
   network).
*) the encryption filter sequences through the brigade, rewriting the
   contents for the requested encryption. it periodically flushes chunks of
   work to the next filter in the stack.
*) the network filter places the content of any brigades passed to it right
   onto the network.

So... that is the design we happen to be using in Apache, and it is embodied
in the various APR and APRUTIL bits. It might be interesting to determine
whether a similar design would work for Samba's needs.

> Ignore name clashes with current APR code please.
> Also, I haven't looked into the apr_pool package to understand
> how it works quite yet. I want it to be _possible_ to tie transports
> to memory management (so we can dump all memory that we used for:
> a session, a request, a reply, etc).

We use pools within Apache to toss everything associated with a request (and
its response). Same for a connection. I've got to believe they will map very
well onto Samba's needs.

Subversion is using APR, too, and it makes heavy use of pools (no malloc!).
Using pools is so much nicer coding-wise, maintenance-wise, and
understandability-wise since you don't have to do a bunch of free() calls
everywhere or the nasty "cleanup because an error occurred partway through
this function" series of goto's. At key points, you just toss the pool and
everything goes. In SVN, we have identified key points to partition memory
and to dump when we're done so that a given operation doesn't consume all
memory (e.g. limit the working set). For example, when we recurse into a
directory, we create a pool, use it for everything in the directory, and
then toss it on the way "out" of the directory. Thus, the working set is
proportional to depth rather than everything in a tree.

> I'll post a memory management draft as well in a few days. It will
> be similar to this when it comes to abstraction.

Look forward to it :-)

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/
