Ah -- old message I didn't see at first.... replies in-line below....


On Dec 9, 2009, at 4:06 PM, Tom Jackson wrote:

> Jim,
> 
> Looks like a lot of really cool stuff.
> 
> One question about ns_quewait: the only events are NS_SOCK_READ
> and NS_SOCK_WRITE, which matches up with the tcl level and also matches
> up with Ns_QueueWait. Are the other possible file events handled
> somewhere else? Note that tcl folds exceptional conditions into
> read/write, which seems not ideal, but this interface seems to ignore
> all other conditions.
> 
> If the idea is that the connection will fail by default, which will cause a
> connection abort, then this is a great design.
> 
> Anyway, for instance, the WaitCallback function distinguishes
> READ/WRITE/DROP, but the registration proc NsTclQueWaitObjCmd only
> handles "readable,writable".
> 
> One huge advantage of the AOLserver "fileevent" interface (over the
> tcl interface) is that the event type is more clearly defined. This
> makes callbacks/handlers a little simpler. My only worry in this
> particular case is that a connection will get stuck with an unhandled
> event.  We currently define:
> 
> #define NS_SOCK_READ          0x01
> #define NS_SOCK_WRITE         0x02
> #define NS_SOCK_EXCEPTION     0x04
> #define NS_SOCK_EXIT          0x08
> #define NS_SOCK_DROP          0x10
> #define NS_SOCK_CANCEL        0x20
> #define NS_SOCK_TIMEOUT       0x40
> #define NS_SOCK_INIT          0x80
> #define NS_SOCK_ANY           (NS_SOCK_READ|NS_SOCK_WRITE|NS_SOCK_EXCEPTION)


Yup -- at the C level it's read and/or write, and at the Tcl level read or write.  
But the code could be hacked to handle more conditions, e.g., "priority" data 
(although I'm not sure what that is) or specific checks for dropped connections 
(apparently the Ns_Task interface silently sets POLLIN if it sees a POLLHUP, but 
that's not the case elsewhere in the code).    Being consistent would be smart, 
although we may be inviting new bugs in weird ways.
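
Just to make that concrete, here's roughly what a callback handling the full
mask might look like at the C level.  Totally untested, and the (sock, arg, why)
shape plus the NS_TRUE/NS_FALSE return convention are borrowed from the ordinary
Ns_SockCallback-style callbacks -- a sketch of the idea, not the ns_quewait API:

    #include "ns.h"

    static int
    FullEventCallback(SOCKET sock, void *arg, int why)
    {
        if (why & (NS_SOCK_DROP | NS_SOCK_EXIT | NS_SOCK_TIMEOUT)) {
            /* Peer went away, server is exiting, or we timed out:
             * clean up and ask to be unregistered. */
            ns_sockclose(sock);
            return NS_FALSE;
        }
        if (why & NS_SOCK_EXCEPTION) {
            /* "Priority" data or other exceptional condition -- handle it
             * explicitly rather than silently folding it into READ. */
            ns_sockclose(sock);
            return NS_FALSE;
        }
        if (why & NS_SOCK_READ) {
            /* ... drain whatever is available ... */
        }
        if (why & NS_SOCK_WRITE) {
            /* ... flush pending output ... */
        }
        return NS_TRUE;    /* keep the callback registered */
    }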


> 
> Another question about use of interps.
> 
> Interps are bound to threads, so they don't move around or follow a 
> connection.
> 
> The new filter points may create an interp. I'm not sure which thread
> creates the interp. The prequeue filter runs after all content is
> uploaded.  In the prequeue filter you register a read/write filter
> (opening a socket). This is quite new, something like a recursive
> filter. (Or do these filters fire for I/O on the main conn?)
> 
> Are these interps created and destroyed for each connection, or can
> they be shared?



Nope -- the interps are allocated/deallocated as needed just like ordinary 
connection interps.  But, since the connection will shuffle from one thread to 
the next, interps used by the connection (if any) go through the "garbage 
collection" phase as needed, e.g., closing all open files and clearing global 
vars.  This is why the "ns_cls" interface would be needed to stash 
per-connection context between threads.

And this means you could have three interps/threads involved:

-- read callbacks in a "reader" thread (if configured; optional for ordinary 
sockets, required for SSL)
-- pre-queue callbacks in the "driver" thread
-- normal execution in the connection thread.
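
The cls piece is what ties those together: since each of those runs in a
different interp, anything you want to carry along has to ride on the conn
itself.  A rough sketch of the C side (the Ns_ClsAlloc/Set/Get signatures are
from memory -- check ns.h -- and registration details are glossed over; the
Tcl-level ns_cls command wraps the same idea):

    #include "ns.h"

    static Ns_Cls ctxCls;               /* allocated once at module init */

    void
    ModuleInit(void)
    {
        /* Second arg is a cleanup proc run when the conn is freed. */
        Ns_ClsAlloc(&ctxCls, ns_free);
    }

    /*
     * Imagined pre-queue callback: runs in the driver thread after the
     * content is read.  Its interp gets cleaned up afterwards, so the
     * context is parked on the conn instead of in a global var.
     */
    static int
    PreQueueProc(void *arg, Ns_Conn *conn, int why)
    {
        Ns_ClsSet(&ctxCls, conn, ns_strdup("looked-up context"));
        return NS_OK;
    }

    /*
     * Ordinary filter proc: runs later in a connection thread with a
     * different interp, but the conn (and its cls slots) is the same.
     */
    static int
    ConnProc(void *arg, Ns_Conn *conn, int why)
    {
        char *ctx = Ns_ClsGet(&ctxCls, conn);

        /* ... use ctx; any Tcl globals set in PreQueueProc are gone ... */
        return NS_OK;
    }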

The code I checked in a month ago tried to deal with all that stuff and avoid 
the leak that was reported, which came from not doing it properly.  Digging in, 
it was clear the interface wasn't complete -- hopefully it's complete and robust 
now and the manpages are close to accurate.



> 
> It seems that there a lot of interesting possibilities with this new
> code. It is actually difficult to compare with tcl's [fileevent]
> interface because this appears much more powerful. For instance, it
> seems very likely that you could turn AOLserver into a proxy server
> without ever invoking connection threads, everything would be done in
> high-speed C based event I/O, but the transfer would still have access
> to a tcl interp.


Yup -- the interface is a bit arcane but it can do interesting things like that.


> 
> My last question is the initialization of the interp. One driver
> thread could service multiple virtual servers. When an interp is
> created for use is there any choice? My understanding of the
> conn-thread pools is that they partition interps into somewhat similar
> groups. For instance, thread pools which handle static files would
> tend to not grow in size over time. Threads which handle adp or tcl
> files could be expected to grow as they serve unrelated dynamic
> content.


The interps are allocated from per-server caches just like in a connection 
thread, so the state should look like you expect (although, as mentioned, global 
vars will "disappear" between the reader/pre-queue interps and the connection 
interps).  But since the driver thread never exits, a misused interp there would 
lead to memory leaks/bloat; in connection threads that can be mitigated via the 
"die after so many connections..." options.  I suppose we could add a "die after 
so many uses..." config to get the same result in the driver thread -- that's a 
good idea; a call to Ns_TclMarkForDelete should do the trick 
after some counter...
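
Something like this is what I have in mind for the counter -- the "uses"
bookkeeping and the config value are made up, and it would have to hang off
whatever cleanup hook runs after each driver-thread callback, but
Tcl_Get/SetAssocData and Ns_TclMarkForDelete are the calls that would do the
actual work:

    #include <stdint.h>
    #include "ns.h"

    #define MAX_INTERP_USES 1000   /* imagined "die after so many uses" config */

    static void
    CountInterpUse(Tcl_Interp *interp)
    {
        long uses = (long)(intptr_t) Tcl_GetAssocData(interp, "driver:uses", NULL);

        if (++uses >= MAX_INTERP_USES) {
            /* Next release tosses this interp instead of caching it,
             * so slow bloat in the driver thread gets reset. */
            Ns_TclMarkForDelete(interp);
            uses = 0;
        }
        Tcl_SetAssocData(interp, "driver:uses", NULL, (ClientData)(intptr_t) uses);
    }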


BTW: I'm learning more and more about the whole LAMP stack (like most of us, I 
suspect).  While PHP is quite comfortable, the gymnastics AOLserver goes 
through to "warm up" and re-use interps are absent in the LAMP world. There 
are "APC" caches which store and re-use the bytecodes (similar to ADP's caches), 
but none of the complexity around registering at-init routines, garbage 
collection, etc.  While all that stuff was a bit messy and confusing in 
AOLserver, it worked -- performance of interesting LAMP apps like Drupal seems 
to suffer for lack of the lower-level design principles we had in AOLserver.  
Interesting to see how this has evolved over 15 years (some of the first code 
for multithreaded AOLserver appeared in early 1995 -- I had more hair then).

-Jim






> 
> tom jackson
> 
> On Mon, Dec 7, 2009 at 8:21 PM, Jim Davidson <jgdavid...@mac.com> wrote:
>> Hi,
>> 
>> I just checked in some changes to hopefully fix the pre-queue interp leak 
>> muck (and other bugs).  I also added read and write filter callbacks -- the 
>> read callbacks can be used to report file upload progress somewhere.  And, I 
>> added new ns_cls and ns_quewait commands to work with the curious 
>> Ns_QueueWait interface.  Some man pages were updated to reflect the changes 
>> including an example in ns_quewait.n.   There's not yet an HTTP protocol 
>> interface on top of ns_quewait but it could be added, letting you do some 
>> REST things, for example, in an event-driven manner before your connection 
>> gets running.
>> 
>> -Jim
>> 
>> 
>> On Dec 1, 2009, at 4:45 PM, Jeff Rogers wrote:
>> 
>>> Jim Davidson wrote:
>>>> Right -- the pre-queue thing operates within the driver thread only,
>>>> after all content is read, before it's dispatched to a connection.
>>>> The idea is that you may want to use the API to fetch using
>>>> event-style network I/O some other bit of context to attach to the
>>>> connection using the "connection specific storage" API.  So, it won't
>>>> fire during the read.
>>> 
>>> Can you share any specific examples of how it has been used?  It's always 
>>> struck me as an unfinished (or very specific-purpose) API since it's 
>>> undocumented and it seems that doing anything nontrivial is liable to gum 
>>> up the whole works since the driver is just a single thread.
>>> 
>>>> However, a progress callback API could be good -- wouldn't be called
>>>> unless someone wrote something special and careful enough to not bog
>>>> things down.  Maybe an "onstart", "onend", and "on %" and/or "on #"
>>>> callback which could do something like dump the progress in some
>>>> shared (process or machine wide) thinger for other threads to check.
>>>> I like the idea...
>>>> Oh, and the pre-queue leak sounds like a bug -- should just be
>>>> re-using the same Tcl interp for all callbacks.
>>> 
>>> In the case of a tcl filter proc, ProcFilter gets the server/thread 
>>> specific interpreter for the connection and expects NsFreeConnInterp to be 
>>> called later, but in the case of pre-queue filters NsFreeConnInterp is 
>>> never called in the driver thread so it allocates (PopInterp) a new interp 
>>> every time.  Adding in a call to NsFreeConnInterp after the prequeue 
>>> filters looks like it fixes the problem.  If a filter proc is added into 
>>> SockRead the same thing would need to happen (potentially in the reader 
>>> thread instead of the driver thread).
>>> 
>>> One thing I am confused about, though, is why, without calling NsFreeConnInterp 
>>> in the driver thread, it just leaks the interps rather than crashing when it 
>>> tries to use the interp in the conn thread, since it looks like a new conn 
>>> interp wouldn't get allocated in that case.
>>> 
>>> I also don't understand why there can be multiple interps per server+thread 
>>> combo in the first place (PopInterp/PushInterp); I'd expect that only one 
>>> conn can be in a thread at a time and that it always releases the interp 
>>> when it leaves the thread.
>>> 
>>> -J
>>> 
>>>> -Jim
>>>> On Nov 25, 2009, at 5:46 PM, Jeff Rogers wrote:
>>>>> It looks like the pre-queue filters are run after the message body
>>>>> has been read, but before it is passed off to the Conn thread, so
>>>>> no help there.  However it looks like it would not be hard to add
>>>>> in a new callback to the middle of the read loop, tho it's
>>>>> debatable if that's a good idea or not (for one, it would get
>>>>> called a *lot*).
>>>>> Curious about that tcl prequeue leak.  I guess no one uses or cares
>>>>> about it, since the symptom is serious (more than just a really big
>>>>> memory leak, it crashes the server too), the cause is pretty
>>>>> obvious and the fix appears on the surface to be pretty obvious,
>>>>> although it does result in prequeue filters working differently
>>>>> from all the others, in particular that it would use a different
>>>>> interp from the rest of the request.
>>>>> -J
>>>>> Tom Jackson wrote:
>>>>>> There is one possibility. There is a pre-queue filter in
>>>>>> AOLserver (run inside the driver thread). It works from the Tcl
>>>>>> level, but creates a memory leak equal to the size of an interp,
>>>>>> in other words a huge memory leak. However, maybe at the C level,
>>>>>> you could create a handler which would do something interesting
>>>>>> before returning control back to the driver thread and ultimately
>>>>>> the conn thread. I'm not sure exactly when the pre-queue filters
>>>>>> are activated, but if it is before reading the message body, it
>>>>>> might be useful.