On Thu, Apr 28, 2005 at 07:54:29AM -0700, Tom Jackson wrote:

> Bottom line: AOLserver is great for mass virtual hosting of the sort I have
> described: static, database-dynamic, or offsite redirects. It is less helpful
> for file-based dynamic sites, since you will likely have to rely on plain old
> CGI. The built-in Tcl scripting and ADP (AOLserver Dynamic Pages) share
> memory between requests and over the life of the server, all under one
> system user/group. So you would need to carefully control what your users
> are allowed to run; otherwise they could mess with each other and with
> server operation. Using separate AOLserver virtual hosts will not work on a
> massive scale, since each virtual host requires a lot of memory and takes
> time to start up. Adding a virtual host also requires a restart.

This sounds like a good argument for some form of FastCGI-like
solution.  One main AOLserver process, but then also give each user
his own FastCGI server process.
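
Roughly, the front AOLserver would just dispatch on the Host header to
whichever backend process serves that user.  Not real FastCGI, but here's a
crude sketch of the dispatch half in plain AOLserver Tcl (the userports nsv
array and the port numbers are made up; ns_httpget is the stock Tcl-library
helper):

  # Made-up mapping from each user's hostname to his backend's port:
  nsv_set userports jane.example.com 9001

  proc proxy_to_user {} {
      set host [ns_set iget [ns_conn headers] Host]
      set port [nsv_get userports $host]
      # Fetch the page from the per-user backend and relay it:
      set page [ns_httpget "http://127.0.0.1:$port[ns_conn url]"]
      ns_return 200 text/html $page
  }
  ns_register_proc GET / proxy_to_user

That drops the backend's status code and headers, among other sins, but you
get the idea.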

Possibly that FastCGI process could be just another AOLserver that's
been tweaked a bit differently for minimal footprint and low
concurrency.  E.g., turn off the memory-hungry threaded memory
allocator, since without high concurrency you don't need its speed.  Or
possibly a specialized tclsh-based process would be better.
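
Something like this in the per-user nsd config file, say.  (Parameter names
are from memory of the stock 4.x nsd.tcl, so double-check them.  The
threaded "zippy" allocator itself is a compile-time choice -- you'd build
Tcl without USE_THREAD_ALLOC defined.)

  ns_section "ns/threads"
  ns_param stacksize [expr {128 * 1024}]   ;# shrink the per-thread C stack

  ns_section "ns/server/user1"
  ns_param minthreads 1    ;# low concurrency: one conn thread is plenty
  ns_param maxthreads 2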

But either way, it looks like with the proper work, you could set things
up so that the user's custom code needn't much care whether it's running
in the primary or the FastCGI AOLserver process.

Actually though, this FastCGI scenario sounds pretty similar to the
one-process-per-user/site style of virtual hosting anyway.  How would
the two scenarios differ, exactly?

Ah, but either way, even with Zoran's ttrace, the per-thread proc
memory overhead would still bite you hard; you'd need to do extra
hacking on AOLserver to drive that down much further for this
scenario.

You'd probably want to keep shared (non-user-custom) code in read-only
SysV shared memory so that ALL the AOLserver processes could see it.
To do that you'd probably also want to extend the nsv/tsv API to work
transparently in shared memory; that part, at least, probably isn't
too hard.
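
None of this exists today, of course, but if it were transparent, caller
code would barely change -- maybe just a flag or a reserved array name.
Purely hypothetical syntax:

  # Hypothetical "-shared" flag: the array lives in a read-only SysV
  # segment mapped into every AOLserver process; an admin process
  # populates it once at startup, the workers only read it.
  nsv_set -shared sharedcode util::escape_html $procbody
  set body [nsv_get -shared sharedcode util::escape_html]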

User custom code, by definition, needs to be per-process, but ttrace
might not be good enough; you might need to get it down to genuinely
only one copy of each Tcl proc process-wide, for any number of
threads.  You'd of course need per-user memory usage tracking and
limits, to make sure users don't just go crazy defining all sorts of
procs on the fly per-thread even when they don't need to.
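
For the "don't go crazy defining procs" part, even a crude Tcl-level guard
would catch the worst abuses.  A minimal sketch, assuming a made-up budget
of 500 procs -- and it only counts the global namespace; a real version
would walk all namespaces and track actual memory:

  rename proc _orig_proc
  _orig_proc proc {name arglist body} {
      # Refuse new definitions once this interp is over budget:
      if {[llength [info procs]] > 500} {
          error "per-user proc budget exceeded"
      }
      uplevel 1 [list _orig_proc $name $arglist $body]
  }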

If you do all that, then I guess the only remaining problem might be
the size of all the per-thread C stacks used by the Tcl/AOLserver
processes.  Say 1000 users, each with 4 threads, each thread with a
0.5 MB C stack.  That's 2 GB of RAM just for the C stacks.  Still
doable, but somewhat costly.  Hm, Linux has automatically re-sizing
stacks, though; possibly all you need to do is make AOLserver and Tcl
use that (both growing and shrinking) rather than a fixed-size stack.
Anybody know what that would require?
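
For the record, the 2 GB figure above, checked in a plain tclsh:

  set users 1000
  set threads_per_user 4
  set stack_kb 512      ;# 0.5 MB C stack per thread
  puts [expr {$users * $threads_per_user * $stack_kb / 1048576.0}]
  # prints 1.953125, i.e. roughly 2 GB for the stacks alone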

--
Andrew Piskorski <[EMAIL PROTECTED]>
http://www.piskorski.com/

