Hi Alex,

> The SmApper system also has an application manager, and tens to
> hundreds of interconnected database applications on blade clusters.

That sounds impressive; what does it actually mean?  Just curious: is
that all PicoLisp processes spread over many machines, managing many
databases, all communicating (interconnected in what sense?) to solve
one (or many "unrelated") business problems?  I looked at the SmApper
website and it seems to be quite a secret technology ;-)

>> families): an admin part (quite complex, can change a lot and
>> significantly, can stop quite "often" for upgrades etc.) and public
>> part (quite simple, changes little, minimize downtime).
>
> It depends a lot on the logical structure, and what exchange of
> information has to take place, but I think PicoLisp has the necessary
> mechanisms, mainly using TCP connections.
>
>> - open the database 'pool'
>
> I would use a separate 'pool' for each logical application.
>
>> - fork into two apps (each forked process would load code for
>> different app)
>
> Yes, but you have to keep in mind that it only works well if the
> processes accessing the db are children of a single parent (see 'tell'
> in the reference). In this case it would mean that the two forked apps
> should not do any further forks, which could be inconvenient.
>
> If you want to go with a single parent but different sources, it is no
> problem though, just load the sources after the fork. I do this
> currently for one customer who is still using the old Java applet API,
> and the new 'form' API in parallel (depending on the user). This app has
> in fact three entry URLs, one for the AWT version, one for the Swing
> version, and the new one.
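
If I read the "load the sources after the fork" part right, it would
look roughly like this (the file and URL names are just my guess):

   (load "@lib/http.l" "@lib/xhtml.l" "@lib/form.l")

   (pool "app1.db")           # the single parent opens the database

   (server 8080 "!start")     # 'server' forks a child per request;
                              # an entry URL such as app1.l or
                              # admin1.l would make only that child
                              # load the corresponding sources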

I still do not know how to organize the processes:

My original idea was:

 +--------------+ fork
 | app 1 server | ---> 
 | lightweight  | ---> many http handler processes
 | 24/7         | ...  for app 1 server
 +--------------+

 +----------------+ fork
 | admin 1 server | ----
 | complicated    | ---- many http handler processes
 | upgrades often | ...  for admin 1 server
 +----------------+

 (app 1 server and admin 1 server share the same database: read-write)

     ... etc for other applications
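
In code I imagined this as two completely independent top-level
processes, roughly like this (made-up file and db names):

   # public server, started on its own
   (load "@lib/http.l" "@lib/xhtml.l" "@lib/form.l" "app1.l")
   (pool "app1.db")
   (server 8080 "!start")     # forks a handler child per request

   # admin server, started separately
   (load "@lib/http.l" "@lib/xhtml.l" "@lib/form.l" "admin1.l")
   (pool "app1.db")           # same database, read-write
   (server 8081 "!start")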

Now, to keep a common parent process, I thought:

                        +--------------+ fork
                        | app 1 server | ---> 
                        | lightweight  | ---> many http handler processes
 +--------------+ fork  | 24/7         | ...  for app 1 server
 |              | --->  +--------------+
 | app 1 parent |
 |              | --->  +----------------+ fork
 +--------------+       | admin 1 server | --->
                        | complicated    | ---> many http handler processes
                        | upgrades often | ...  for admin 1 server
                        +----------------+

 (app 1 server and admin 1 server share the same database: read-write)

     ... etc for other applications
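
Or, as a rough sketch (still guessing at the names):

   # app 1 parent
   (load "@lib/http.l" "@lib/xhtml.l" "@lib/form.l")
   (pool "app1.db")           # the one shared database

   (unless (fork)             # first child: public app server
      (load "app1.l")
      (server 8080 "!start")  # but 'server' forks again per request
      (bye) )

   (unless (fork)             # second child: admin server
      (load "admin1.l")
      (server 8081 "!start")
      (bye) )

   (wait)                     # parent just sits there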

From what you say, I guess neither of the pictures above would work.

What do you mean by "processes accessing the db are children of a
single parent"?  Does it mean only the direct parent, or any common
parent higher up the hierarchy tree?

As the http server forks a new child process for each request, does
that not mean that, if the parent must be the direct one, each
request would have to load the whole app 1 code, depending on whether
the request is for the "app 1 server" or the "admin 1 server"?  I am
not convinced that is the best solution... or maybe it is the only
one?
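
What I have in mind is something like this (again with made-up
names); is that what you mean?

   (load "@lib/http.l" "@lib/xhtml.l" "@lib/form.l")
   (pool "app1.db")           # the parent only opens the pool
   (server 8080 "!start")     # each request gets a fresh child,
                              # which would then have to load
                              # "app1.l" or "admin1.l" depending on
                              # the URL it was called with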

I use a separate pool for each "logical application" (not sure
whether we mean the same thing by that), but I am not sure whether to
use a separate pool for each of "app 1 server" and "admin 1 server";
probably separate ones, as you recommend.

If I understand it correctly, the db stays consistent and properly
synchronized no matter which process calls 'pool', as long as the
processes share the same direct parent process.

> Another disadvantage of having all applications using the same pool
> is that you cannot put them on separate machines.

Well, that is not a problem for me at the moment.

>> What is '*Ext', I cannot find anything about that?
>
> The global '*Ext', in cooperation with RPC functions and the 'ext'
> function, allows sending Pilog queries or other requests to remote
> machines, having external symbols sent back from these machines, and
> operating on these external symbols just as if they resided in the
> local database. These objects are read-only, though.

Sorry, I am missing something: where can I find this 'ext' function
and the '*Ext' global?

Thanks,

Tomas