The API for nsproxy looks like a bit of an accident to me. The module
is based on the external db driver interface, hence the name 'proxy'
and the explicit handle management. But 'proxy' really doesn't apply
here, and the worst part of the nsdb module has always been the handle
management...

Also, the module presents an async API but doesn't follow the lead of
the existing ns_job (very similar aims) or ns_http.  For comparison,
here are ns_proxy, ns_job, ns_http and some threading commands:


  *** Proxy Pools ***

ns_proxy get       pool ?-opt val ...? (get proxy handle from pool)
ns_proxy put       handle              (return proxy handle to pool)
ns_proxy release   handle              (identical to put)
ns_proxy ping      handle              (is slave responding?)
ns_proxy cleanup                       (release all handles for interp)

ns_proxy eval      handle script       (send script, wait for and recv result)

ns_proxy send      handle script       (send script to slave)
ns_proxy wait      handle ?timeout?    (wait for result from slave)
ns_proxy recv      handle              (get the result from slave)

ns_proxy pools                         (list all pools for all servers)
ns_proxy configure pool ?opt val ...?  (create new / cfg existing pool)

ns_proxy free      pool                (list free slaves in pool)
ns_proxy active    pool                (list busy slaves in pool)
ns_proxy handles                       (list allocated handles for interp)
ns_proxy clear     pool ?handle?       (close all slaves in all pools/pool)
ns_proxy stop      pool ?handle?       (close running slaves in all pools/pool)

(NB: 'pools', 'clear' and 'stop' sub-commands not virtual-server safe)
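
To make the handle juggling concrete, typical calling code ends up looking
something like this (pool name made up; note that every exit path has to
remember to release the handle):

      set handle [ns_proxy get mypool]
      if {[catch {
          set result [ns_proxy eval $handle {exec /bin/date}]
      } errMsg]} {
          ns_proxy release $handle
          error $errMsg
      }
      ns_proxy release $handle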


  *** Job Queues ***

ns_job create  ?-desc d?    queueId ?maxT? (create new q)
ns_job delete               queueId        (delete q when jobs complete)

ns_job queues                              (list of all queues)
ns_job queuelist                           (list of all q's w/q-info)
ns_job jobs                 queueId        (list of jobIds for q)
ns_job joblist              queueId        (list of jobIds w/job-info for q)
ns_job threadlist                          (return min/max threads info)

ns_job queue   ?-detached?  queueId script (send script to queue, ret jobid)
ns_job wait    ?-timeout t? queueId jobId  (fetch job result)
ns_job waitany ?-timeout t? queueId        (fetch next result from q)
ns_job cancel               queueId jobId  (discard pending result)

ns_job genid                               (generate a unique q name)

(q names are not virtual-server safe)
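
The same round trip with ns_job, for comparison -- no handle to give back,
but the queue name is still global (names made up):

      ns_job create myqueue
      set jobId [ns_job queue myqueue {exec /bin/date}]
      set result [ns_job wait myqueue $jobId]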


  *** Async http requests ***

ns_http queue  ?qflags? url  (open conn to server, send task to io thread)
ns_http wait   ?wflags? hId  (wait for http op result from io thread)

ns_http run    ?qflags? url  (queue and wait)

ns_http cancel          hId  (abort given http op)

ns_http cleanup              (abort all ops in current interp)
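
Same pattern again (flags elided -- 'queue' returns a handle, 'wait'
blocks on it):

      set id [ns_http queue http://localhost:8080/]
      ns_http wait $id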


   *** Threads ***

ns_thread begin         script (run script)
ns_thread begindetached script (run script, discard result, exit thread)
ns_thread wait          tId    (get result of non-detached thread)
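
And once more with plain threads -- the script's result becomes the
thread result:

      set tid [ns_thread begin {expr {6 * 7}}]
      set answer [ns_thread wait $tid]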



The ns_job command isn't great -- it shares the same brokenness as
ns_proxy in that job queue names are global, like proxy pool names.
There are some good ideas in it, though:

- 'queue' initiates an action and returns an opaque handle which can be
  waited upon for the result.
- 'wait' waits for the result of a queued action.
- 'cancel' discards the result of a pending action.
- 'detached' actions cannot be waited upon -- they have no result (or the
  result is discarded).


I was thinking that as ns_job and ns_proxy were so similar they could
be merged, but there is a significant difference: the 'eval' command
makes sense for ns_proxy but not for ns_job.

Naming is easy -- it's the explicit handle management that's making
things difficult.  Low-level C APIs just don't map directly into Tcl.
For example, you never expect a C programmer to grab a handle and then
leave it dangling -- that's just broken code.  But with Tcl this sort of
thing is expected.  You have to clean up, take extra precautions with
locking, and so on.

As far as I can see, the handles aren't needed.  How about something like this:


API:

result ns_exec_eval               ?-pool p? ?-timeout t?  script

Id     ns_exec_queue  ?-detached? ?-pool p? ?-timeout t?  script
       ns_exec_wait                         ?-timeout t?  Id ?Id ...?
       ns_exec_cancel

       ns_exec_set    ?-opt val ...?                      pool
       ns_exec_stats                                      pool

Example:

      set result [ns_exec_eval {
          set x y
          exec /bin/blah $x
      }]
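
Or queued, purely as a sketch of the proposal above (binaries made up;
whether 'wait' hands back the results or just blocks is left open):

      set id1 [ns_exec_queue {exec /bin/task1}]
      set id2 [ns_exec_queue {exec /bin/task2}]
      ns_exec_wait $id1 $id2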

Notes:

      Create pool 'default' on startup.

      Reuse identical processes (same binary, ulimits, etc.) among pools, in
      the same way that threads are shared globally among ns_job queues.

      'maxexecs' parameter, after which processes exit and are respawned.



As well as not being very descriptive, the name 'proxy' also clashes
with genuine HTTP proxy functions such as ns_register_proxy.  The
proxy functionality that Vlad's been adding recently obviously should
be in a module called nsproxy...

Some things, ulimits for example, are definitely version 2 material.  It
would be a pain to get stuck with an awkward API, though.

What do you think?


I noticed a couple of things while looking through the code:



nsproxymod.c: The SrvMod structure is statically allocated, one per
process, but is reassigned each time the module is loaded,
potentially several times per virtual server.


nsproxylib.c, ProxyPop(): Extra call to Ns_CondBroadcast() without the
lock being held.


   if (err != ENone) {
       while ((proxyPtr = firstPtr) != NULL) {
           firstPtr = proxyPtr->nextPtr;
           PushProxy(proxyPtr);
       }
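       /* NB: poolPtr->lock is not held here */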
       Ns_CondBroadcast(&poolPtr->cond);
       return TCL_ERROR;
   }


nsproxylib.c, ProxyPush(): Ns_CondBroadcast() is called whether or not
the proxy is returned to the pool.

   Ns_MutexLock(&poolPtr->lock);
   poolPtr->nused--;
   poolPtr->nfree++;
   if ((poolPtr->nused + poolPtr->nfree) <= poolPtr->max) {
       proxyPtr->nextPtr = poolPtr->firstPtr;
       poolPtr->firstPtr = proxyPtr;
       if (proxyPtr->procPtr) {
           SetExpire(proxyPtr->procPtr, proxyPtr->timers.tidle);
       }
       proxyPtr->timers = poolPtr->timers;
       proxyPtr = NULL;
   } else {
       poolPtr->nfree--;
   }
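   /* NB: broadcast even when proxy was not returned to the pool */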
   Ns_CondBroadcast(&poolPtr->cond);
   Ns_MutexUnlock(&poolPtr->lock);


GetPool(): A proxy pool is created if the pool name does not already
exist.  What about typos?  This is error-prone.  Alternative: always
create a pool 'default' on startup and make specifying pool names
optional?

The term 'procPtr' is confusing when used to mean a pointer to a Proxy
Slave.  In the rest of the code base, procPtr means a pointer to a C
function, i.e. a callback.
