>       Server.shutdown() sets a flag that tells the main server to /stop
> accepting new requests/.

Could it be that this method should perform a bit more resource management,
depending on the selected configuration (for example “socketserver.ThreadingMixIn”)?
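
For my own understanding I put together a minimal sketch of the clean-up
sequence I currently assume. The names “EchoHandler” and “ThreadedServer” are
made up for this example, and the remark about “block_on_close” reflects my
reading of the documentation for Python 3.7 and later.

    import socketserver

    class EchoHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # Trivial handler, only for illustration.
            self.wfile.write(self.rfile.readline())

    class ThreadedServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
        daemon_threads = False   # keep handler threads non-daemonic
        # block_on_close = True  # default; server_close() then waits for
        #                        # outstanding handler threads (Python 3.7+,
        #                        # as far as I understand the documentation)

    # Assumed clean-up sequence in the main thread:
    # server.shutdown()       # stop accepting new requests
    # server.server_close()   # release the listening socket and (with the
    #                         # settings above) wait for handler threads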


>       So far as I can tell, for a threaded server, any threads/requests that
> were started and haven't completed their handlers will run to completion --
> however long that handler takes to finish the request. Whether
> threaded/forked/single-thread -- once a request has been accepted, it will
> run to completion.

This is good to know, and such functionality also matches my expectations
for the software behaviour.


> Executing .shutdown() will not kill unfinished handler threads.

I got a result like the following from another test variant
on a Linux system.

elfring@Sonne:~/Projekte/Python> time python3 test-statistic-server2.py
incidence|"available records"|"running threads"|"return code"|"command output"
80|6|1|0|
1|4|3|0|
1|4|4|0|
3|5|2|0|
1|5|3|0|
5|7|1|0|
1|3|4|0|
1|8|1|0|
1|4|1|0|
3|5|1|0|
1|4|2|0|
1|3|2|0|
1|6|2|0|

real    0m48,373s
user    0m6,682s
sys     0m1,337s
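
My test script is not shown here, but assuming that the “running threads”
column was produced by something like “threading.enumerate()”, the snapshot
could look roughly as follows (only a sketch, taken right after “.shutdown()”
returned).

    import threading

    def thread_snapshot():
        # Handler threads that were already accepted may still appear here,
        # because .shutdown() does not wait for them.
        names = [t.name for t in threading.enumerate()]
        return len(names), names

    # count, names = thread_snapshot()
    # print("running threads:", count, names)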


>> What do you think about improving the distinction for the really
>> desired lock granularity in my use case?
>
>       If your request handler updates ANY shared data, YOU have to code the
> needed MUTEX locks into that handler.

I suggest stating such a software requirement more precisely.
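
For the record, my current understanding of this requirement is a sketch like
the following; the names “records” and “records_lock” are only examples.

    import socketserver
    import threading

    records = []                        # data shared between all handlers
    records_lock = threading.Lock()

    class RecordHandler(socketserver.StreamRequestHandler):
        def handle(self):
            line = self.rfile.readline()
            with records_lock:          # the handler itself takes the mutex
                records.append(line)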


> You may also need to code logic to ensure any handler threads have completed

Could a class like “BaseServer” be responsible for determining whether all
additionally started threads (or background processes) have finished their
work as expected?
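
As far as I can tell, “BaseServer” itself does not offer such a determination;
one would either rely on “ThreadingMixIn” together with “server_close()” or
track the handler threads explicitly, roughly like in this sketch (the class
name “TrackingServer” is invented).

    import socketserver
    import threading

    class TrackingServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.handler_threads = []

        def process_request(self, request, client_address):
            # Runs in the serve_forever() thread, so the list is only
            # appended to before shutdown() returns.
            t = threading.Thread(target=self.process_request_thread,
                                 args=(request, client_address))
            self.handler_threads.append(t)
            t.start()

        def wait_for_handlers(self):
            for t in self.handler_threads:
                t.join()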


> before your main thread accesses the shared data

I have become unsure at which point a specific Python list variable will
reflect the record sets received from a single test command in a consistent
way, given the discussed data processing.
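
My working assumption for a consistent view is the following ordering in the
main thread, reusing the invented names “TrackingServer”, “RecordHandler”,
“records” and “records_lock” from the sketches above.

    import threading

    server = TrackingServer(("127.0.0.1", 8000), RecordHandler)
    serve_thread = threading.Thread(target=server.serve_forever)
    serve_thread.start()

    # ... run the test command here and let it submit its record sets ...

    server.shutdown()             # no new requests are accepted afterwards
    serve_thread.join()
    server.wait_for_handlers()    # all accepted requests have finished
    server.server_close()

    with records_lock:            # no longer contended at this point
        snapshot = list(records)  # only now should the list be complete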


> -- that may require a different type of lock;

I am curious about the corresponding software adjustments.


> something that allows multiple threads to hold in parallel
> (unfortunately, Event() and Condition() aren't directly suitable)

Which data and process management approaches will ultimately be needed?
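
One candidate for such a construct might be a semaphore which every handler
holds while it is running, so that several handlers can hold it in parallel,
while the main thread takes all slots to detect that no handler is active any
more. This is only a sketch and it assumes a known upper bound on the number
of parallel handlers.

    import threading

    MAX_HANDLERS = 8                     # assumed upper bound
    slots = threading.Semaphore(MAX_HANDLERS)

    def handle_request():
        with slots:                      # many handlers may hold a slot at once
            pass                         # ... process the request ...

    def wait_until_idle():
        # The main thread takes every slot; this only succeeds once no
        # handler holds one any more.
        for _ in range(MAX_HANDLERS):
            slots.acquire()
        for _ in range(MAX_HANDLERS):
            slots.release()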


>       Condition() with a global counter (the counter needs its own lock)
> might work: handler does something like

How well do the programming interfaces of the available classes support
determining that submitted tasks have completely finished?
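
If I understand the suggestion correctly, it could look roughly like this
sketch; the counter and the helper functions are invented names, and the
handler would call “handler_enter()” and “handler_exit()” around its work.

    import threading

    in_flight = 0                        # global counter of active handlers
    state_lock = threading.Lock()        # the counter needs its own lock
    all_done = threading.Condition(state_lock)

    def handler_enter():
        global in_flight
        with state_lock:
            in_flight += 1

    def handler_exit():
        global in_flight
        with state_lock:
            in_flight -= 1
            if in_flight == 0:
                all_done.notify_all()

    def wait_for_handlers():
        # Intended to be called by the main thread after server.shutdown().
        with state_lock:
            all_done.wait_for(lambda: in_flight == 0)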


> There may still be a race condition on the last request if it is started
> between the .shutdown call and the counter test (i.e. the main submits
> .shutdown, the server starts a thread which doesn't get scheduled yet,
> main does the .acquire()s, finds the counter is 0 so assumes everything is
> done, and THEN the last thread gets scheduled and increments the counter).

Should the mentioned system constraints already be handled by the Python
function (or class) library?
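
One way around this race might be to increment the counter in
“process_request()”, which still runs in the “serve_forever()” thread, so that
every accepted request is already counted by the time “.shutdown()” returns.
Again only a sketch with an invented class name.

    import socketserver
    import threading

    class CountingServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.in_flight = 0
            self.all_done = threading.Condition()

        def process_request(self, request, client_address):
            with self.all_done:          # still in the serve_forever() thread
                self.in_flight += 1
            super().process_request(request, client_address)

        def process_request_thread(self, request, client_address):
            try:
                super().process_request_thread(request, client_address)
            finally:
                with self.all_done:
                    self.in_flight -= 1
                    if self.in_flight == 0:
                        self.all_done.notify_all()

        def wait_for_handlers(self):
            with self.all_done:
                self.all_done.wait_for(lambda: self.in_flight == 0)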

Regards,
Markus
-- 
https://mail.python.org/mailman/listinfo/python-list
