On 24Jan2020 21:08, Dennis Lee Bieber <wlfr...@ix.netcom.com> wrote:
My suggestion for your capacity thing: use a Semaphore, which is a
special thread safe counter which cannot go below zero.

   from threading import Semaphore, Thread

   def start_test(sem, *args):
       sem.acquire()
       ... do stuff with args ...
       sem.release()

   sem = Semaphore(10)

   threads = []
   for item in big_list:
       t = Thread(target=start_test, args=(sem, item))
       t.start()
       threads.append(t)
   for t in threads:
       t.join()

This version starts many threads, but only 10 at a time will do "work"
because they stall until they can acquire the Semaphore. The first 10
acquire it immediately; the later ones stall until an earlier
Thread releases the Semaphore.
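For what it's worth, the acquire/release pair is usually better written
with the Semaphore's context manager, which releases it even if the work
raises. A small runnable sketch (the counters are just there to show the
cap holding, not part of the original suggestion):

```python
import threading
import time

CAPACITY = 3  # small so the demo runs quickly
sem = threading.Semaphore(CAPACITY)
lock = threading.Lock()
active = 0  # threads currently inside the semaphore
peak = 0    # highest value active ever reached

def start_test(item):
    global active, peak
    with sem:  # released even if the work raises an exception
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)  # stand-in for the real work
        with lock:
            active -= 1

threads = [threading.Thread(target=start_test, args=(i,)) for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)  # never exceeds CAPACITY
```

All 20 threads start immediately, but the peak concurrency never exceeds
the Semaphore's initial value.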

        You are actually proposing to create 200 threads, with related stack
and thread overhead -- and then block all but 10, releasing a blocked
thread only when a previous unblocked thread exits?

Well, yeah, but largely because semaphores are overlooked as a resource constraint tool, and because the expression is simple and clear.

I'd much prefer to create only 10 threads with the semaphore control in the thread dispatcher, but that was harder to write while keeping the intent clear. Basic concepts first, superior complication later.
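The 10-thread version I had in mind looks something like this - a sketch,
not what I posted, feeding a fixed pool of workers from a Queue (the
run_pool/handle names are mine, invented for the example):

```python
import queue
import threading

def run_pool(items, handle, nworkers=10):
    ''' Feed items to nworkers worker threads; block until all are done. '''
    q = queue.Queue()

    def worker():
        while True:
            item = q.get()
            if item is None:  # sentinel: no more work
                return
            handle(item)

    workers = [threading.Thread(target=worker) for _ in range(nworkers)]
    for w in workers:
        w.start()
    for item in items:
        q.put(item)
    for _ in workers:
        q.put(None)  # one sentinel per worker
    for w in workers:
        w.join()

# Demo: double each item, collecting results under a lock.
results = []
results_lock = threading.Lock()

def handle(item):
    with results_lock:
        results.append(item * 2)

run_pool(range(5), handle, nworkers=2)
print(sorted(results))  # [0, 2, 4, 6, 8]
```

Only nworkers threads ever exist, so there's no per-item thread overhead;
the Queue does the capacity limiting instead of a Semaphore.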

I also wanted a scheme where the "set it all up" phase could be fast (== start all the threads, wait later) versus processing a capacity limited queue (which takes a long time, stalling the "main" programme). Of course one might dispatch a thread to run the queue...
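And for completeness: the stdlib's concurrent.futures.ThreadPoolExecutor
gives both properties out of the box - submit() returns immediately, and
at most max_workers items run at once. A sketch (start_test here is a
stand-in for the real work):

```python
from concurrent.futures import ThreadPoolExecutor

def start_test(item):
    return item * 2  # stand-in for the real work

executor = ThreadPoolExecutor(max_workers=10)

# The "set it all up" phase is fast: submit() returns immediately
# for every item, queueing the work internally.
futures = [executor.submit(start_test, item) for item in range(100)]

# Wait later, whenever convenient.
results = [f.result() for f in futures]
executor.shutdown()

print(results[:5])  # [0, 2, 4, 6, 8]
```

That's the "superior complication" done for you, at the cost of learning
another API.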

I'm aware this makes a lot of threads and they're not free, that's a very valid criticism.

Cheers,
Cameron Simpson <c...@cskk.id.au>
--
https://mail.python.org/mailman/listinfo/python-list