Re: Threading question .. am I doing this right?

2022-02-28 Thread Robert Latest via Python-list
Chris Angelico wrote:
> I'm still curious as to the workload (requests per second), as it might still
> be worth going for the feeder model. But if your current system works, then
> it may be simplest to debug that rather than change.

It is by all accounts a low-traffic situation, maybe one request/second. But
the view in question opens four plots on one page, generating four separate
requests. So with only two clients and a blocking DB connection, the whole
application with eight uwsgi worker threads comes down. Now with the "extra
load thread" modification, the app worked fine for several days with only two
threads.

Out of curiosity I tried the "feeder thread" approach with a dummy thread that
just sleeps and logs something every few seconds, ten times total. For some
reason it sometimes hangs after eight or nine loops, and then uwsgi cannot
restart gracefully, probably because it is still waiting for that thread to
finish. Also my web app is built around setting up the DB connections in the
request context, so using an extra thread outside that context would require
doubling some DB infrastructure. Probably not worth it at this point.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading question .. am I doing this right?

2022-02-25 Thread Chris Angelico
On Sat, 26 Feb 2022 at 05:16, Robert Latest via Python-list
 wrote:
>
> Chris Angelico wrote:
> > Depending on your database, this might be counter-productive. A
> > PostgreSQL database running on localhost, for instance, has its own
> > caching, and data transfers between two apps running on the same
> > computer can be pretty fast. The complexity you add in order to do
> > your own caching might be giving you negligible benefit, or even a
> > penalty. I would strongly recommend benchmarking the naive "keep going
> > back to the database" approach first, as a baseline, and only testing
> > these alternatives when you've confirmed that the database really is a
> > bottleneck.
>
> "Depending on your database" is the key phrase. This is not "my" database that
> is running on localhost. It is an external MSSQL server that I have no control
> over and whose requests frequently time out.
>

Okay, cool. That's crucial to know.

I'm still curious as to the workload (requests per second), as it
might still be worth going for the feeder model. But if your current
system works, then it may be simplest to debug that rather than
change.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading question .. am I doing this right?

2022-02-25 Thread Robert Latest via Python-list
Greg Ewing wrote:
> * If more than one thread calls get_data() during the initial
> cache filling, it looks like only one of them will wait for
> the thread -- the others will skip waiting altogether and
> immediately return None.

Right. But that needs to be dealt with somehow. No data is no data.

> * Also if the first call to get_data() times out it will
> return None (although maybe that's acceptable if the caller
> is expecting it).

Right. Needs to be dealt with.

> * The caller of get_data() is getting an object that could
> be changed under it by a future update.

I don't think that's a problem. If it turns out to be one, I'll create a copy of
the data while I hold the lock and pass that back.
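
For illustration, a minimal, self-contained sketch of that copy-under-the-lock
idea (the CopyOnRead/snapshot names are made up for this example, they are not
from the code discussed above):

from threading import Lock

class CopyOnRead:
    '''Hand callers a snapshot of the cached records, not the live list.'''
    def __init__(self):
        self._lock = Lock()
        self._records = []

    def extend(self, new_records):
        with self._lock:
            self._records.extend(new_records)

    def snapshot(self):
        # copy while holding the lock so a concurrent extend() cannot
        # mutate the list the caller is iterating over
        with self._lock:
            return list(self._records)

cache = CopyOnRead()
cache.extend([1, 2, 3])
data = cache.snapshot()  # safe to use even if extend() runs later
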
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading question .. am I doing this right?

2022-02-25 Thread Robert Latest via Python-list
Chris Angelico wrote:
> Depending on your database, this might be counter-productive. A
> PostgreSQL database running on localhost, for instance, has its own
> caching, and data transfers between two apps running on the same
> computer can be pretty fast. The complexity you add in order to do
> your own caching might be giving you negligible benefit, or even a
> penalty. I would strongly recommend benchmarking the naive "keep going
> back to the database" approach first, as a baseline, and only testing
> these alternatives when you've confirmed that the database really is a
> bottleneck.

"Depending on your database" is the key phrase. This is not "my" database that
is running on localhost. It is an external MSSQL server that I have no control
over and whose requests frequently time out.

> Hmm, it's complicated. There is another approach, and that's to
> completely invert your thinking: instead of "request wants data, so
> let's get data", have a thread that periodically updates your cache
> from the database, and then all requests return from the cache,
> without pinging the requester. Downside: It'll be requesting fairly
> frequently. Upside: Very simple, very easy, no difficulties debugging.

I'm using a similar approach in other places, but there I actually have a
separate process that feeds my local, fast DB with unwieldy data. But that is
not merely replicating, it actually preprocesses and "adds value" to the data,
and the data is worth retaining on my server. I didn't want to take that
approach in this instance because it is a bit too much overhead for essentially
"throwaway" stuff. I like the idea of starting a separated "timed" thread in
the same application. Need to think about that.

Background: The clients are SBCs that display data on screens distributed
throughout a manufacturing facility. They periodically refresh every few
minutes. Occasionally the requests would pile up waiting for the database, so
that some screens displayed error messages for a minute or two. Nobody cares
but my pride was piqued and the error logs filled up.

I've had my proposed solution running for a few days now without errors. For me
that's enough but I wanted to ask you guys if I made some logical mistakes.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading question .. am I doing this right?

2022-02-25 Thread Greg Ewing

On 25/02/22 1:08 am, Robert Latest wrote:

My question is: Is this a solid approach? Am I forgetting something?


I can see a few problems:

* If more than one thread calls get_data() during the initial
cache filling, it looks like only one of them will wait for
the thread -- the others will skip waiting altogether and
immediately return None.

* Also if the first call to get_data() times out it will
return None (although maybe that's acceptable if the caller
is expecting it).

* The caller of get_data() is getting an object that could
be changed under it by a future update.

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: Threading question .. am I doing this right?

2022-02-24 Thread Chris Angelico
On Fri, 25 Feb 2022 at 06:54, Robert Latest via Python-list
 wrote:
>
> I have a multi-threaded application (a web service) where several threads need
> data from an external database. That data is quite a lot, but it is almost
> always the same. Between incoming requests, timestamped records get added to
> the DB.
>
> So I decided to keep an in-memory cache of the DB records that gets only
> "topped up" with the most recent records on each request:

Depending on your database, this might be counter-productive. A
PostgreSQL database running on localhost, for instance, has its own
caching, and data transfers between two apps running on the same
computer can be pretty fast. The complexity you add in order to do
your own caching might be giving you negligible benefit, or even a
penalty. I would strongly recommend benchmarking the naive "keep going
back to the database" approach first, as a baseline, and only testing
these alternatives when you've confirmed that the database really is a
bottleneck.

> Since it is better to quickly serve the client with slightly outdated data than
> not at all, I came up with the "impatient" solution below. The idea is that an
> incoming request triggers an update query in another thread, waits for a short
> timeout for that thread to finish and then returns either updated or old data.
>
> class MyCache():
>     def __init__(self):
>         self.cache = None
>         self.thread_lock = Lock()
>         self.update_thread = None
>
>     def _update(self):
>         new_records = query_external_database()
>         if self.cache is None:
>             self.cache = new_records
>         else:
>             self.cache.extend(new_records)
>
>     def get_data(self):
>         if self.cache is None:
>             timeout = 10  # allow more time to get initial batch of data
>         else:
>             timeout = 0.5
>         with self.thread_lock:
>             if self.update_thread is None or not self.update_thread.is_alive():
>                 self.update_thread = Thread(target=self._update)
>                 self.update_thread.start()
>             self.update_thread.join(timeout)
>
>         return self.cache
>
> my_cache = MyCache()
>
> My question is: Is this a solid approach? Am I forgetting something? For
> instance, I believe that I don't need another lock to guard self.cache.append()
> because _update() can ever only run in one thread at a time. But maybe I'm
> overlooking something.

Hmm, it's complicated. There is another approach, and that's to
completely invert your thinking: instead of "request wants data, so
let's get data", have a thread that periodically updates your cache
from the database, and then all requests return from the cache,
without pinging the requester. Downside: It'll be requesting fairly
frequently. Upside: Very simple, very easy, no difficulties debugging.
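
A minimal sketch of that inverted "feeder" model might look like the following
(query_external_database() is the placeholder from the original post; the
FeederCache name and the fixed refresh interval are made up for illustration):

import threading
import time

def query_external_database():
    return []  # stand-in for the real (slow) external query

class FeederCache:
    def __init__(self, interval=60):
        self._lock = threading.Lock()
        self._data = []
        self._interval = interval
        threading.Thread(target=self._refresh_forever, daemon=True).start()

    def _refresh_forever(self):
        while True:
            try:
                new_records = query_external_database()
                with self._lock:
                    self._data.extend(new_records)
            except Exception:
                pass  # keep serving stale data if the query times out
            time.sleep(self._interval)

    def get_data(self):
        # requests never touch the database; they only read the cache
        with self._lock:
            return list(self._data)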

How many requests per second does your service process? (By
"requests", I mean things that require this particular database
lookup.) What's average throughput, what's peak throughput? And
importantly, what sorts of idle times do you have? For instance, if
you might have to handle 100 requests/second, but there could be
hours-long periods with no requests at all (eg if your clients are all
in the same timezone and don't operate at night), that's a very
different workload from 10 r/s constantly throughout the day.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading and multiprocessing deadlock

2021-12-06 Thread Dieter Maurer
Johannes Bauer wrote at 2021-12-6 00:50 +0100:
>I'm a bit confused. In my scenario I am mixing threading with
>multiprocessing. Threading by itself would be nice, but for GIL reasons
>I need both, unfortunately. I've encountered a weird situation in which
>multiprocessing Process()es which are started in a new thread don't
>actually start and so they deadlock on join.

The `multiprocessing` doc
(--> "https://docs.python.org/3/library/multiprocessing.html#module-multiprocessing")
has the following warning:
"Note that safely forking a multithreaded process is problematic."

Thus, if you use the `fork` method to start processes, some
surprises are to be expected.
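
Where it is an option, one way to sidestep those surprises is to use the
"spawn" start method, so each worker begins as a fresh interpreter instead of
a fork of a multithreaded parent. A minimal sketch (the work() helper is made
up for illustration):

import multiprocessing

def work(x):
    return x * x

if __name__ == '__main__':
    # 'spawn' starts a fresh interpreter rather than fork()ing the
    # (possibly multi-threaded) parent, avoiding inherited locks and threads
    multiprocessing.set_start_method('spawn')
    with multiprocessing.Pool(2) as pool:
        print(pool.map(work, [1, 2, 3]))
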
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading and multiprocessing deadlock

2021-12-06 Thread Barry Scott



> On 5 Dec 2021, at 23:50, Johannes Bauer  wrote:
> 
> Hi there,
> 
> I'm a bit confused. In my scenario I am mixing threading with
> multiprocessing. Threading by itself would be nice, but for GIL reasons
> I need both, unfortunately. I've encountered a weird situation in which
> multiprocessing Process()es which are started in a new thread don't
> actually start and so they deadlock on join.
> 
> I've created a minimal example that demonstrates the issue. I'm running
> on x86_64 Linux using Python 3.9.5 (default, May 11 2021, 08:20:37)
> ([GCC 10.3.0] on linux).
> 
> Here's the code:

I suggest that you include the threading.current_thread() in your messages.
Then you can see which thread is saying what.
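
For example, the prints could be tagged roughly like this (a trimmed sketch of
the original background_thread(), with the Process/queue work elided):

import threading

def background_thread():
    me = threading.current_thread().name   # e.g. "Thread-1"
    print(f"{me}: join?")
    # ... proc.start() / proc.join() as in the original code ...
    print(f"{me}: joined.")

for _ in range(4):
    threading.Thread(target=background_thread).start()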

Barry

> 
> 
> import time
> import multiprocessing
> import threading
> 
> def myfnc():
>   print("myfnc")
> 
> def run(result_queue, callback):
>   result = callback()
>   result_queue.put(result)
> 
> def start(fnc):
>     def background_thread():
>         queue = multiprocessing.Queue()
>         proc = multiprocessing.Process(target = run, args = (queue, fnc))
>         proc.start()
>         print("join?")
>         proc.join()
>         print("joined.")
>         result = queue.get()
>     threading.Thread(target = background_thread).start()
> 
> start(myfnc)
> start(myfnc)
> start(myfnc)
> start(myfnc)
> while True:
>   time.sleep(1)
> 
> 
> What you'll see is that "join?" and "joined." nondeterministically do
> *not* appear in pairs. For example:
> 
> join?
> join?
> myfnc
> myfnc
> join?
> join?
> joined.
> joined.
> 
> What's worse is that when this happens and I Ctrl-C out of Python, the
> started Thread is still running in the background:
> 
> $ ps ax | grep minimal
> 370167 pts/0S  0:00 python3 minimal.py
> 370175 pts/2S+ 0:00 grep minimal
> 
> Can someone figure out what is going on there?
> 
> Best,
> Johannes
> -- 
> https://mail.python.org/mailman/listinfo/python-list
> 

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading and multiprocessing deadlock

2021-12-06 Thread Johannes Bauer
Am 06.12.21 um 13:56 schrieb Martin Di Paola:
> Hi!, in short your code should work.
> 
> I think that the join-joined problem is just an interpretation problem.
> 
> In pseudo code the background_thread function does:
> 
> def background_thread()
>   # bla
>   print("join?")
>   # bla
>   print("joined")
> 
> When running this function in parallel using threads, you will probably
> get a few "join?" first before receiving any "joined?". That is because
> the functions are running in parallel.
> 
> The order "join?" then "joined" is preserved within a thread but not
> preserved globally.

Yes, completely understood and really not the issue. That these pairs
are not in sequence is fine.

> Now, I see another issue in the output (and perhaps you were asking about
> this one):
> 
> join?
> join?
> myfnc
> myfnc
> join?
> join?
> joined.
> joined.
> 
> So you have 4 "join?" that correspond to the 4 background_thread
> function calls in threads but only 2 "myfnc" and 2 "joined".

Exactly that is the issue. Then it hangs. Deadlocked.

> Could it be possible that the output is truncated by accident?

No. This is it. The exact output varies, but when it hangs, it always
also does not execute the function (note the lack of "myfnc"). For example:

join?
join?
myfnc
join?
myfnc
join?
myfnc
joined.
joined.
joined.

(only three threads get started there)

join?
myfnc
join?
join?
join?
joined.

(this time only a single one made it)

join?
join?
join?
myfnc
join?
myfnc
joined.
myfnc
joined.
joined.

(three get started)

> I ran the same program and I got a reasonable output (4 "join?", "myfnc"
> and "joined"):
> 
> join?
> join?
> myfnc
> join?
> myfnc
> join?
> joined.
> myfnc
> joined.
> joined.
> myfnc
> joined.

This happens to me occasionally, but most of the time one of the
processes deadlocks. Did you consistently get four of each? What
OS/Python version were you using?

> Another issue that I see is that you are not joining the threads that
> you spawned (background_thread functions).

True, I kind of assumed those would be detached threads.

> I hope that this can guide you to fix or at least narrow the issue.

Depending on what OS/Python version you're using, that points in that
direction and kind of reinforces my belief that the code is correct.

Very curious.

Thanks & all the best,
Joe
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading and multiprocessing deadlock

2021-12-06 Thread Martin Di Paola

Hi!, in short your code should work.

I think that the join-joined problem is just an interpretation problem.

In pseudo code the background_thread function does:

def background_thread()
  # bla
  print("join?")
  # bla
  print("joined")

When running this function in parallel using threads, you will probably
get a few "join?" first before receiving any "joined?". That is because
the functions are running in parallel.

The order "join?" then "joined" is preserved within a thread but not
preserved globally.

Now, I see another issue in the output (and perhaps you were asking about
this one):


join?
join?
myfnc
myfnc
join?
join?
joined.
joined.

So you have 4 "join?" that correspond to the 4 background_thread 
function calls in threads but only 2 "myfnc" and 2 "joined".


Could it be possible that the output is truncated by accident?

I ran the same program and I got a reasonable output (4 "join?", "myfnc" 
and "joined"):


join?
join?
myfnc
join?
myfnc
join?
joined.
myfnc
joined.
joined.
myfnc
joined.

Another issue that I see is that you are not joining the threads that 
you spawned (background_thread functions).


I hope that this can guide you to fix or at least narrow the issue.

Thanks,
Martin.


On Mon, Dec 06, 2021 at 12:50:11AM +0100, Johannes Bauer wrote:

Hi there,

I'm a bit confused. In my scenario I am mixing threading with
multiprocessing. Threading by itself would be nice, but for GIL reasons
I need both, unfortunately. I've encountered a weird situation in which
multiprocessing Process()es which are started in a new thread don't
actually start and so they deadlock on join.

I've created a minimal example that demonstrates the issue. I'm running
on x86_64 Linux using Python 3.9.5 (default, May 11 2021, 08:20:37)
([GCC 10.3.0] on linux).

Here's the code:


import time
import multiprocessing
import threading

def myfnc():
    print("myfnc")

def run(result_queue, callback):
    result = callback()
    result_queue.put(result)

def start(fnc):
    def background_thread():
        queue = multiprocessing.Queue()
        proc = multiprocessing.Process(target = run, args = (queue, fnc))
        proc.start()
        print("join?")
        proc.join()
        print("joined.")
        result = queue.get()
    threading.Thread(target = background_thread).start()

start(myfnc)
start(myfnc)
start(myfnc)
start(myfnc)
while True:
    time.sleep(1)


What you'll see is that "join?" and "joined." nondeterministically do
*not* appear in pairs. For example:

join?
join?
myfnc
myfnc
join?
join?
joined.
joined.

What's worse is that when this happens and I Ctrl-C out of Python, the
started Thread is still running in the background:

$ ps ax | grep minimal
370167 pts/0S  0:00 python3 minimal.py
370175 pts/2S+ 0:00 grep minimal

Can someone figure out what is going on there?

Best,
Johannes
--
https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list


Re: Threading plus multiprocessing plus cv2 error

2020-09-03 Thread Python
On Sat, Aug 29, 2020 at 06:24:10PM +1000, John O'Hagan wrote:
> Dear list
> 
> Thanks to this list, I haven't needed to ask a question for
> a very long time, but this one has me stumped.
> 
> Here's the minimal 3.8 code, on Debian testing:
> 
> -
> from multiprocessing import Process
> from threading import Thread
> from time import sleep
> import cv2
> 
> def show(im, title, location):
>     cv2.startWindowThread()
>     cv2.namedWindow(title)
>     cv2.moveWindow(title, *location)
>     cv2.imshow(title, im)
>     sleep(2)  # just to keep window open
> 
> im1 = cv2.imread('/path/to/image1')
> im2 = cv2.imread('/path/to/image2')
> 
> Thread(target=show, args=(im1, 'im1', (600,0))).start()
> sleep(1)
> Process(target=show, args=(im2, 'im2', (0, 0))).start()
> -
> 
> Here's the error:
> 
> -
> [xcb] Unknown sequence number while processing queue
> [xcb] Most likely this is a multi-threaded client and XInitThreads has
> not been called 
> [xcb] Aborting, sorry about that.
> python3: ../../src/xcb_io.c:260: poll_for_event: Assertion
> `!xcb_xlib_threads_sequence_lost' failed.
> -

It's hard to say EXACTLY what the nature of the error is, without
knowing what the underlying libraries are doing.  But from this, one
thing is clear: you are starting a new thread, and then forking a new
process. This is very likely to break because, contrary to what
several people have said in this thread, the new thread WILL NOT be
copied to the new process, so in the new process part of your program
is literally missing.  This can have all kinds of consequences.

For that reason, you can't spawn a new thread and then fork a new
process, in that order, and expect your program to function correctly.
Whatever your program does that depends on that new thread won't work
correctly, because that thread does not exist in the child (new
process).  You can, however, do those things in the reverse order and
it should be fine... new threads will be started in both processes
(unless you take steps to ensure only one of the processes creates the
thread).  Both processes retain their integrity and should run fine.
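
A minimal sketch of that safe ordering, assuming the work in each task is
self-contained (generic names, not the OP's cv2 code):

from multiprocessing import Process
from threading import Thread
import time

def show(title):
    print("showing", title)
    time.sleep(2)

if __name__ == '__main__':
    # fork happens while the parent is still single-threaded ...
    p = Process(target=show, args=('im2',))
    p.start()
    # ... and only afterwards do we add threads to the parent
    t = Thread(target=show, args=('im1',))
    t.start()
    p.join()
    t.join()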

> There's no error without the sleep(1), nor if the Process is started
> before the Thread, nor if two Processes are used instead, nor if two
> Threads are used instead. IOW the error only occurs if a Thread is
> started first, and a Process is started a little later.

Hopefully my explanation above makes it clear why all of those things
are true, other than the sleep() issue.  That one is most likely just
a timing issue:  Whatever resource is causing the problem hasn't been
set up yet or the critical thread or process has already finished
execution before the issue can arise, or something of the sort.  When
you start new threads or processes, usually there's some delay as your
OS schedules each process/thread to run, which is somewhat random
based on how the scheduler works and how loaded the system is.  Such
timing problems (bugs that seem to happen randomly with each run of
the program, or over time in a long-executing program) are common in
multi-threaded programs that are written incorrectly, especially when
the state of one thread depends on the state of the other thread, and
the two don't synchronize correctly.  This is another way that the
first problem above can manifest, too: The threads can't synchronize
because one of them does not exist!

Hope that helps.



-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading plus multiprocessing plus cv2 error

2020-09-01 Thread John O'Hagan
On Sun, 30 Aug 2020 09:59:15 +0100
Barry Scott  wrote:

> 
> 
> > On 29 Aug 2020, at 18:01, Dennis Lee Bieber 
> > wrote:
> > 
> > On Sat, 29 Aug 2020 18:24:10 +1000, John O'Hagan
> >  declaimed the following:
> > 
> >> There's no error without the sleep(1), nor if the Process is
> >> started before the Thread, nor if two Processes are used instead,
> >> nor if two Threads are used instead. IOW the error only occurs if
> >> a Thread is started first, and a Process is started a little later.
> >> 
> >> Any ideas what might be causing the error?
> >> 
> > 
> > Under Linux, multiprocessing creates processes using
> > fork(). That means that, for some fraction of time, you have TWO
> > processes sharing the same thread and all that entails (if it
> > doesn't overlay the forked process with a new executable, they are
> > sharing the thread until the thread exits).
> 
> In the parent you have 1 or more threads.
> 
> After fork the new process has 1 thread, which is not shared with the
> parent. Any extra threads are not in the new process. But the memory
> in the new process will have data structures from the parents other
> threads.
> 
> So no you never have two processes sharing an threads.
> 
> This leads to problems with locks.
> 
> Barry
> 
> 
> > 
> > https://stackoverflow.com/questions/54466572/how-to-properly-multithread-in-opencv-in-2019
> > (which points to)
> > https://answers.opencv.org/question/32415/thread-safe/?answer=32452#post-id-32452
> > """
> > The library itself is thread safe in that you can have multiple
> > calls into the library at the same time, however the data is not
> > always thread safe. """
> > 
> > The sleep(1), when compounded with the overhead of starting
> > the thread, and then starting the process, likely means the thread
> > had exited before the process actually is started. Try replacing
> > the sleep(2) in the work code with something like sleep(30) -- I
> > hypothesize that you'll get the same error condition even with the
> > sleep(1) in place.
> > 
> > 
> > -- 
> > Wulfraed Dennis Lee Bieber AF6VN
> > wlfr...@ix.netcom.com
> > http://wlfraed.microdiversity.freeddns.org/
> > 
> > -- 
> > https://mail.python.org/mailman/listinfo/python-list
> > 
> 

Thanks to all who replied. I'm still none the wiser about the error, but
I learned something about multiprocessing!

John
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading plus multiprocessing plus cv2 error

2020-08-30 Thread Stephane Tougard via Python-list
On 2020-08-30, Barry  wrote:
>*  The child process is created with a single thread—the one that
>   called fork().  The entire virtual address space of the parent is
>   replicated in the child, including the states of mutexes,
>   condition variables, and other pthreads objects; the use of
>   pthread_atfork(3) may be helpful for dealing with problems that

Indeed, I have a similar entry on my NetBSD:

 In case of a threaded program, only the thread calling fork() is
 still running in the child processes.

Very interesting.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading plus multiprocessing plus cv2 error

2020-08-30 Thread Barry


> On 30 Aug 2020, at 11:03, Stephane Tougard via Python-list 
>  wrote:
> 
> On 2020-08-30, Chris Angelico  wrote:
>>> I'm not even sure that makes sense; how can 2 processes share a thread?
>>> 
>> They can't. However, they can share a Thread object, which is the
>> Python representation of a thread. That can lead to confusion, and
>> possibly the OP's error (I don't know for sure, I'm just positing).
> 
> A fork() is a copy of a process in a new process. If this process has a
> thread (or several), they are part of the copy and the new process has
> those threads as well.

No. See https://www.man7.org/linux/man-pages/man2/fork.2.html which says:

“ Note the following further points:

   *  The child process is created with a single thread—the one that
  called fork().  The entire virtual address space of the parent is
  replicated in the child, including the states of mutexes,
  condition variables, and other pthreads objects; the use of
  pthread_atfork(3) may be helpful for dealing with problems that
  this can cause.”

Barry
> 
> Unless there is a memory sharing between those processes, what happens
> on one thread in the first process is totally independent of what
> happens in the copy of this thread in the other process.
> 
> I'm not a specialist in multi-threading in Python, but it should not
> change anything. Both processes (father and child) don't share the same
> thread, each one has its own copy of the thread.
> 
> -- 
> https://mail.python.org/mailman/listinfo/python-list
> 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading plus multiprocessing plus cv2 error

2020-08-30 Thread Stephane Tougard via Python-list
On 2020-08-30, Chris Angelico  wrote:
>> I'm not even sure that makes sense; how can 2 processes share a thread?
>>
> They can't. However, they can share a Thread object, which is the
> Python representation of a thread. That can lead to confusion, and
> possibly the OP's error (I don't know for sure, I'm just positing).

A fork() is a copy of a process in a new process. If this process has a
thread (or several), they are part of the copy and the new process has
those threads as well.

Unless there is a memory sharing between those processes, what happens
on one thread in the first process is totally independent of what
happens in the copy of this thread in the other process.

I'm not a specialist in multi-threading in Python, but it should not
change anything. Both processes (father and child) don't share the same
thread, each one has its own copy of the thread.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading plus multiprocessing plus cv2 error

2020-08-30 Thread Barry Scott



> On 29 Aug 2020, at 18:01, Dennis Lee Bieber  wrote:
> 
> On Sat, 29 Aug 2020 18:24:10 +1000, John O'Hagan 
> declaimed the following:
> 
>> There's no error without the sleep(1), nor if the Process is started
>> before the Thread, nor if two Processes are used instead, nor if two
>> Threads are used instead. IOW the error only occurs if a Thread is
>> started first, and a Process is started a little later.
>> 
>> Any ideas what might be causing the error?
>> 
> 
>   Under Linux, multiprocessing creates processes using fork(). That means
> that, for some fraction of time, you have TWO processes sharing the same
> thread and all that entails (if it doesn't overlay the forked process with
> a new executable, they are sharing the thread until the thread exits).

In the parent you have 1 or more threads.

After fork the new process has 1 thread, which is not shared with the parent.
Any extra threads are not in the new process. But the memory in the new
process will have data structures from the parents other threads.

So no you never have two processes sharing an threads.

This leads to problems with locks.

Barry


> 
> https://stackoverflow.com/questions/54466572/how-to-properly-multithread-in-opencv-in-2019
> (which points to)
> https://answers.opencv.org/question/32415/thread-safe/?answer=32452#post-id-32452
> """
> The library itself is thread safe in that you can have multiple calls into
> the library at the same time, however the data is not always thread safe.
> """
> 
>   The sleep(1), when compounded with the overhead of starting the thread,
> and then starting the process, likely means the thread had exited before
> the process actually is started. Try replacing the sleep(2) in the work
> code with something like sleep(30) -- I hypothesize that you'll get the
> same error condition even with the sleep(1) in place.
> 
> 
> -- 
>   Wulfraed Dennis Lee Bieber AF6VN
>   wlfr...@ix.netcom.comhttp://wlfraed.microdiversity.freeddns.org/
> 
> -- 
> https://mail.python.org/mailman/listinfo/python-list
> 

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading plus multiprocessing plus cv2 error

2020-08-30 Thread John O'Hagan
On Sat, 29 Aug 2020 13:01:12 -0400
Dennis Lee Bieber  wrote:

> On Sat, 29 Aug 2020 18:24:10 +1000, John O'Hagan
>  declaimed the following:
> 
> >There's no error without the sleep(1), nor if the Process is started
> >before the Thread, nor if two Processes are used instead, nor if two
> >Threads are used instead. IOW the error only occurs if a Thread is
> >started first, and a Process is started a little later.
> >
> >Any ideas what might be causing the error?
> >  
> 
>   Under Linux, multiprocessing creates processes using fork().
> That means that, for some fraction of time, you have TWO processes
> sharing the same thread and all that entails (if it doesn't overlay
> the forked process with a new executable, they are sharing the thread
> until the thread exits).
> 
> https://stackoverflow.com/questions/54466572/how-to-properly-multithread-in-opencv-in-2019
> (which points to)
> https://answers.opencv.org/question/32415/thread-safe/?answer=32452#post-id-32452
> """
> The library itself is thread safe in that you can have multiple calls
> into the library at the same time, however the data is not always
> thread safe. """
> 
>   The sleep(1), when compounded with the overhead of starting
> the thread, and then starting the process, likely means the thread
> had exited before the process actually is started. Try replacing the
> sleep(2) in the work code with something like sleep(30) -- I
> hypothesize that you'll get the same error condition even with the
> sleep(1) in place.
> 
> 

Thanks for the reply. 

You're right, the error also happens with a longer sleep, or no sleep
at all, inside the function. That sleep is only there in the example to
keep the windows open long enough to see the images if they are
successfully displayed.

I could well be wrong as I'm not fully across multiprocessing, but I
think the Stackoverflow question and answer you linked above relate to a
different situation, with multithreaded cv2 operations on shared image
data. 

In my example, AFAIK (which is not very far) it shouldn't matter whether
the new process is sharing the thread, or whether the thread has
exited, because the thread and the process aren't using the same data.
Or (as is quite likely) am I misunderstanding your point?

Cheers

John
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading plus multiprocessing plus cv2 error

2020-08-30 Thread Karen Shaeffer via Python-list


> On Aug 29, 2020, at 10:12 PM, Stephane Tougard via Python-list 
>  wrote:
> 
> On 2020-08-29, Dennis Lee Bieber  wrote:
>>  Under Linux, multiprocessing creates processes using fork(). That means
>> that, for some fraction of time, you have TWO processes sharing the same
>> thread and all that entails (if it doesn't overlay the forked process with
>> a new executable, they are sharing the thread until the thread exits).
>> same error condition even with the sleep(1) in place.
> 
> I'm not even sure that makes sense; how can 2 processes share a thread?
> 

Hello,
On linux, fork is a kernel system call. The linux kernel creates two identical 
processes running in separate memory spaces. At the time of creation, these 
memory spaces have the same content. There are some issues to be aware of. Just 
type ‘man fork’ on the command line of a linux system, and you can read about 
the issues of concern, presuming you have installed the manual pages for the 
linux kernel system calls.

If the forked process doesn’t overlay onto a separate memory space, then the 
fork system call fails, returning a failure code to the parent process. When 
the linux kernel is executing the fork system call, the parent (forking 
process) is blocked on the system call. The linux kernel actually takes over 
the parent process during execution of the system call, running that process in 
kernel mode during the execution of the fork process. The parent (forking) 
process only restarts, after the kernel returns.

On linux, within a given process, threads share the same memory space. If that 
process is the python interpreter, then the Global lock ensures only one thread 
is running when the fork happens. After the fork, then you have two distinct 
processes running in two separate memory spaces. And the fork man page 
discusses the details of concern with regards to specific kernel resources that 
could be referenced by those two distinct processes. The thread context is just 
a detail in that respect. All the threads of the parent process that forked the 
new process all share the same parent memory space.

humbly,
kls

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading plus multiprocessing plus cv2 error

2020-08-30 Thread Chris Angelico
On Sun, Aug 30, 2020 at 4:01 PM Stephane Tougard via Python-list
 wrote:
>
> On 2020-08-29, Dennis Lee Bieber  wrote:
> >   Under Linux, multiprocessing creates processes using fork(). That 
> > means
> > that, for some fraction of time, you have TWO processes sharing the same
> > thread and all that entails (if it doesn't overlay the forked process with
> > a new executable, they are sharing the thread until the thread exits).
> > same error condition even with the sleep(1) in place.
>
> I'm not even sure that makes sense; how can 2 processes share a thread?
>

They can't. However, they can share a Thread object, which is the
Python representation of a thread. That can lead to confusion, and
possibly the OP's error (I don't know for sure, I'm just positing).

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading plus multiprocessing plus cv2 error

2020-08-30 Thread Stephane Tougard via Python-list
On 2020-08-29, Dennis Lee Bieber  wrote:
>   Under Linux, multiprocessing creates processes using fork(). That means
> that, for some fraction of time, you have TWO processes sharing the same
> thread and all that entails (if it doesn't overlay the forked process with
> a new executable, they are sharing the thread until the thread exits).
> same error condition even with the sleep(1) in place.

I'm not even sure that makes sense; how can 2 processes share a thread?


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading module and embedded python

2020-04-16 Thread Eko palypse
Thank you for your help.
I made a minimum example which just creates the dialog window.
This runs, using the standard python37.exe without a problem.
But this also runs within the embedded interpreter slightly differently
than my previous code.
In my previous code I used the hwnd from the C++ app

    DialogBoxIndirectParam(None,
                           (ctypes.c_ubyte * len(self._array)).from_buffer_copy(self._array),
                           self.parent_hwnd,
                           DIALOGPROC(self._dlgproc),
                           0)

now I'm using None.
This still does NOT show the dialog when it is called,
BUT the C++ app stays responsive, and if I close it, the dialog
appears. (???)

> Maybe in the embedded version you do not have an HWND that is usable, can
> NULL be used?

The HWND I normally use should be correct, as I have to use it in various
other calls with SendMessageW, and those methods work.

>  Is there a SHOW arg that you need to pass?

No.


Here is the minimal code, just in case anyone is interested.

import ctypes
from ctypes import wintypes
import platform
from threading import Thread

user32 = ctypes.WinDLL('user32')

LRESULT = wintypes.LPARAM
DIALOGPROC = ctypes.WINFUNCTYPE(LRESULT,
wintypes.HWND,
wintypes.UINT,
wintypes.WPARAM,
wintypes.LPARAM)

INT_PTR = wintypes.INT if platform.architecture()[0] == '32bit' else wintypes.LARGE_INTEGER

DialogBoxIndirectParam = user32.DialogBoxIndirectParamW
DialogBoxIndirectParam.restype = wintypes.HWND
DialogBoxIndirectParam.argtypes = [wintypes.HINSTANCE,
   ctypes.POINTER(ctypes.c_ubyte),
   wintypes.HWND,
   DIALOGPROC,
   wintypes.LPARAM]

GetDlgItem = user32.GetDlgItem
GetDlgItem.restype = wintypes.HWND
GetDlgItem.argtypes = [wintypes.HWND, wintypes.INT]

EndDialog = user32.EndDialog
EndDialog.restype = wintypes.BOOL
EndDialog.argtypes = [wintypes.HWND, INT_PTR]


def align_struct(tmp):
    ''' align control structure to dword size '''
    dword_size = ctypes.sizeof(wintypes.DWORD)
    align = dword_size - (len(tmp) % dword_size)
    if align < dword_size:
        tmp += bytearray(align)
    return tmp


class DlgWindow(Thread):
    ''' Implements a threaded dialog template window '''

    def __init__(self):
        super().__init__()

        self._array = bytearray()  # DLGTEMPLATEEX structure buffer
        self._array += wintypes.WORD(1)  # dlgVer
        self._array += wintypes.WORD(0xFFFF)  # signature (0xFFFF marks an extended dialog template)
        self._array += wintypes.DWORD(0)  # helpID
        self._array += wintypes.DWORD(0)  # exStyle
        # WS_POPUP | WS_BORDER | WS_SYSMENU | WS_CAPTION | DS_MODALFRAME | DS_SETFONT | DS_CENTER
        self._array += wintypes.DWORD(0x80000000 | 0x00800000 | 0x00080000
                                      | 0x00C00000 | 0x80 | 0x40 | 0x800)  # style
        self._array += wintypes.WORD(0)  # cDlgItems
        self._array += wintypes.SHORT(0)  # x
        self._array += wintypes.SHORT(0)  # y
        self._array += wintypes.SHORT(200)  # cx
        self._array += wintypes.SHORT(200)  # cy
        self._array += wintypes.WORD(0)  # menu
        self._array += wintypes.WORD(0)  # windowClass
        self._array += ctypes.create_unicode_buffer('Test Dialog')  # title
        self._array += wintypes.WORD(9)  # pointsize
        self._array += wintypes.WORD(400)  # weight
        self._array += wintypes.BYTE(0)  # italic
        self._array += wintypes.BYTE(0)  # charset
        self._array += ctypes.create_unicode_buffer('MS Shell Dlg')  # typeface

    def _dlgproc(self, hwnd, msg, wparam, lparam):
        print(hwnd, msg, wparam, lparam)
        if msg == 16:  # WM_CLOSE
            EndDialog(hwnd, 0)
            return 1

        elif msg == 272:  # WM_INITDIALOG
            return 1

        return 0

    def run(self):
        ''' create the dialog window '''
        self._array = align_struct(self._array)
        DialogBoxIndirectParam(None,
                               (ctypes.c_ubyte * len(self._array)).from_buffer_copy(self._array),
                               None,
                               DIALOGPROC(self._dlgproc),
                               0)


def test_window():
    dlg = DlgWindow()
    dlg.start()


if __name__ == '__main__':
    test_window()

Thanks
Eren

Am Do., 16. Apr. 2020 um 18:33 Uhr schrieb Barry Scott <
ba...@barrys-emacs.org>:

>
>
> > On 16 Apr 2020, at 14:55, Eko palypse  wrote:
> >
> > Barry, sorry for sending you a private message yesterday, was not
> intended.
> >
> > No, I only have rudimentary knowledge of C++,
> > but it has been developing since I started using Cython.
> > I haven't done any stack analysis yet but hey, there's always a first
> time.
> > I think my stubbornness could be of help here :-)
>
> Its a 

Re: Threading module and embedded python

2020-04-16 Thread Barry Scott


> On 16 Apr 2020, at 14:55, Eko palypse  wrote:
> 
> Barry, sorry for sending you a private message yesterday, was not intended.
> 
> No, I only have rudimentary knowledge of C++,
> but it has been developing since I started using Cython.
> I haven't done any stack analysis yet but hey, there's always a first time.
> I think my stubbornness could be of help here :-)

It's very useful when the simple debug stuff fails.

> 
> Visual Studio reports that the last location of the thread is in _ctypes.pyd
> and the call stack window shows that the last execution is
> user32.dll!InternalDialogBox().
> 
> Call Stack
>
> user32.dll!InternalDialogBox()
> user32.dll!DialogBoxIndirectParamAorW()
> user32.dll!DialogBoxIndirectParamW()
> _ctypes.pyd!07fee7fc17e3()
> _ctypes.pyd!07fee7fbfee3()
> _ctypes.pyd!07fee7fbb4c5()
> _ctypes.pyd!07fee7fbc019()
> _ctypes.pyd!07fee7fb6dfa()
> python37.dll!_PyObject_FastCallKeywords(_object * callable=0x02fa8c78, _object * const * stack=0x05261c78, __int64 nargs=5, _object * kwnames=0x)

My guess is that you are missing an important parameter to the dialog that
allows it to be seen.
Test the dialog code outside of the embedded python, with a command line python.
I recall that you have to pass in the parent for a dialog. Maybe in the 
embedded version
you do not have an HWND that is usable, can NULL be used?
Is there a SHOW arg that you need to pass?

I'd check the MSDN docs for the call you are making and check every param is as 
required.

(It's been a long time since I did low-level win32 in anger, so forgive the lack
of solutions)


> 
> 
> The thread is neither suspended nor does it have any different status than
> the main thread
> which loops through its main event queue at this point.

It is suspended inside the user32.dll.

Barry

> 
> Thank you
> Eren
> 
> 
> Am Mi., 15. Apr. 2020 um 22:57 Uhr schrieb Barry :
> 
>> 
>> 
>>> On 15 Apr 2020, at 21:18, Eko palypse  wrote:
>>> 
>>> Thank you for your suggestion. I will give it a try.
>>> 
 What is the "stuck" thread doing? waiting for a lock?
>>> 
>>> No, it should open a dialog created with DialogBoxIndirectParamW
>>> <
>> https://docs.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-dialogboxindirectparamw
>>> 
>> 
>> I assume you are a C++ developer and can look at the stack of the thread.
>> What is the thread doing? Is it in python code? Is it in windows code?
>> 
>> Barry
>> 
>>> .
>>> 
>>> Eren
>>> 
 Am Mi., 15. Apr. 2020 um 20:12 Uhr schrieb Barry Scott <
 ba...@barrys-emacs.org>:
 
 
 
>> On 15 Apr 2020, at 13:30, Eko palypse  wrote:
> 
> Hi everyone,
> 
> the following happens on Windows7 x64 and Python37 x64
> 
> I have a plugin DLL for a C++ application in which Python37 is
>> embedded.
> The plugin itself works, except when I want to use the threading
>> module.
> 
> If I start a Python script in my plugin which uses the threading module
> I can verify via ProcessExplorer that the thread is started,
> but it doesn't do anything (??) and the c++ application doesn't really
 do anything anymore either.
> 
> Only when I stop the C++ Applikation, the thread becomes active for a
 short time.
> Verified with logging module over time print-outs.
> 
> Apparently I did not understand everything about threads and embedded
 python.
> 
> Any idea what I'm doing wrong?
 
 This is what I typically do.
 
 Make sure that you have installed the Python debug files.
 Now you can use the visual C++ debugger to attach to the process and
 look at what the threads are doing.
 
 I always have the python source code on hand to read as well.
 
 This should give you a clue.
 
 What is the "stuck" thread doing? waiting for a lock?
 
 Barry
 
 
 
 
> 
> 
> The whole thing is initialized by the DllMain routine.
> 
> 
> BOOL APIENTRY DllMain( HANDLE hModule,
> DWORD  reasonForCall,
> LPVOID /* lpReserved */ )
> {
>  switch ( reasonForCall )
>  {
>  case DLL_PROCESS_ATTACH:
>  if (!Py_IsInitialized())
>  {
>  PyImport_AppendInittab("Npp", _Npp);
>  Py_InitializeEx(0);
>  PyEval_InitThreads();  //<- this shouldn't be needed as I
 understand that it is called by Py_InitializeEx anyway
>  }
>  PyImport_ImportModule("Npp");
>  break;
>  case DLL_PROCESS_DETACH:
>  Py_Finalize();
>  break;
> 
>  case DLL_THREAD_ATTACH:
>  break;
> 
>  case DLL_THREAD_DETACH:
>  break;
>  }
> 
>  return TRUE;
> }
> 
> and the code in the plugin which executes the python scripts is this
> 
> cdef void run_code():
>  try:
>

Re: Threading module and embedded python

2020-04-16 Thread Eko palypse
Barry, sorry for sending you a private message yesterday, was not intended.

No, I only have rudimentary knowledge of C++,
but it has been developing since I started using Cython.
I haven't done any stack analysis yet but hey, there's always a first time.
I think my stubbornness could be of help here :-)

Visual Studio reports that the last location of the thread is in _ctypes.pyd
and the call stack window shows that the last execution is
user32.dll!InternalDialogBox().

Call Stack

user32.dll!InternalDialogBox()
user32.dll!DialogBoxIndirectParamAorW()
user32.dll!DialogBoxIndirectParamW()
_ctypes.pyd!07fee7fc17e3()
_ctypes.pyd!07fee7fbfee3()
_ctypes.pyd!07fee7fbb4c5()
_ctypes.pyd!07fee7fbc019()
_ctypes.pyd!07fee7fb6dfa()
python37.dll!_PyObject_FastCallKeywords(_object * callable=0x02fa8c78, _object * const * stack=0x05261c78, __int64 nargs=5, _object * kwnames=0x)


The thread is neither suspended nor does it have any different status than
the main thread
which loops through its main event queue at this point.

Thank you
Eren


Am Mi., 15. Apr. 2020 um 22:57 Uhr schrieb Barry :

>
>
> > On 15 Apr 2020, at 21:18, Eko palypse  wrote:
> >
> > Thank you for your suggestion. I will give it a try.
> >
> >> What is the "stuck" thread doing? waiting for a lock?
> >
> > No, it should open a dialog created with DialogBoxIndirectParamW
> > <
> https://docs.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-dialogboxindirectparamw
> >
>
> I assume you are a C++ developer and can look at the stack of the thread.
> What is the thread doing? Is it in python code? Is it in windows code?
>
> Barry
>
> > .
> >
> > Eren
> >
> >> Am Mi., 15. Apr. 2020 um 20:12 Uhr schrieb Barry Scott <
> >> ba...@barrys-emacs.org>:
> >>
> >>
> >>
>  On 15 Apr 2020, at 13:30, Eko palypse  wrote:
> >>>
> >>> Hi everyone,
> >>>
> >>> the following happens on Windows7 x64 and Python37 x64
> >>>
> >>> I have a plugin DLL for a C++ application in which Python37 is
> embedded.
> >>> The plugin itself works, except when I want to use the threading
> module.
> >>>
> >>> If I start a Python script in my plugin which uses the threading module
> >>> I can verify via ProcessExplorer that the thread is started,
> >>> but it doesn't do anything (??) and the c++ application doesn't really
> >> do anything anymore either.
> >>>
> >>> Only when I stop the C++ Applikation, the thread becomes active for a
> >> short time.
> >>> Verified with logging module over time print-outs.
> >>>
> >>> Apparently I did not understand everything about threads and embedded
> >> python.
> >>>
> >>> Any idea what I'm doing wrong?
> >>
> >> This is what I typically do.
> >>
> >> Make sure that you have installed the Python debug files.
> >> Now you can use the visual C++ debugger to attach to the process and
> >> look at what the threads are doing.
> >>
> >> I always have the python source code on hand to read as well.
> >>
> >> This should give you a clue.
> >>
> >> What is the "stuck" thread doing? waiting for a lock?
> >>
> >> Barry
> >>
> >>
> >>
> >>
> >>>
> >>>
> >>> The whole thing is initialized by the DllMain routine.
> >>>
> >>>
> >>> BOOL APIENTRY DllMain( HANDLE hModule,
> >>>  DWORD  reasonForCall,
> >>>  LPVOID /* lpReserved */ )
> >>> {
> >>>   switch ( reasonForCall )
> >>>   {
> >>>   case DLL_PROCESS_ATTACH:
> >>>   if (!Py_IsInitialized())
> >>>   {
> >>>   PyImport_AppendInittab("Npp", _Npp);
> >>>   Py_InitializeEx(0);
> >>>   PyEval_InitThreads();  //<- this shouldn't be needed as I
> >> understand that it is called by Py_InitializeEx anyway
> >>>   }
> >>>   PyImport_ImportModule("Npp");
> >>>   break;
> >>>   case DLL_PROCESS_DETACH:
> >>>   Py_Finalize();
> >>>   break;
> >>>
> >>>   case DLL_THREAD_ATTACH:
> >>>   break;
> >>>
> >>>   case DLL_THREAD_DETACH:
> >>>   break;
> >>>   }
> >>>
> >>>   return TRUE;
> >>> }
> >>>
> >>> and the code in the plugin which executes the python scripts is this
> >>>
> >>> cdef void run_code():
> >>>   try:
> >>>   global_dict = globals()
> >>>   if '__name__' not in global_dict or global_dict['__name__'] !=
> >> '__main__':
> >>>   global_dict.update({"__name__": "__main__",})
> >>>   exec(compile(editor.getText(), '', 'exec'), global_dict)
> >>>
> >>>   except Exception:
> >>>   MessageBoxW(nppData._nppHandle,
> >>>   traceback.format_exc(),
> >>>   'RUN CODE EXCEPTION',
> >>>   0)
> >>>
> >>> I don't know if this is important, but the DLL is generated by Cython.
> >>>
> >>> Thank you for reading and stay healthy
> >>>
> >>> Eren
> >>> --
> >>> https://mail.python.org/mailman/listinfo/python-list
> >>>
> >>
> >>
> > --
> > https://mail.python.org/mailman/listinfo/python-list
> >
>
>
-- 

Re: Threading module and embedded python

2020-04-15 Thread Barry


> On 15 Apr 2020, at 21:18, Eko palypse  wrote:
> 
> Thank you for your suggestion. I will give it a try.
> 
>> What is the "stuck" thread doing? waiting for a lock?
> 
> No, it should open a dialog created with DialogBoxIndirectParamW
> 

I assume you are a C++ developer and can look at the stack of the thread.
What is the thread doing? Is it in python code? Is it in windows code?

Barry

> .
> 
> Eren
> 
>> Am Mi., 15. Apr. 2020 um 20:12 Uhr schrieb Barry Scott <
>> ba...@barrys-emacs.org>:
>> 
>> 
>> 
 On 15 Apr 2020, at 13:30, Eko palypse  wrote:
>>> 
>>> Hi everyone,
>>> 
>>> the following happens on Windows7 x64 and Python37 x64
>>> 
>>> I have a plugin DLL for a C++ application in which Python37 is embedded.
>>> The plugin itself works, except when I want to use the threading module.
>>> 
>>> If I start a Python script in my plugin which uses the threading module
>>> I can verify via ProcessExplorer that the thread is started,
>>> but it doesn't do anything (??) and the c++ application doesn't really
>> do anything anymore either.
>>> 
>>> Only when I stop the C++ Applikation, the thread becomes active for a
>> short time.
>>> Verified with logging module over time print-outs.
>>> 
>>> Apparently I did not understand everything about threads and embedded
>> python.
>>> 
>>> Any idea what I'm doing wrong?
>> 
>> This is what I typically do.
>> 
>> Make sure that you have installed the Python debug files.
>> Now you can use the visual C++ debugger to attach to the process and
>> look at what the threads are doing.
>> 
>> I always have the python source code on hand to read as well.
>> 
>> This should give you a clue.
>> 
>> What is the "stuck" thread doing? waiting for a lock?
>> 
>> Barry
>> 
>> 
>> 
>> 
>>> 
>>> 
>>> The whole thing is initialized by the DllMain routine.
>>> 
>>> 
>>> BOOL APIENTRY DllMain( HANDLE hModule,
>>>  DWORD  reasonForCall,
>>>  LPVOID /* lpReserved */ )
>>> {
>>>   switch ( reasonForCall )
>>>   {
>>>   case DLL_PROCESS_ATTACH:
>>>   if (!Py_IsInitialized())
>>>   {
>>>   PyImport_AppendInittab("Npp", _Npp);
>>>   Py_InitializeEx(0);
>>>   PyEval_InitThreads();  //<- this shouldn't be needed as I
>> understand that it is called by Py_InitializeEx anyway
>>>   }
>>>   PyImport_ImportModule("Npp");
>>>   break;
>>>   case DLL_PROCESS_DETACH:
>>>   Py_Finalize();
>>>   break;
>>> 
>>>   case DLL_THREAD_ATTACH:
>>>   break;
>>> 
>>>   case DLL_THREAD_DETACH:
>>>   break;
>>>   }
>>> 
>>>   return TRUE;
>>> }
>>> 
>>> and the code in the plugin which executes the python scripts is this
>>> 
>>> cdef void run_code():
>>>   try:
>>>   global_dict = globals()
>>>   if '__name__' not in global_dict or global_dict['__name__'] !=
>> '__main__':
>>>   global_dict.update({"__name__": "__main__",})
>>>   exec(compile(editor.getText(), '', 'exec'), global_dict)
>>> 
>>>   except Exception:
>>>   MessageBoxW(nppData._nppHandle,
>>>   traceback.format_exc(),
>>>   'RUN CODE EXCEPTION',
>>>   0)
>>> 
>>> I don't know if this is important, but the DLL is generated by Cython.
>>> 
>>> Thank you for reading and stay healthy
>>> 
>>> Eren
>>> --
>>> https://mail.python.org/mailman/listinfo/python-list
>>> 
>> 
>> 
> -- 
> https://mail.python.org/mailman/listinfo/python-list
> 

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading module and embedded python

2020-04-15 Thread Eko palypse
Thank you for your suggestion. I will give it a try.

> What is the "stuck" thread doing? waiting for a lock?

No, it should open a dialog created with DialogBoxIndirectParamW

.

Eren

Am Mi., 15. Apr. 2020 um 20:12 Uhr schrieb Barry Scott <
ba...@barrys-emacs.org>:

>
>
> > On 15 Apr 2020, at 13:30, Eko palypse  wrote:
> >
> > Hi everyone,
> >
> > the following happens on Windows7 x64 and Python37 x64
> >
> > I have a plugin DLL for a C++ application in which Python37 is embedded.
> > The plugin itself works, except when I want to use the threading module.
> >
> > If I start a Python script in my plugin which uses the threading module
> > I can verify via ProcessExplorer that the thread is started,
> > but it doesn't do anything (??) and the c++ application doesn't really
> do anything anymore either.
> >
> > Only when I stop the C++ Applikation, the thread becomes active for a
> short time.
> > Verified with logging module over time print-outs.
> >
> > Apparently I did not understand everything about threads and embedded
> python.
> >
> > Any idea what I'm doing wrong?
>
> This is what I typically do.
>
> Make sure that you have installed the Python debug files.
> Now you can use the visual C++ debugger to attach to the process and
> look at what the threads are doing.
>
> I always have the python source code on hand to read as well.
>
> This should give you a clue.
>
> What is the "stuck" thread doing? waiting for a lock?
>
> Barry
>
>
>
>
> >
> >
> > The whole thing is initialized by the DllMain routine.
> >
> >
> > BOOL APIENTRY DllMain( HANDLE hModule,
> >   DWORD  reasonForCall,
> >   LPVOID /* lpReserved */ )
> > {
> >switch ( reasonForCall )
> >{
> >case DLL_PROCESS_ATTACH:
> >if (!Py_IsInitialized())
> >{
> >PyImport_AppendInittab("Npp", _Npp);
> >Py_InitializeEx(0);
> >PyEval_InitThreads();  //<- this shouldn't be needed as I
> understand that it is called by Py_InitializeEx anyway
> >}
> >PyImport_ImportModule("Npp");
> >break;
> >case DLL_PROCESS_DETACH:
> >Py_Finalize();
> >break;
> >
> >case DLL_THREAD_ATTACH:
> >break;
> >
> >case DLL_THREAD_DETACH:
> >break;
> >}
> >
> >return TRUE;
> > }
> >
> > and the code in the plugin which executes the python scripts is this
> >
> > cdef void run_code():
> >try:
> >global_dict = globals()
> >if '__name__' not in global_dict or global_dict['__name__'] !=
> '__main__':
> >global_dict.update({"__name__": "__main__",})
> >exec(compile(editor.getText(), '', 'exec'), global_dict)
> >
> >except Exception:
> >MessageBoxW(nppData._nppHandle,
> >traceback.format_exc(),
> >'RUN CODE EXCEPTION',
> >0)
> >
> > I don't know if this is important, but the DLL is generated by Cython.
> >
> > Thank you for reading and stay healthy
> >
> > Eren
> > --
> > https://mail.python.org/mailman/listinfo/python-list
> >
>
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading module and embedded python

2020-04-15 Thread Barry Scott



> On 15 Apr 2020, at 13:30, Eko palypse  wrote:
> 
> Hi everyone,
> 
> the following happens on Windows7 x64 and Python37 x64
> 
> I have a plugin DLL for a C++ application in which Python37 is embedded.
> The plugin itself works, except when I want to use the threading module.
> 
> If I start a Python script in my plugin which uses the threading module
> I can verify via ProcessExplorer that the thread is started,
> but it doesn't do anything (??) and the c++ application doesn't really do 
> anything anymore either.
> 
> Only when I stop the C++ Applikation, the thread becomes active for a short 
> time.
> Verified with logging module over time print-outs.
> 
> Apparently I did not understand everything about threads and embedded python.
> 
> Any idea what I'm doing wrong?

This is what I typically do.

Make sure that you have installed the Python debug files.
Now you can use the visual C++ debugger to attach to the process and
look at what the threads are doing.

I always have the python source code on hand to read as well.

This should give you a clue.

What is the "stuck" thread doing? waiting for a lock?

Barry




> 
> 
> The whole thing is initialized by the DllMain routine.
> 
> 
> BOOL APIENTRY DllMain( HANDLE hModule,
>   DWORD  reasonForCall,
>   LPVOID /* lpReserved */ )
> {
>switch ( reasonForCall )
>{
>case DLL_PROCESS_ATTACH:
>if (!Py_IsInitialized())
>{
>PyImport_AppendInittab("Npp", _Npp);
>Py_InitializeEx(0);
>PyEval_InitThreads();  //<- this shouldn't be needed as I 
> understand that it is called by Py_InitializeEx anyway
>}
>PyImport_ImportModule("Npp");
>break;
>case DLL_PROCESS_DETACH:
>Py_Finalize();
>break;
> 
>case DLL_THREAD_ATTACH:
>break;
> 
>case DLL_THREAD_DETACH:
>break;
>}
> 
>return TRUE;
> }
> 
> and the code in the plugin which executes the python scripts is this
> 
> cdef void run_code():
>try:
>global_dict = globals()
>if '__name__' not in global_dict or global_dict['__name__'] != 
> '__main__':
>global_dict.update({"__name__": "__main__",})
>exec(compile(editor.getText(), '', 'exec'), global_dict)
> 
>except Exception:
>MessageBoxW(nppData._nppHandle,
>traceback.format_exc(),
>'RUN CODE EXCEPTION',
>0)  
> 
> I don't know if this is important, but the DLL is generated by Cython.
> 
> Thank you for reading and stay healthy
> 
> Eren
> -- 
> https://mail.python.org/mailman/listinfo/python-list
> 

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading

2020-01-25 Thread Cameron Simpson

On 24Jan2020 21:08, Dennis Lee Bieber  wrote:

My suggestion for your capacity thing: use a Semaphore, which is a
special thread safe counter which cannot go below zero.

   from threading import Semaphore

   def start_test(sem, args...):
   sem.acquire()
   ... do stuff with args ...
   sem.release()

   sem = Semaphore(10)

   threads = []
   for item in big_list:
   t = Thread(target=start_test, args=(sem, item))
   t.start()
   threads.append(t)
   ... wait for all the threads here ...

This version starts many threads, but only 10 at a time will do "work"
because they stall until they can acquire the Semaphore. The first 10
acquire it immediately, then the later only stall until an earlier
Thread releases the Semaphore.


You are actually proposing to create {200} threads, with related stack
and thread overhead -- and then block all but 10, releasing a blocked
thread only when a previous unblocked thread exits?


Well, yeah, but largely because semaphores are overlooked as a resource 
constraint tool, and because the expression is simple and clear.


I'd much prefer to create only 10 threads with the semaphore control in 
the thread dispatcher, but it was harder to write and be clear in its 
intent. Basic concepts first, superior complication later.
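
For what it's worth, a minimal sketch of that dispatcher-side variant, 
where the dispatcher acquires the Semaphore before spawning; the worker 
body is a placeholder and dispatch() is just an illustrative name. It 
still creates one thread per item, but no more than the limit are ever 
alive at once:

    from threading import Semaphore, Thread
    import time

    def worker(sem, item):
        try:
            time.sleep(1)          # placeholder for the real per-item work
        finally:
            sem.release()          # hand the slot back to the dispatcher

    def dispatch(items, limit=10):
        sem = Semaphore(limit)
        threads = []
        for item in items:
            sem.acquire()          # stalls here once `limit` workers are alive
            t = Thread(target=worker, args=(sem, item))
            t.start()
            threads.append(t)
        for t in threads:
            t.join()

    dispatch(range(200))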


I also was wanting a scheme where the "set it all up" phase could be 
fast (== start all the threads, wait later) versus processing a 
capacity-limited queue (takes a long time, stalling the "main" 
programme). Of course one might dispatch a thread to run the queue...


I'm aware this makes a lot of threads and they're not free, that's a 
very valid criticism.


Cheers,
Cameron Simpson 
--
https://mail.python.org/mailman/listinfo/python-list


Re: Threading

2020-01-24 Thread Matt
>  Not quite.
>
> 1. Create a list of threads.
>
> 2. Put the items into a _queue_, not a list.
>
> 3. Start the threads.
>
> 4. Iterate over the list of threads, using .join() on each.
>
> If you're going to start the threads before you've put all of the items
> into the queue, you can also put a sentinel such as None into the queue
> after you've finished putting the items into it. When a thread sees the
> sentinel, it knows there are no more items to come. You could have one
> sentinel for each thread, or have only one and have each thread put it
> back when it sees it, for the other threads to see.
>


Is the list not thread safe, so that I need to use a Queue instead, or is it
just that using a Queue is more efficient?  I think I have everything
else you mentioned changed. I'm even on Python 3 now, though I still need to
work in Python 2 in places for the time being. Thanks.

import time
import datetime
import threading
import random

big_list = []

def date_stamp():
    return "[" + datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S:%f')[:-3] + "] "

for i in range(1, 5000):
    big_list.append(i)

def start_test(id):
    while big_list:
        list_item = big_list.pop()
        print(date_stamp(), "Thread", id, ":", list_item, port)
        time.sleep(random.random())
    print(date_stamp(), "Thread", id, "done...")

print(date_stamp(), "Creating Threads...")

port = 80
threads = []
for i in range(1, 10):
    t = threading.Thread(target=start_test, args=(i,))
    print(date_stamp(), "Starting Thread:", i)
    t.start()
    threads.append(t)

print(date_stamp(), "Waiting on Threads...")

for t in threads:
    t.join()

print(date_stamp(), "Finished...")
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading

2020-01-24 Thread Cameron Simpson
First come remarks, then a different suggestion about your capacity 
problem (no more than 10). Details below.


On 24Jan2020 16:03, Matt  wrote:

Created this example and it runs.

[...]

big_list = []
for i in range(1, 200):
   big_list.append(i)


You can just go:

   big_list.extend(range(1,200))


def start_test():
   while big_list: #is this


This tests if big_list is nonempty. In Python, most "containers" (lists, 
tuples, dicts, sets etc) are "falsey" if empty and "true" if nonempty.



   list_item = big_list.pop() #and this thread safe


Don't think so.


   print list_item, port


Please use Python 3; Python 2 is end of life. So:

   print(list_item, port)


   time.sleep(1)

print "Creating Threads..."

port = 80
for i in range(1, 10):
   t = threading.Thread(target=start_test)
   t.start()


This starts only 10 Threads.


print "Waiting on Threads..."
t.join()


This waits for only the last Thread.

My suggestion for your capacity thing: use a Semaphore, which is a 
special thread safe counter which cannot go below zero.


   from threading import Semaphore

   def start_test(sem, args...):
   sem.acquire()
   ... do stuff with args ...
   sem.release()

   sem = Semaphore(10)

   threads = []
   for item in big_list:
   t = Thread(target=start_test, args=(sem, item))
   t.start()
   threads.append(t)
   ... wait for all the threads here ...

This version starts many threads, but only 10 at a time will do "work" 
because they stall until they can acquire the Semaphore. The first 10 
acquire it immediately, then the later only stall until an earlier 
Thread releases the Semaphore.


Cheers,
Cameron Simpson 
--
https://mail.python.org/mailman/listinfo/python-list


Re: Threading

2020-01-24 Thread Chris Angelico
On Sat, Jan 25, 2020 at 9:05 AM Matt  wrote:
>
> Created this example and it runs.
>
> import time
> import threading
>
> big_list = []
>
> for i in range(1, 200):
> big_list.append(i)
>
> def start_test():
> while big_list: #is this
> list_item = big_list.pop() #and this thread safe
> print list_item, port
> time.sleep(1)
>
> print "Creating Threads..."
>
> port = 80
> for i in range(1, 10):
> t = threading.Thread(target=start_test)
> t.start()
>
> print "Waiting on Threads..."
>
> t.join()
>
> print "Finished..."
>

Please don't top-post. Also, switch to a Python 3 interpreter before
you end up filling your code with Py2isms and make your job harder
later.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading

2020-01-24 Thread Matt
Created this example and it runs.

import time
import threading

big_list = []

for i in range(1, 200):
    big_list.append(i)

def start_test():
    while big_list: #is this
        list_item = big_list.pop() #and this thread safe
        print list_item, port
        time.sleep(1)

print "Creating Threads..."

port = 80
for i in range(1, 10):
    t = threading.Thread(target=start_test)
    t.start()

print "Waiting on Threads..."

t.join()

print "Finished..."

On Fri, Jan 24, 2020 at 2:44 PM Chris Angelico  wrote:
>
> On Sat, Jan 25, 2020 at 7:35 AM Matt  wrote:
> >
> > I am using this example for threading in Python:
> >
> > from threading import Thread
> >
> > def start_test( address, port ):
> > print address, port
> > sleep(1)
> >
> > for line in big_list:
> > t = Thread(target=start_test, args=(line, 80))
> > t.start()
> >
> > But say big_list has thousands of items and I only want to have a
> > maximum of 10 threads open.  How do work my way through the big_list
> > with only 10 threads for example?
>
> First off, it is high time you move to Python 3, as the older versions
> of Python have reached end-of-life.
>
> The best way is to create your ten threads, and have each one request
> "jobs" (for whatever definition of job you have) from a queue. Once
> the queue is exhausted, the threads terminate cleanly, and then you
> can join() each thread to wait for the entire queue to be completed.
>
> ChrisA
> --
> https://mail.python.org/mailman/listinfo/python-list
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading

2020-01-24 Thread MRAB

On 2020-01-24 21:33, Matt wrote:

So I would create 10 threads.  And each would pop items off list like so?


def start_test():
 while big_list:
 list_item = big_list.pop()
 print list_item, port
 sleep(1)

port = 80
for i = 1 to 10
 t = Thread(target=start_test)
 t.start()

t.join()


Not quite.

1. Create a list of threads.

2. Put the items into a _queue_, not a list.

3. Start the threads.

4. Iterate over the list of threads, using .join() on each.

If you're going to start the threads before you've put all of the items 
into the queue, you can also put a sentinel such as None into the queue 
after you've finished putting the items into it. When a thread sees the 
sentinel, it knows there are no more items to come. You could have one 
sentinel for each thread, or have only one and have each thread put it 
back when it sees it, for the other threads to see.
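
A minimal sketch of that queue-plus-sentinel arrangement, assuming 
Python 3 and a single shared sentinel which each worker puts back for 
the others; the print stands in for the real work:

    import queue
    import threading

    def worker(q):
        while True:
            item = q.get()
            if item is None:       # the sentinel: no more items are coming
                q.put(None)        # put it back for the other workers to see
                break
            print(item)            # placeholder for the real work

    q = queue.Queue()
    threads = [threading.Thread(target=worker, args=(q,)) for _ in range(10)]
    for t in threads:
        t.start()

    for item in range(1, 200):
        q.put(item)
    q.put(None)                    # one shared sentinel, passed along by the workers

    for t in threads:
        t.join()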





Would that be thread safe?

On Fri, Jan 24, 2020 at 2:44 PM Chris Angelico  wrote:


On Sat, Jan 25, 2020 at 7:35 AM Matt  wrote:
>
> I am using this example for threading in Python:
>
> from threading import Thread
>
> def start_test( address, port ):
> print address, port
> sleep(1)
>
> for line in big_list:
> t = Thread(target=start_test, args=(line, 80))
> t.start()
>
> But say big_list has thousands of items and I only want to have a
> maximum of 10 threads open.  How do work my way through the big_list
> with only 10 threads for example?

First off, it is high time you move to Python 3, as the older versions
of Python have reached end-of-life.

The best way is to create your ten threads, and have each one request
"jobs" (for whatever definition of job you have) from a queue. Once
the queue is exhausted, the threads terminate cleanly, and then you
can join() each thread to wait for the entire queue to be completed.

ChrisA
--
https://mail.python.org/mailman/listinfo/python-list




--
https://mail.python.org/mailman/listinfo/python-list


Re: Threading

2020-01-24 Thread Matt
So I would create 10 threads.  And each would pop items off list like so?


def start_test():
    while big_list:
        list_item = big_list.pop()
        print list_item, port
        sleep(1)

port = 80
for i = 1 to 10
    t = Thread(target=start_test)
    t.start()

t.join()



Would that be thread safe?

On Fri, Jan 24, 2020 at 2:44 PM Chris Angelico  wrote:
>
> On Sat, Jan 25, 2020 at 7:35 AM Matt  wrote:
> >
> > I am using this example for threading in Python:
> >
> > from threading import Thread
> >
> > def start_test( address, port ):
> > print address, port
> > sleep(1)
> >
> > for line in big_list:
> > t = Thread(target=start_test, args=(line, 80))
> > t.start()
> >
> > But say big_list has thousands of items and I only want to have a
> > maximum of 10 threads open.  How do work my way through the big_list
> > with only 10 threads for example?
>
> First off, it is high time you move to Python 3, as the older versions
> of Python have reached end-of-life.
>
> The best way is to create your ten threads, and have each one request
> "jobs" (for whatever definition of job you have) from a queue. Once
> the queue is exhausted, the threads terminate cleanly, and then you
> can join() each thread to wait for the entire queue to be completed.
>
> ChrisA
> --
> https://mail.python.org/mailman/listinfo/python-list
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading

2020-01-24 Thread Dan Sommers
On Sat, 25 Jan 2020 07:43:49 +1100
Chris Angelico  wrote:

> On Sat, Jan 25, 2020 at 7:35 AM Matt  wrote:
> >
> > I am using this example for threading in Python:
> >
> > from threading import Thread
> >
> > def start_test( address, port ):
> > print address, port
> > sleep(1)
> >
> > for line in big_list:
> > t = Thread(target=start_test, args=(line, 80))
> > t.start()
> >
> > But say big_list has thousands of items and I only want to have a
> > maximum of 10 threads open.  How do work my way through the big_list
> > with only 10 threads for example?
> 
> First off, it is high time you move to Python 3, as the older versions
> of Python have reached end-of-life.
> 
> The best way is to create your ten threads, and have each one request
> "jobs" (for whatever definition of job you have) from a queue. Once
> the queue is exhausted, the threads terminate cleanly, and then you
> can join() each thread to wait for the entire queue to be completed.

Or use a thread pool:

https://docs.python.org/3/library/concurrent.futures.html
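
For example, a minimal sketch with ThreadPoolExecutor from that module; 
start_test here is only a placeholder for the real per-item work:

    from concurrent.futures import ThreadPoolExecutor
    import time

    def start_test(address, port=80):
        time.sleep(1)              # placeholder for the real work
        return address, port

    with ThreadPoolExecutor(max_workers=10) as pool:
        for address, port in pool.map(start_test, range(1, 200)):
            print(address, port)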

Dan

-- 
“Atoms are not things.” – Werner Heisenberg
Dan Sommers, http://www.tombstonezero.net/dan
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading

2020-01-24 Thread Chris Angelico
On Sat, Jan 25, 2020 at 7:35 AM Matt  wrote:
>
> I am using this example for threading in Python:
>
> from threading import Thread
>
> def start_test( address, port ):
> print address, port
> sleep(1)
>
> for line in big_list:
> t = Thread(target=start_test, args=(line, 80))
> t.start()
>
> But say big_list has thousands of items and I only want to have a
> maximum of 10 threads open.  How do work my way through the big_list
> with only 10 threads for example?

First off, it is high time you move to Python 3, as the older versions
of Python have reached end-of-life.

The best way is to create your ten threads, and have each one request
"jobs" (for whatever definition of job you have) from a queue. Once
the queue is exhausted, the threads terminate cleanly, and then you
can join() each thread to wait for the entire queue to be completed.
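
A bare-bones sketch of that shape, assuming the whole job list is queued 
up before the workers start, so an empty queue means there is no more 
work; the print stands in for the real start_test:

    import queue
    import threading

    def worker(jobs):
        while True:
            try:
                address = jobs.get_nowait()   # filled up front, so empty means done
            except queue.Empty:
                break
            print(address, 80)                # placeholder for the real work

    jobs = queue.Queue()
    for line in range(1, 200):                # stands in for big_list
        jobs.put(line)

    threads = [threading.Thread(target=worker, args=(jobs,)) for _ in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()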

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


RE: threading

2019-12-04 Thread David Raymond
100 increments happen very fast, which means each thread will probably complete 
before the main thread has even started the next one. Bump that up to 1_000_000 
or so and you'll probably trigger it.

I did a test with a print(x) at the start of test() to see what the number was 
when each thread kicked off, and the very first thread had got it up to 655,562 
by the time the second thread had started and gotten to that print statement.
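
For reference, a sketch of the same test with the larger count and the 
lock actually applied; 10 threads instead of 100 here, purely to keep 
the run time down:

    import threading

    x = 0
    lock = threading.Lock()

    def test():
        global x
        for i in range(1_000_000):
            with lock:             # protects the read-modify-write of x
                x += 1

    threads = [threading.Thread(target=test) for _ in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(x)    # 10000000 every time; without the lock the total can come up short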


-Original Message-
From: Python-list  On 
Behalf Of ast
Sent: Wednesday, December 4, 2019 10:18 AM
To: python-list@python.org
Subject: threading

Hi

An operation like x+=1 on a global variable x is not thread safe because 
there can be a thread switch between reading and writing to x.
The correct way is to use a lock

lock = threading.Lock()

with lock:
 x+=1

I tried to write a program without the lock which should fail.
Here it is:

import threading

x = 0

def test():
 global x
 for i in range(100):
 x+=1

threadings = []

for i in range(100):
 t = threading.Thread(target=test)
 threadings.append(t)
 t.start()

for t in threadings:
 t.join()

print(x)

1

The result is always correct: 1
Why ?

Secondly, how is the switch between threads done by the processor? Is 
there a hardware interrupt coming from a timer?
-- 
https://mail.python.org/mailman/listinfo/python-list
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading Keyboard Interrupt issue

2019-05-29 Thread eryk sun
On 5/29/19, Dennis Lee Bieber  wrote:
>
>   In the OP's example code, with just one thread started, the easiest
> solution is to use
>
>   y.start()
>   y.join()
>
> to block the main thread. That will, at least, let the try/except catch the
> interrupt. It does not, however, kill the sub-thread.

join() can't be interrupted by Ctrl+C in Windows. To work around this,
we can join a thread with a short timeout in a loop. If we're managing
queued work items with a thread pool, note that Queue.join doesn't
support a timeout. In this case we need to poll empty() in a loop with
a short time.sleep(). Or we can let the main thread block and use
ctypes to install a console control handler that sets a flag that
tells child threads to exit. Windows calls the control handler in a
new thread.
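
Applied to the original example, that looks roughly like this; the loop 
thread is made a daemon here so the process can actually exit once the 
main thread is done:

    import threading
    import time

    def loop():
        while True:
            print('hello')
            time.sleep(5)

    y = threading.Thread(target=loop, daemon=True)
    y.start()
    try:
        while y.is_alive():
            y.join(timeout=0.5)    # a short timeout keeps the main thread interruptible
    except KeyboardInterrupt:
        print('hi')
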
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading Keyboard Interrupt issue

2019-05-29 Thread eryk sun
On 5/29/19, David Raymond  wrote:
>
> Keyboard interrupts are only received by the main thread, which in this case
> completes real quick.
>
> So what happens for me is that the main thread runs to completion instantly
> and leaves nothing alive to receive the keyboard interrupt, which means the
> loop thread will run forever until killed externally. (Task manager,
> ctrl-break, etc)

The main thread is still running in order to join non-daemon threads.
In Windows, the internal wait used to join a thread can't be
interrupted by Ctrl+C, unlike POSIX platforms.
The Windows build could be modified to support Ctrl+C in this case,
but I'm only certain about the current build that uses emulated
condition variables.

When I run the OP's script in Linux, acquiring the internal
thread-state lock (which normally waits until the lock is reset when
the thread exits) gets interrupted by the SIGINT signal, and
KeyboardInterrupt is raised:

Exception ignored in: 
Traceback (most recent call last):
  File "/usr/lib/python3.6/threading.py", line 1294, in _shutdown
t.join()
  File "/usr/lib/python3.6/threading.py", line 1056, in join
self._wait_for_tstate_lock()
  File "/usr/lib/python3.6/threading.py", line 1072, in
_wait_for_tstate_lock
elif lock.acquire(block, timeout):
KeyboardInterrupt
-- 
https://mail.python.org/mailman/listinfo/python-list


RE: Threading Keyboard Interrupt issue

2019-05-29 Thread David Raymond
That's a little weird, and my running it works slightly differently. Please 
paste exactly what you're running (no time import and true being lowercase for 
example means we couldn't copy and paste it and have it immediately run)

In your script, the main thread hits y.start() which completes successfully as 
soon as the new thread gets going, so it exits the try/except block as a 
success. Then since there's no more code, the main thread completes.

The loop thread you started inherits the daemon-ness of the thread that called 
it, so by default it's started as a regular thread, and not a daemon thread. As 
a regular thread it will keep going even when the main thread completes.

Keyboard interrupts are only received by the main thread, which in this case 
completes real quick.

So what happens for me is that the main thread runs to completion instantly and 
leaves nothing alive to receive the keyboard interrupt, which means the loop 
thread will run forever until killed externally. (Task manager, ctrl-break, etc)

In this case, even if the main thread _was_ still alive to catch the keyboard 
interrupt, that exception does not get automatically passed to all threads, 
only the main one. So the main thread would have to catch the exception, then 
use one of the available signaling mechanisms to let the other threads know, 
and each of those other threads would have to consciously check for your signal 
of choice to see if the main thread wanted them to shut down.

Or, the other threads would have to be declared as daemonic before they were 
started, in which case they would be killed automatically once all non-daemonic 
threads had ended.
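
A sketch of that signalling idea, using threading.Event as the flag; any 
shared flag the worker checks periodically would do:

    import threading

    stop = threading.Event()

    def loop():
        while not stop.is_set():   # the worker checks the flag instead of looping forever
            print('hello')
            stop.wait(5)           # sleeps up to 5 seconds, wakes early if the flag is set

    y = threading.Thread(target=loop)
    y.start()
    try:
        while y.is_alive():
            y.join(timeout=0.5)
    except KeyboardInterrupt:
        stop.set()                 # tell the worker to finish
        y.join()
        print('hi')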


-Original Message-
From: Python-list 
[mailto:python-list-bounces+david.raymond=tomtom@python.org] On Behalf Of 
nihar Modi
Sent: Wednesday, May 29, 2019 4:39 AM
To: python-list@python.org
Subject: Threading Keyboard Interrupt issue

I have written a simple code that involves threading, but it does not go to
except clause after Keyboard interrupt. Can you suggest a way out. I have
pasted the code below. It does not print 'hi' after keyboard interrupt and
just stops.

import threading

def loop():
 while true:
  print('hello')
  time.sleep(5)

if __name__ == '__main__':
 try:
  y = threading.Thread(target = loop, args = ())
  y.start()
 except KeyboardInterrupt:
  print('hi')

The program does not print hi and terminates immediately after ctrl+c
-- 
https://mail.python.org/mailman/listinfo/python-list
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading Keyboard Interrupt issue

2019-05-29 Thread Chris Angelico
On Thu, May 30, 2019 at 1:45 AM nihar Modi  wrote:
>
> I have written a simple code that involves threading, but it does not go to
> except clause after Keyboard interrupt. Can you suggest a way out. I have
> pasted the code below. It does not print 'hi' after keyboard interrupt and
> just stops.

Threads allow multiple things to run at once. The entire *point* of
spinning off a new thread is that the main code keeps going even while
the thread runs. They are independent.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading is foobared?

2016-04-04 Thread Mark Sapiro
Mark Sapiro wrote:
> Random832 wrote:
> 
>> Any chance that it could fix reference headers to match?
>> 
>> Actually, merely prepending the original Message-ID itself to the
>> references header might be enough to change the reply's situation from
>> "nephew" ("reply to [missing] sibling") to "grandchild" ("reply to
>> [missing] reply"), which might be good enough to make threading work
>> right on most clients, and would be *easy* (whereas maintaining an
>> ongoing reversible mapping may not be).
>> 
>> And if it's not too much additional work, maybe throw in an
>> X-Mailman-Original-Message-ID (and -References if anything is done with
>> that) field, so that the original state can be recovered.
> 
> 
> I think these are good ideas. I'm going to try to do something along
> these lines.


This is now implemented on mail.python.org for python-list@python.org
and the others that gateway to Usenet.

I hope this will mitigate at least some of the threading issues.

As noted earlier in this thread, the original Message-ID: is appended,
not prepended to References:. More specifically, if there is a
References: header, the original Message-ID: is appended. If not, one is
created with the In-Reply-To: value if any and the original Message-ID:.

-- 
Mark Sapiro The highway is for gamblers,
San Francisco Bay Area, Californiabetter use your sense - B. Dylan
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading is foobared?

2016-03-31 Thread Mark Sapiro
Random832 wrote:

> One additional thing that would be nice and would solve most of the
> duplicate problem with hypothetically including the rewritten
> Message-IDs in outgoing emails, would be to detect crossposts to
> multiple lists in the same Mailman instance, and to send them to Usenet
> (and to subscribers) as a single message, with appropriate headers for a
> crosspost.


This is difficult to do for various reasons. The main issue is that gating to
news is done asynchronously by a separate process. Even if the process
could reliably determine that another gatewayed list in the installation
was a recipient of this post, which it could only do by examining
explicit addressees (and the other list might be a Bcc:), we'd still have
to arbitrate somehow which post gets gatewayed to the multiple news
groups and which ones get dropped. Although I suppose we could send each
one for all the news groups and let the news server figure it out.

Anyway, I don't plan to try this.

-- 
Mark Sapiro The highway is for gamblers,
San Francisco Bay Area, Californiabetter use your sense - B. Dylan
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading is foobared?

2016-03-31 Thread Mark Sapiro
Random832 wrote:

> Any chance that it could fix reference headers to match?
> 
> Actually, merely prepending the original Message-ID itself to the
> references header might be enough to change the reply's situation from
> "nephew" ("reply to [missing] sibling") to "grandchild" ("reply to
> [missing] reply"), which might be good enough to make threading work
> right on most clients, and would be *easy* (whereas maintaining an
> ongoing reversible mapping may not be).
> 
> And if it's not too much additional work, maybe throw in an
> X-Mailman-Original-Message-ID (and -References if anything is done with
> that) field, so that the original state can be recovered.


I think these are good ideas. I'm going to try to do something along
these lines.


> Rather than exclusively rewriting for usenet, maybe the rewritten
> headers could also be included in outgoing emails and the archive?
> 
> Putting it in outgoing emails would solve the problem entirely, though
> it would mean people get duplicates if they're subscribed to multiple
> lists to which something is posted or get CC'd. The archive wouldn't
> have this issue.


This is more difficult since archiving, gatewaying to Usenet and
delivery to list members are asynchronous processes that have no way to
communicate with each other.

It could be accomplished by doing a Usenet check in the incoming
pipeline and putting the Mailman Message-ID in the message metadata or
doing the mods at that point, but I don't think I want to expand the
scope of something that is non RFC compliant in the first place.

I need to think about these things some more.

-- 
Mark Sapiro The highway is for gamblers,
San Francisco Bay Area, Californiabetter use your sense - B. Dylan
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading is foobared?

2016-03-31 Thread Steven D'Aprano
On Thursday 31 March 2016 15:50, Mark Sapiro wrote:

> Hi all,
> 
> I'm jumping in on this thread because Tim asked.
[...]


Thanks for the explanation!



-- 
Steve

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading is foobared?

2016-03-30 Thread Random832
On Thu, Mar 31, 2016, at 01:25, Random832 wrote:
> Actually, merely prepending the original Message-ID itself

append, not prepend... I'd misremembered the order that References go
in.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading is foobared?

2016-03-30 Thread Random832
On Thu, Mar 31, 2016, at 01:25, Random832 wrote:
> > if a message is cross-posted to two lists which both gateway to Usenet,
> > and Mailman didn't make the Message-IDs unique, the news server would
> > discard one of the two posts as a duplicate and the post would be
> > missing from one of the recipient Usenet groups.

One additional thing that would be nice and would solve most of the
duplicate problem with hypothetically including the rewritten
Message-IDs in outgoing emails, would be to detect crossposts to
multiple lists in the same Mailman instance, and to send them to Usenet
(and to subscribers) as a single message, with appropriate headers for a
crosspost.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading is foobared?

2016-03-30 Thread Random832
On Thu, Mar 31, 2016, at 00:50, Mark Sapiro wrote:
> What Mailman does do as noted by Random832 is replace the Message-ID:
> header value in posts gated to Usenet with a list specific, Mailman
> generated unique value. There is a reason for this, and that reason is
> if a message is cross-posted to two lists which both gateway to Usenet,
> and Mailman didn't make the Message-IDs unique, the news server would
> discard one of the two posts as a duplicate and the post would be
> missing from one of the recipient Usenet groups.
> 
> Granted that this is bad and breaks threading, but avoiding message loss
> is a more important goal.
> 
> I understand I'm not providing any solutions here, but perhaps a more
> complete understanding of what the issues are will ease the pain.

Any chance that it could fix reference headers to match?

Actually, merely prepending the original Message-ID itself to the
references header might be enough to change the reply's situation from
"nephew" ("reply to [missing] sibling") to "grandchild" ("reply to
[missing] reply"), which might be good enough to make threading work
right on most clients, and would be *easy* (whereas maintaining an
ongoing reversible mapping may not be).

And if it's not too much additional work, maybe throw in an
X-Mailman-Original-Message-ID (and -References if anything is done with
that) field, so that the original state can be recovered.

Rather than exclusively rewriting for usenet, maybe the rewritten
headers could also be included in outgoing emails and the archive?

Putting it in outgoing emails would solve the problem entirely, though
it would mean people get duplicates if they're subscribed to multiple
lists to which something is posted or get CC'd. The archive wouldn't
have this issue.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading is foobared?

2016-03-30 Thread Mark Sapiro
Hi all,

I'm jumping in on this thread because Tim asked.

I'm here because I'm a Mailman developer and the primary maintainer of
Mailman for the @python.org lists.

Regarding the initial post in this thread from Steven D'Aprano
suggesting that broken threading is more common recently and quoting a
couple of Message-ID:/References: headers wherein a message ID was
apparently munged from

<1392737302.749065.1459024715818.javamail.ya...@mail.yahoo.com>
to
<1392737302.749065.1459024715818.javamail.yahoo@mail.yahoo.com>

Some Background:

Our long time mail.python.org server provided by xs4all died of severe
hardware failure late last October. We were able to get a replacement
server through the PSF and get it configured and running within a couple
of days, but this new server couldn't access the nntp server at xs4all.
With the kind assistance of members of the community we were able to get
access to a news server at the Free University of Berlin which is now
our gateway to Usenet.

This server undoubtedly has different policies and behaviors from the
prior server at xs4all. I'm not sure what mail or news server is
responsible for munging the IDs as above, but it could be our new Usenet
gateway. All I know for sure is that Mailman doesn't do that specific
munging.

What Mailman does do as noted by Random832 is replace the Message-ID:
header value in posts gated to Usenet with a list specific, Mailman
generated unique value. There is a reason for this, and that reason is
if a message is cross-posted to two lists which both gateway to Usenet,
and Mailman didn't make the Message-IDs unique, the news server would
discard one of the two posts as a duplicate and the post would be
missing from one of the recipient Usenet groups.

Granted that this is bad and breaks threading, but avoiding message loss
is a more important goal.

I understand I'm not providing any solutions here, but perhaps a more
complete understanding of what the issues are will ease the pain.

-- 
Mark Sapiro The highway is for gamblers,
San Francisco Bay Area, Californiabetter use your sense - B. Dylan



signature.asc
Description: OpenPGP digital signature
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading is foobared?

2016-03-30 Thread Sven R. Kunze

On 30.03.2016 01:43, Steven D'Aprano wrote:

On Tue, 29 Mar 2016 09:26 pm, Sven R. Kunze wrote:


On 27.03.2016 05:01, Steven D'Aprano wrote:

Am I the only one who has noticed that threading of posts here is
severely broken? It's always been the case that there have been a few
posts here and there that break threading, but now it seems to be much
more common.

I agree. Didn't we both already have a conversation about this? I
thought it is my thunderbird messing things up.

I'm not using Thunderbird, so whatever the cause of the problem, it is not
specific to Thunderbird.






Haha, how nice. My thread view shows your reply as a sibling not a child 
to my mail. I assume you replied to my mail. How strange.



Best,
Sven
--
https://mail.python.org/mailman/listinfo/python-list


Re: Threading is foobared?

2016-03-29 Thread Random832
On Tue, Mar 29, 2016, at 19:54, Rob Gaddi wrote:
> Just read on Usenet instead of through the mailing list.  That way
> you can accept broken threading as a given rather than wonder why it's
> happening in a particular case.

It's a given everywhere. Any thread that contains a sufficient number of
replies from both users using usenet and users using the mailing list
(gmane counts as the mailing list, since it _doesn't_ do the broken
stuff) is going to be broken for everyone everywhere, though it will be
broken in different places. For users reading by the mailing list,
Usenet users' replies to Mailing List users will be broken (but their
replies to each other will be fine). For users reading by Usenet,
Mailing List users' replies to each other will be broken (though all
replies made via Usenet or to Usenet users will be fine).
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading is foobared?

2016-03-29 Thread Rob Gaddi
Steven D'Aprano wrote:

> Am I the only one who has noticed that threading of posts here is severely
> broken? It's always been the case that there have been a few posts here and
> there that break threading, but now it seems to be much more common.
>
> For instance, I see Jerry Martens' post "help with program". According to my
> newsreader, KNode, there are no replies to that thread. But I see a reply
> from Chris A. Chris' reply has a header line:
>
> In-Reply-To: <1392737302.749065.1459024715818.javamail.ya...@mail.yahoo.com>
>
> but Jerry's original has:
>
> References:
> <1392737302.749065.1459024715818.javamail.yahoo@mail.yahoo.com>
>
> Notice the difference? Here the two values lined up:
>
> <1392737302.749065.1459024715818.javamail.yahoo@mail.yahoo.com>
> <1392737302.749065.1459024715818.javamail.ya...@mail.yahoo.com>
>

Just read on Usenet instead of through the mailing list.  That way
you can accept broken threading as a given rather than wonder why it's
happening in a particular case.

-- 
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order.  See above to fix.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading is foobared?

2016-03-29 Thread Steven D'Aprano
On Tue, 29 Mar 2016 09:26 pm, Sven R. Kunze wrote:

> On 27.03.2016 05:01, Steven D'Aprano wrote:
>> Am I the only one who has noticed that threading of posts here is
>> severely broken? It's always been the case that there have been a few
>> posts here and there that break threading, but now it seems to be much
>> more common.
> 
> I agree. Didn't we both already have a conversation about this? I
> thought it is my thunderbird messing things up.

I'm not using Thunderbird, so whatever the cause of the problem, it is not
specific to Thunderbird.



-- 
Steven

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading is foobared?

2016-03-29 Thread Sven R. Kunze

On 27.03.2016 05:01, Steven D'Aprano wrote:

Am I the only one who has noticed that threading of posts here is severely
broken? It's always been the case that there have been a few posts here and
there that break threading, but now it seems to be much more common.


I agree. Didn't we both already have a conversation about this? I 
thought it is my thunderbird messing things up.


Best,
Sven
--
https://mail.python.org/mailman/listinfo/python-list


Re: Threading is foobared?

2016-03-27 Thread Tim Golden

On 27/03/2016 07:25, Random832 wrote:

On Sat, Mar 26, 2016, at 23:18, Ben Finney wrote:

What you've demonstrated is that at least one host is violating
communication standards by altering existing reference fields on
messages in transit.


The usenet gateway relays posts that originated on the mailing list to
usenet with their *Message-ID* replaced wholesale with
"" - 90% of the time when I've traced
a broken thread this has been to blame, and I've complained at least
twice of this before, once not two weeks ago.


And I apologise because I saw that complaint and had meant to reply. In 
short, it's not the list owners who manage the gateway but rather the 
mailman administrators for the whole of the python.org lists. I do 
remember some discussion / explanation of possible problems when the 
mailman version was upgraded a few months ago. I'll try to dig those out 
and follow up with the people involved.


FWIW I assume the issue is with the mail -> news gateway as I see no 
problems with the mailing list threading (at least using TB on Windows) 
and the archive doesn't appear to lose threading either AFAICT. Please 
feel free to point out if I'm wrong about that.


TJG
--
https://mail.python.org/mailman/listinfo/python-list


Re: Threading is foobared?

2016-03-27 Thread Random832
On Sat, Mar 26, 2016, at 23:18, Ben Finney wrote:
> What you've demonstrated is that at least one host is violating
> communication standards by altering existing reference fields on
> messages in transit.

The usenet gateway relays posts that originated on the mailing list to
usenet with their *Message-ID* replaced wholesale with
"" - 90% of the time when I've traced
a broken thread this has been to blame, and I've complained at least
twice of this before, once not two weeks ago.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading is foobared?

2016-03-26 Thread Ben Finney
Steven D'Aprano  writes:

> Am I the only one who has noticed that threading of posts here is
> severely broken? It's always been the case that there have been a few
> posts here and there that break threading, but now it seems to be much
> more common.

I can't give an objective assessment of whether it has increased or to
what extent.

Thanks for pointing out an objective symptom, though: munging of the
message ID values in various fields.

The resulting broken threads are a constant source of annoyance. I don't
know whether any one host is the culprit, or whether there are many
hosts that are munging the message IDs.

What you've demonstrated is that at least one host is violating
communication standards by altering existing reference fields on
messages in transit.

-- 
 \  “It is difficult to get a man to understand something when his |
  `\   salary depends upon his not understanding it.” —Upton Sinclair, |
_o__) 1935 |
Ben Finney

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading bug in strptime

2015-10-07 Thread Akira Li
Larry Martell  writes:

> We have been trying to figure out an intermittent problem where a
> thread would fail with this:
>
> AttributeError: 'module' object has no attribute '_strptime'
>
> Even though we were importing datetime. After much banging our heads
> against the wall, we found this:
>
> http://code-trick.com/python-bug-attribute-error-_strptime/
>
> The workaround suggested there, to call strptime before starting your
> threads, seems to have fixed the issue.
>
> I thought I'd mention it here in case anyone else is facing this.

I can reproduce it in Python 2 (but not in Python 3) even with
*threading* module:

  #!/usr/bin/env python
  import threading
  import time

  for _ in range(10):
  threading.Thread(target=time.strptime,
   args=("2013-06-02", "%Y-%m-%d")).start()

Don't use *thread* directly (it is even renamed to *_thread* in Python
3, to discourage an unintended usage), use *threading* module instead.

In Python 3.3+, PyImport_ImportModuleNoBlock()  is deprecated
  https://bugs.python.org/issue9260
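
In code, the warm-up workaround mentioned in the quoted post amounts to 
something like this, applied to the reproduction above:

    import threading
    import time

    time.strptime("2013-06-02", "%Y-%m-%d")   # warm-up call in the main thread

    for _ in range(10):
        threading.Thread(target=time.strptime,
                         args=("2013-06-02", "%Y-%m-%d")).start()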

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading: execute a callback function in the parent thread

2015-03-20 Thread Ian Kelly
On Fri, Mar 20, 2015 at 11:29 AM,  massi_...@msn.com wrote:
 Hi everyone,

 just like the title says I'm searching for a mechanism which allows to call a 
 callback function from a thread, being the callback executed in the parent 
 thread (it can also be the main thread). I'm basically looking for something 
 like the CallAfter method of the wx or the signal slot mechanism of the pyqt. 
 Is there something similar but made only in native python? any workaround is 
 welcome.

There's no general way to tell another thread to execute something,
because that thread is *already* executing something -- you can't just
interrupt it and say "here, do this instead". There has to be some
mechanism in place for scheduling the thing to be executed. You can do
this with the main thread of frameworks like wx because the main
thread in that case is running an event loop which enables the
callback to be scheduled.

If your target thread is not running an event loop or being used as an
executor (either of which should provide a specific API for this),
then you will need to have that thread explicitly checking a callback
queue from time to time to see if there's anything ready for it to
call.
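
A bare-bones sketch of that queue-checking arrangement; call_after and 
the worker body are illustrative names, not an existing API:

    import queue
    import threading
    import time

    callbacks = queue.Queue()          # filled by worker threads, drained by the parent

    def call_after(func, *args):
        callbacks.put((func, args))    # what a worker calls instead of CallAfter

    def worker():
        time.sleep(1)                  # placeholder for the real work
        call_after(print, "done in", threading.current_thread().name)

    threading.Thread(target=worker).start()

    # the parent thread's own loop, checking the queue from time to time
    for _ in range(30):
        try:
            func, args = callbacks.get(timeout=0.1)
            func(*args)                # executed here, in the parent thread
        except queue.Empty:
            pass
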
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading in Python, Please check the script

2015-01-14 Thread sohcahtoa82
On Tuesday, January 13, 2015 at 10:22:32 PM UTC-8, Robert Clove wrote:
 Hi All,
 
 I have made a script in which i have started two thread named thread 1 and 
 thread 2.
 In thread 1 one function will run named func1 and in thread 2 function 2 will 
 run named func 2.
 Thread 1 will execute a command and wait for 60 seconds. 
 Thread 2 will run only till thread 1 is running .
 Again after that the same process continues in while after a break of 80 
 Seconds.
 
 I am a beginner in python.
 Please suggest what all i have done wrong and how to correct it.
 
 
 #!/usr/bin/python
 
 import threading
 import time
 import subprocess
 import datetime
 import os
 import thread
 
 thread.start_new_thread( print_time, (None, None))
 thread.start_new_thread( print_time1, (None, None))
 command= strace -o /root/Desktop/a.txt -c ./server
 final_dir = /root/Desktop
 exitflag = 0
 # Define a function for the thread
 def print_time(*args):
     os.chdir(final_dir)
     print IN first thread
     proc = subprocess.Popen(command,shell=True,stdout=subprocess.PIPE, 
 stderr=subprocess.PIPE)
     proc.wait(70)
     exitflag=1
    
 def print_time1(*args):
     print In second thread
     global exitflag
     while exitflag:
         thread.exit()
     #proc = subprocess.Popen(command1,shell=True,stdout=subprocess.PIPE, 
 sterr=subprocess.PIPE)
 
 
 
 # Create two threads as follows
 try:
     while (1):
         t1=threading.Thread(target=print_time)
         t1.start()
         t2=threading.Thread(target=print_time1)
         t2=start()
         time.sleep(80)
         z = t1.isAlive()
         z1 = t2.isAlive()
         if z:
             z.exit()
         if z1:
             z1.exit()
            threading.Thread(target=print_time1).start()
            threading.Thread(target=print_time1).start()
         print In try
 except:
    print Error: unable to start thread
 
  

In addition to what others have said, it looks like you're designing your 
script to run as root.  This is a Very Bad Idea(tm) and has the potential to 
destroy your system if you make a minor mistake and end up blowing away vital 
system files.

If you want to be able to run the script anywhere but have the results saved in 
your home directory, then look into calling os.path.expanduser('~/Desktop') to 
write to the Desktop directory in your home directory.
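
That is, roughly (final_dir echoing the variable in the script above):

    import os

    final_dir = os.path.expanduser('~/Desktop')     # the current user's Desktop
    output_path = os.path.join(final_dir, 'a.txt')
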
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading in Python, Please check the script

2015-01-14 Thread Dave Angel

On 01/14/2015 01:22 AM, Robert Clove wrote:

Hi All,



In any new thread, you should specify what versions of Python and OS 
you're using.  I'll assume Python 2.7 and Linux for this message.



I have made a script in which i have started two thread named thread 1 and
thread 2.
In thread 1 one function will run named func1 and in thread 2 function 2
will run named func 2.
Thread 1 will execute a command and wait for 60 seconds.
Thread 2 will run only till thread 1 is running .
Again after that the same process continues in while after a break of 80
Seconds.

I am a beginner in python.
Please suggest what all i have done wrong and how to correct it.


#!/usr/bin/python

import threading
import time
import subprocess
import datetime
import os
import thread

thread.start_new_thread( print_time, (None, None))
thread.start_new_thread( print_time1, (None, None))


In these two lines you're referencing a function that hasn't been 
defined yet.  This top-level code should be moved to the end of the 
file, after the if __name__ == "__main__": line


Or just drop it, since you've got a second set of code also trying to 
create new threads.



command= strace -o /root/Desktop/a.txt -c ./server
final_dir = /root/Desktop
exitflag = 0
# Define a function for the thread
def print_time(*args):
 os.chdir(final_dir)
 print IN first thread
 proc = subprocess.Popen(command,shell=True,stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
 proc.wait(70)
 exitflag=1


You just set a local variable, not the global one.  So it won't be 
visible in the other thread.  If you must rebind a top-level variable 
from a function, you need to use the 'global' declaration in your function.
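
Stripped down to just the flag handling, the fix for that particular 
point looks like:

    exitflag = 0

    def print_time(*args):
        global exitflag    # without this, the assignment below creates a local name
        exitflag = 1       # rebinds the module-level flag, visible to other threads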





def print_time1(*args):
 print In second thread
 global exitflag
 while exitflag:
 thread.exit()
 #proc =
subprocess.Popen(command1,shell=True,stdout=subprocess.PIPE,
sterr=subprocess.PIPE)



# Create two threads as follows
try:
 while (1):
 t1=threading.Thread(target=print_time)
 t1.start()
 t2=threading.Thread(target=print_time1)
 t2=start()
 time.sleep(80)
 z = t1.isAlive()
 z1 = t2.isAlive()
 if z:
 z.exit()
 if z1:
 z1.exit()
threading.Thread(target=print_time1).start()
threading.Thread(target=print_time1).start()


What are you trying to do in those two lines?  If nothing else, they'll 
give an indentation error.  But if you fix that, you'll still have the 
potential problem of creating more and more threads as you loop around.



 print In try
except:


Bare excepts are evil.  Your user can't tell what went wrong, from a 
syntax error to the user hitting control-C.  Even if you can't handle a 
particular kind of exception, at least have the courtesy of telling the 
user what went wrong.



print Error: unable to start thread





I'm sure there are other things, but these popped out at me.

--
DaveA
--
https://mail.python.org/mailman/listinfo/python-list


Re: Threading in Python, Please check the script

2015-01-14 Thread Robert Clove
Can u provide me the pseudo script.


On Wed, Jan 14, 2015 at 4:10 PM, Dave Angel da...@davea.name wrote:

 On 01/14/2015 01:22 AM, Robert Clove wrote:

 Hi All,


 In any new thread, you should specify what versions of Python and OS
 you're using.  I'll assume Python 2.7 and Linux for this message.

  I have made a script in which i have started two thread named thread 1 and
 thread 2.
 In thread 1 one function will run named func1 and in thread 2 function 2
 will run named func 2.
 Thread 1 will execute a command and wait for 60 seconds.
 Thread 2 will run only till thread 1 is running .
 Again after that the same process continues in while after a break of 80
 Seconds.

 I am a beginner in python.
 Please suggest what all i have done wrong and how to correct it.


 #!/usr/bin/python

 import threading
 import time
 import subprocess
 import datetime
 import os
 import thread

 thread.start_new_thread( print_time, (None, None))
 thread.start_new_thread( print_time1, (None, None))


 In these two lines you're referencing a function that hasn't been defined
 yet.  This top-level code should be moved to the end of the file, after the
 if __name__ = __main__: line

 Or just drop it, since you've got a second set of code also trying to
 create new threads.

  command= strace -o /root/Desktop/a.txt -c ./server
 final_dir = /root/Desktop
 exitflag = 0
 # Define a function for the thread
 def print_time(*args):
  os.chdir(final_dir)
  print IN first thread
  proc = subprocess.Popen(command,shell=True,stdout=subprocess.PIPE,
 stderr=subprocess.PIPE)
  proc.wait(70)
  exitflag=1


 You just set a local variable, not the global one.  So it won't be visible
 in the other thread.  If you must rebind a top-level variable from a
 function, you need to use the 'global' declaration in your function.



 def print_time1(*args):
  print In second thread
  global exitflag
  while exitflag:
  thread.exit()
  #proc =
 subprocess.Popen(command1,shell=True,stdout=subprocess.PIPE,
 sterr=subprocess.PIPE)



 # Create two threads as follows
 try:
  while (1):
  t1=threading.Thread(target=print_time)
  t1.start()
  t2=threading.Thread(target=print_time1)
  t2=start()
  time.sleep(80)
  z = t1.isAlive()
  z1 = t2.isAlive()
  if z:
  z.exit()
  if z1:
  z1.exit()
 threading.Thread(target=print_time1).start()
 threading.Thread(target=print_time1).start()


 What are you trying to do in those two lines?  If nothing else, they'll
 give an indentation error.  But if you fix that, you'll still have the
 potential problem of creating more and more threads as you loop around.

   print In try
 except:


 Bare excepts are evil.  Your user can't tell what went wrong, from a
 syntax error to the user hitting control-C.  Even if you can't handle a
 particular kind of exception, at least have the courtesy of telling the
 user what went wrong.

  print Error: unable to start thread




 I'm sure there are other things, but these popped out at me.

 --
 DaveA
 --
 https://mail.python.org/mailman/listinfo/python-list

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Threading in Python, Please check the script

2015-01-14 Thread Dave Angel

On 01/14/2015 07:11 AM, Robert Clove wrote:

Can u provide me the pseudo script.


You say you're a beginner.  If so, you shouldn't be trying to use 
threads, which are tricky.  I've been programming for 46 years, and I 
seldom have had to write multi-threading code.


But this looks like a school assignment.  And it looks like the code has 
little to do with the assignment.  So let's break down the assignment a bit.


Your assignment says the two threads will be called thread1 and thread2. 
 So why do you call them t1 and t2?


Your assignment says the two functions will be called func1 and func2. 
So why do you call them print_time and print_time1 ?


The assignment doesn't say anything about running an external 
process.  Since that's also complex and error prone, perhaps you should 
keep it simple till everything else works.


Some general principles:

1) try not to do the same thing in several places.  You create threads 
in 3 places, and two of them are wrong or at least non-optimal.  Keep 
just the one where you do it right.


2) Use useful names for the various variables.  Except of course where 
the assignment specifies the name to be used.  Then use that name.


3) Learn one new concept at a time.  If you're concentrating on threads 
here, don't try to learn subprocess at the same time, unless that's what 
the assignment calls for.


4) Avoid chdir, especially in multithreaded programs.  Use an absolute 
path if you must when dealing with files.  Except for trivial programs, 
changing the working directory is likely to bite you in the foot 
somewhere else in the program.



--
DaveA
--
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-11 Thread Rustom Mody
On Friday, April 11, 2014 10:45:08 AM UTC+5:30, Rustom Mody wrote:
 On Friday, April 11, 2014 2:14:42 AM UTC+5:30, Marko Rauhamaa wrote:
   (1) oversimplification which makes it difficult to extend the design
   and handle all of the real-world contingencies
 
 This I dont...

I meant Dont understand
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-11 Thread Sturla Molden
Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote:

 I have an issue with the use of coroutines. I think they are to evil.
 
 They are to evil ... as what? To evil as chalk is to cheese? Black is to 
 white? Bees are to honey? 

I think coroutines are one of those things that don't fit the human mind.

A class with a couple of queues for input and output is far easier to
comprehend.

Sturla

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-11 Thread Roy Smith
In article 53477f29$0$29993$c3e8da3$54964...@news.astraweb.com,
 Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote:

 I think coroutines are awesome, but like all advanced concepts, sometimes 
 they can be abused, and sometimes they are hard to understand not because 
 they are hard to understand in and of themselves, but because they are 
 being used to do something inherently complicated.

Advanced, perhaps.  But certainly not new.  The first real computer I 
worked on (a pdp-11/45) had a hardware instruction which swapped 
execution contexts.  The documentation described it as being designed to 
support coroutines.  That's a machine which was designed in the early 
1970s.

Heh, Wikipedia's [[Coroutine]] article says, "The term coroutine was 
coined by Melvin Conway in a 1963 paper."

At a high level, threads and coroutines are really very similar.  They 
are both independent execution paths in the same process.  I guess the 
only real difference between them is that thread switching is mediated 
by the operating system, so it can happen anywhere (i.e. at any 
instruction boundary).  Coroutines scheduling is handled in user code, 
so you have a lot more control over when context switches happen.  This 
makes it a lot easier to manage operations which must occur atomically.

They both operate in the same process memory space, so have the 
potential to stomp on each others data structures.  They also both have 
the property that there is flow of control happening which is not 
apparent from a top-down reading of the code (exceptions, to a certain 
extent, have this same problem).  This is fundamentally what makes them 
difficult to understand.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-11 Thread Marko Rauhamaa
Rustom Mody rustompm...@gmail.com:

 On Friday, April 11, 2014 10:45:08 AM UTC+5:30, Rustom Mody wrote:
 On Friday, April 11, 2014 2:14:42 AM UTC+5:30, Marko Rauhamaa wrote:
   (1) oversimplification which makes it difficult to extend the design
   and handle all of the real-world contingencies
 
 This I dont...

 I meant Dont understand

The simplest example: there's no general way to terminate a thread.
Hacks exist for some occasions, but they can hardly be considered
graceful.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-11 Thread Chris Angelico
On Sat, Apr 12, 2014 at 1:36 AM, Marko Rauhamaa ma...@pacujo.net wrote:
 The simplest example: there's no general way to terminate a thread.
 Hacks exist for some occasions, but they can hardly be considered
 graceful.

Having followed python-list for some time now, I have to agree... you
can't terminate a thread, no matter how many posts it has in it!

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-11 Thread Mark Lawrence

On 11/04/2014 16:53, Chris Angelico wrote:

On Sat, Apr 12, 2014 at 1:36 AM, Marko Rauhamaa ma...@pacujo.net wrote:

The simplest example: there's no general way to terminate a thread.
Hacks exist for some occasions, but they can hardly be considered
graceful.


Having followed python-list for some time now, I have to agree... you
can't terminate a thread, no matter how many posts it has in it!

ChrisA



Wro

--
My fellow Pythonistas, ask not what our language can do for you, ask 
what you can do for our language.


Mark Lawrence

---
This email is free from viruses and malware because avast! Antivirus protection 
is active.
http://www.avast.com


--
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-11 Thread Grant Edwards
On 2014-04-11, Roy Smith r...@panix.com wrote:

 At a high level, threads and coroutines are really very similar.  They 
 are both independent execution paths in the same process.  I guess the 
 only real difference between them is that thread switching is mediated 
 by the operating system, so it can happen anywhere (i.e. at any 
 instruction boundary).

That's only true if your threading system has pre-emption. Python's
does, but not all do. If your threading system is cooperative rather
than preemptive, then using coroutines is completely identical to
threading with 2 threads.

 Coroutines scheduling is handled in user code,

As is cooperative multithreading.

-- 
Grant Edwards   grant.b.edwardsYow! My BIOLOGICAL ALARM
  at   CLOCK just went off ... It
  gmail.comhas noiseless DOZE FUNCTION
   and full kitchen!!
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-11 Thread Rustom Mody
On Friday, April 11, 2014 9:06:47 PM UTC+5:30, Marko Rauhamaa wrote:
 Rustom Mody:
  On Friday, April 11, 2014 10:45:08 AM UTC+5:30, Rustom Mody wrote:
 
  On Friday, April 11, 2014 2:14:42 AM UTC+5:30, Marko Rauhamaa wrote:
 
(1) oversimplification which makes it difficult to extend the design
and handle all of the real-world contingencies
  
  This I dont...

 The simplest example: there's no general way to terminate a thread.
 Hacks exist for some occasions, but they can hardly be considered
 graceful.

I was about to say that this is fairly close to my point, viz:

Half-assed support in current languages does not imply any necessary problem
in the idea -- just in the mainstream implementations of it.

Then looking it up I find Go's goroutines have the same issue.
Erlang processes, though, are kill-able much like any unix process
http://www.erlang.org/doc/reference_manual/processes.html#id85098

What does that mean?? I am not quite sure... It may mean that Steven's

 I think coroutines are awesome, but like all advanced concepts, sometimes
 they can be abused, and sometimes they are hard to understand not because
 they are hard to understand in and of themselves, but because they are
 being used to do something inherently complicated.

is right.

Personally my view is the other way round:

Concurrency (generalizing from coroutines) is not hard.
The problem is that imperative programming and concurrency are deeply 
incompatible because imperative programming is almost by definition
sequential programming and concurrency is by definition concurrent,
ie non-sequential

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-11 Thread Marko Rauhamaa
Rustom Mody rustompm...@gmail.com:

 Half-assed support in current languages does not imply any necessary
 problem in the idea -- just in the mainstream implementations of it.

 Then looking it up I find Go's goroutines have the same issue.

The promise of threads is that you only need to consider a single event
out of any given state. Trouble is, in reality, you need to consider a
multitude of events in every state. Threads are not good at presenting
that reality.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-10 Thread Mark Lawrence

On 10/04/2014 00:53, Roy Smith wrote:


Natural language is a wonderfully expressive thing.  I open the window,
stick my head out, look up at the sky, and say, "Raining."  Forget the
pronoun, I don't even have a verb.  And yet everybody understands
exactly what I mean.



In the UK you can stay in bed and say "Raining" and the odds are you'll 
be correct :)


--
My fellow Pythonistas, ask not what our language can do for you, ask 
what you can do for our language.


Mark Lawrence



--
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-10 Thread Chris Angelico
On Thu, Apr 10, 2014 at 6:23 PM, Mark Lawrence breamore...@yahoo.co.uk wrote:
 On 10/04/2014 00:53, Roy Smith wrote:
 Natural language is a wonderfully expressive thing.  I open the window,
 stick my head out, look up at the sky, and say, Raining.  Forget the
 pronoun, I don't even have a verb.  And yet everybody understands
 exactly what I mean.


 In the UK you can stay in bed and say Raining and the odds are you'll be
 correct :)

Is the staying-in-bed part critical to that? The last few times I've
been to England, it's only rained a few times. Granted, I've always
come during your summer, but even so, the rumours suggest that rain
should still be plenty common. We've happily driven a costume rack
down the A53 (twice - once empty, once loaded, if I recall correctly),
without worrying about rain. There were a few times when the terrain
was treacherous (imagine this: you're at the top of a moderately-steep
(probably 1 in 10-20) of rough concrete or asphalt, depending on which
part you jog down, and it's been greased up by vehicles standing
there, and then rained on; and you need to run down it at full speed,
catch the porta-cabin before it closes for the last time this year,
get the DVDs that were being run off for you, and run back up at full
speed, all before a ceremony begins), but other than that, it's been
pretty dry every time we've been there.

But we don't stay in bed much.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-10 Thread Frank Millman

Chris Angelico ros...@gmail.com wrote in message 
news:CAPTjJmq2xx_WG2ymCC0NNqisDO=dnnjhnegpid3de+xeiy5...@mail.gmail.com...
 On Thu, Apr 10, 2014 at 12:30 AM, Frank Millman fr...@chagford.com 
 wrote:


 How does one distinguish betwen 'blocking' and 'non-blocking'? Is it
 either/or, or is it some arbitrary timeout - if a handler returns 
 within
 that time it is non-blocking, but if it exceeds it it is blocking?

 No; a blocking request is one that waits until it has a response, and
 a non-blocking request is one that goes off and does something, and
 then comes back to you when it's done.


Thanks for that clarification - I think I've got it now.

 def nonblocking_query(id):
     print("Finding out who employee #%d is..."%id)
     def nextstep(res):
         print("Employee #%d is %s."%(id,res[0].name))
     db.asyncquery(nextstep, "select name from emp where id=12345")


 In this example, what is 'db.asyncquery'?

 If you mean that you have a separate thread to handle database queries, 
 and
 you use a queue or other message-passing mechanism to hand it the query 
 and
 get the result, then I understand it. If not, can you explain in more
 detail.

 It's an imaginary function that would send a request to the database,
 and then call some callback function when the result arrives. If the
 database connection is via a TCP/IP socket, that could be handled by
 writing the query to the socket, and then when data comes back from
 the socket, looking up the callback and calling it. There's no
 additional thread here.


I need some time to get my head around that, but meanwhile can you resolve 
this stumbling block?

The current version of my program uses HTTP. As I understand it, a client 
makes a connection and submits a request. The server processes the request 
and returns a result. The connection is then closed.

In this scenario, does async apply at all? There is no open connection to 
'select' or 'poll'. You have to ensure that the request handler does not 
block the entire process, so that the main loop is ready to accept more 
connections. But passing the request to a thread for handling seems an 
effective solution.

Am I missing something?

Frank



-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-10 Thread Chris Angelico
On Thu, Apr 10, 2014 at 7:17 PM, Frank Millman fr...@chagford.com wrote:
 The current version of my program uses HTTP. As I understand it, a client
 makes a connection and submits a request. The server processes the request
 and returns a result. The connection is then closed.

 In this scenario, does async apply at all? There is no open connection to
 'select' or 'poll'. You have to ensure that the request handler does not
 block the entire process, so that the main loop is ready to accept more
 connections. But passing the request to a thread for handling seems an
 effective solution.

Let's take this to a slightly lower level. HTTP is built on top of a
TCP/IP socket. The client connects (usually on port 80), and sends a
string like this:

GET /foo/bar/asdf.html HTTP/1.0
Host: www.spam.org
User-Agent: Mozilla/5.0



The server then sends back something like this:

HTTP/1.0 200 OK
Content-type: text/html

<html>
<body>
Hello, world!
</body>
</html>


These are carried on a straight-forward bidirectional stream socket,
so the write and read operations (or send and recv, either way) can
potentially block. With a small request, you can kinda assume that the
write won't block, but the read most definitely will: it'll block
until the server writes something for you.

So it follows the usual model of blocking vs non-blocking. In blocking
mode, you do something like this:

data = socket.read()

and it waits until it has something to return. In non-blocking mode,
you do something like this:

def data_available(socket, data):
    # whatever
socket.set_read_callback(data_available)

An HTTP handling library can then build a non-blocking request handler
on top of that, by having data_available parse out the appropriate
information, and return if it doesn't have enough content yet. So it
follows the same model; you send off the request (and don't wait for
it), and then get notified when the result is there.
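
To make that concrete in Python, here is a rough sketch using the standard selectors
module (purely illustrative; the host and request are just the example above):

import selectors, socket

sel = selectors.DefaultSelector()

def data_available(sock):
    data = sock.recv(4096)        # reported readable, so this shouldn't block now
    if data:
        print("got %d bytes" % len(data))
    else:                         # empty read: the server closed the connection
        sel.unregister(sock)
        sock.close()

sock = socket.create_connection(("www.spam.org", 80))
sock.sendall(b"GET /foo/bar/asdf.html HTTP/1.0\r\nHost: www.spam.org\r\n\r\n")
sock.setblocking(False)
sel.register(sock, selectors.EVENT_READ, data_available)

while sel.get_map():              # main loop: wait, then fire the registered callback
    for key, _ in sel.select():
        key.data(key.fileobj)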

When you write the server, you effectively have the same principle,
with one additional feature: a listening socket becomes readable
whenever someone connects. So you can select() on that socket, just
like you can with the others, and whenever there's a new connection,
you add it to the collection and listen for requests on all of them.
It's basically the same concept; as soon as you can accept a new
connection, you do so, and then go back to the main loop.

It's pretty simple when you let a lower-level library do the work for
you :) The neat thing is, you can put all of this into a single
program; I can't demo it in Python for you, but I have a Pike kernel
that I wrote for my last job, which can handle a variety of different
asynchronous operations: TCP, UDP (which just sends single packets,
normally), a GUI (in theory), timers, the lot. It has convenience
features for creating a DNS server, an HTTP server, and a stateful
line-based server (covers lots of other protocols, like SMTP). And
(though this bit would be hard to port to Python) it can update itself
without shutting down. Yes, it can take some getting your head around,
but it's well worth it.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-10 Thread Frank Millman

Chris Angelico ros...@gmail.com wrote in message 
news:CAPTjJmoWaHPZk=DAxbfJ=9ez2aj=4yf2c8wmbryof5vgn6e...@mail.gmail.com...
 On Thu, Apr 10, 2014 at 7:17 PM, Frank Millman fr...@chagford.com wrote:
 The current version of my program uses HTTP. As I understand it, a client
 makes a connection and submits a request. The server processes the 
 request
 and returns a result. The connection is then closed.

 In this scenario, does async apply at all? There is no open connection to
 'select' or 'poll'. You have to ensure that the request handler does not
 block the entire process, so that the main loop is ready to accept more
 connections. But passing the request to a thread for handling seems an
 effective solution.



[...]

Thanks, Chris - I am learning a lot!

I have skipped the first part of your reply, as it seems to refer to the 
client. I am using a web browser as a client, so I don't have to worry about 
programming that.


 When you write the server, you effectively have the same principle,
 with one additional feature: a listening socket becomes readable
 whenever someone connects. So you can select() on that socket, just
 like you can with the others, and whenever there's a new connection,
 you add it to the collection and listen for requests on all of them.
 It's basically the same concept; as soon as you can accept a new
 connection, you do so, and then go back to the main loop.


This is where it gets interesting. At present I am using cherrypy as a 
server, and I have not checked its internals. However, in the past I have 
dabbled with writing server programs like this -

while self.running:
    try:
        conn, addr = self.s.accept()
        Session(args=(self, conn)).start()
    except KeyboardInterrupt:
        self.shutdown()

In this scenario, the loop blocks on 'accept'.

You seem to be suggesting that I set the socket to 'non-blocking', use 
select() to determine when a client is trying to connect, and then call 
'accept' on it to create a new connection.

If so, I understand your point. The main loop changes from 'blocking' to 
'non-blocking', which frees it up to perform all kinds of other tasks as 
well. It is no longer just a 'web server', but becomes an 'all-purpose 
server'.

Much food for thought!

Frank



-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-10 Thread Chris Angelico
On Thu, Apr 10, 2014 at 9:10 PM, Frank Millman fr...@chagford.com wrote:
 You seem to be suggesting that I set the socket to 'non-blocking', use
 select() to determine when a client is trying to connect, and then call
 'accept' on it to create a new connection.

Right! That's about how it goes. Of course, for production work you'll
want to wrap all that up for convenience, but fundamentally, that is
what happens.
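
In Python, a minimal sketch of that server loop with the standard selectors module
might look like this (illustrative only; the port and echo behaviour are arbitrary):

import selectors, socket

sel = selectors.DefaultSelector()

def accept(server):
    conn, addr = server.accept()          # the listening socket reported readable
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    data = conn.recv(4096)
    if data:
        conn.sendall(b">> " + data)       # fine for a tiny echo; real code would buffer
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", 12345))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:                               # the non-blocking main loop
    for key, _ in sel.select():
        key.data(key.fileobj)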

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-10 Thread Marko Rauhamaa
Frank Millman fr...@chagford.com:

 You seem to be suggesting that I set the socket to 'non-blocking', use
 select() to determine when a client is trying to connect, and then
 call 'accept' on it to create a new connection.

Yes.

 If so, I understand your point. The main loop changes from 'blocking'
 to 'non-blocking', which frees it up to perform all kinds of other
 tasks as well. It is no longer just a 'web server', but becomes an
 'all-purpose server'.

The server will do whatever you make it do.

Other points:

 * When you wake up from select() (or poll(), epoll()), you should treat
   it as a hint. The I/O call (accept()) could still raise
   socket.error(EAGAIN).

 * The connections returned from accept() have to be individually
   registered with select() (poll(), epoll()).

 * When you write() into a connection, you may be able to send only part
   of the data or get EAGAIN. You need to choose a buffering strategy --
   you should not block until all data is written out. Also take into
   account how much you are prepared to buffer.

 * There are two main modes of multiplexing: level-triggered and
   edge-triggered. Only epoll() (and kqueue()) support edge-triggered
   wakeups. Edge-triggered requires more discipline from the programmer
   but frees you from having to tell the multiplexing facility if you
   are interested in readability or writability in any given situation.

   Edge-triggered wakeups are only guaranteed after you have gotten an
   EAGAIN from an operation. Make sure you keep on reading/writing until
   you get an EAGAIN. On the other hand, watch out so one connection
   doesn't hog the process because it always has active I/O to perform.

 * You should always be ready to read to prevent deadlocks.

 * Sockets can be half-closed. Your state machines should deal with the
   different combinations gracefully. For example, you might read an EOF
   from the client socket before you have pushed the response out. You
   must not close the socket before the response has finished writing.
   On the other hand, you should not treat the half-closed socket as
   readable.

 * While a single-threaded process will not have proper race conditions,
   you must watch out for preemption. IOW, you might have Object A call
   a method of Object B, which calls some other method of Object A.
   Asyncio has a task queue facility. If you write your own main loop,
   you should also implement a similar task queue. The queue can then be
   used to make such tricky function calls in a safe context.

 * Asyncio provides timers. If you write your own main loop, you should
   also implement your own timers.

   Note that modern software has to tolerate suspension (laptop lid,
   virtual machines). Time is a tricky concept when your server wakes up
   from a coma.

 * Specify explicit states. Your connection objects should have a data
   member named state (or similar). Make your state transitions
   explicit and obvious in the code. In fact, log them. Resist the
   temptation of deriving the state implicitly from other object
   information.

 * Most states should be guarded with a timer. Make sure to document for
   each state, which timers are running.

 * In each state, check that you handle all possible events and
   timeouts. The state/transition matrix will be quite sizable even for
   seemingly simple tasks.
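
A small Python sketch of the partial-write point above, assuming a made-up
connection object with a non-blocking socket in conn.sock and a bytes buffer
in conn.outbuf:

def try_write(conn):
    # Flush as much of conn.outbuf as the kernel will accept right now.
    while conn.outbuf:
        try:
            sent = conn.sock.send(conn.outbuf)   # may take only part of the data
        except BlockingIOError:                  # EAGAIN: stop and wait for writability
            return
        conn.outbuf = conn.outbuf[sent:]

The same shape works for the read side, with recv() and EAGAIN.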


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-10 Thread Roy Smith
In article 87wqexmmuc@elektro.pacujo.net,
 Marko Rauhamaa ma...@pacujo.net wrote:

  * When you wake up from select() (or poll(), epoll()), you should treat
it as a hint. The I/O call (accept()) could still raise
socket.error(EAGAIN).

People often misunderstand what select() does.  The common misconception 
is that a select()ed descriptor has data waiting to be read.  What the 
man page says is, "A file descriptor is considered ready if it is 
possible to perform the corresponding I/O operation (e.g., read(2)) 
without blocking."  Not blocking includes failing immediately.

And, once you introduce threading, things get even more complicated.  
Imagine two threads, both waiting in a select() call on the same socket.  
Data comes in on that socket.  Both select() calls return.  If both 
threads then do reads on the socket, you've got a race condition.  One 
of them will read the data.  The other will block in the read call, 
because the data has already been read by the other thread!

So, yes, as Marko says, use select() as a hint, but then also do your 
reads in non-blocking mode, and be prepared for them to fail, regardless 
of whether select() said the descriptor was ready.
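
In Python terms, that defensive read is just a try/except around recv() on a socket
that has already been put into non-blocking mode (a sketch; sock is assumed, not
defined here):

def try_read(sock):
    try:
        return sock.recv(4096)
    except BlockingIOError:
        return None    # select() was only a hint; the data may already be gone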

Note that modern software has to tolerate suspension (laptop lid,
virtual machines). Time is a tricky concept when your server wakes up
from a coma.

Not to mention running in a virtual machine.  Time is an equally tricky 
concept when your hardware clock is really some other piece of software 
playing smoke and mirrors.  I once worked on a time-sensitive system 
which was running in a VM.  The idiots who had configured the thing were 
running ntpd in the VM, to keep its clock in sync.  Normally, this is a 
good thing, but they were ALSO using the hypervisor's clock management 
gizmo (vmtools?) to adjust the VM clock.  The two mechanisms were 
fighting with each other, which did really weird stuff to time.

It took me forever to figure out what was going on.  How does one even 
observe that time is moving around randomly?  I eventually ended up 
writing a trivial NTP client in Python (it's only a few lines of code) 
and periodically logging the difference between the local system clock 
and what my NTP reference was telling me.  Of course, figuring out what 
was going on was the easy part.  Convincing the IT drones to fix the 
problem was considerably more difficult.
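
For the curious, a trivial SNTP client really is only a few lines; a rough sketch
(the server name is arbitrary, and this reads only the whole-second part of the
server's transmit timestamp):

import socket, struct, time

NTP_DELTA = 2208988800                    # seconds between the 1900 and 1970 epochs

def ntp_time(server="pool.ntp.org"):
    packet = b'\x1b' + 47 * b'\0'         # NTP version 3, client mode
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(512)
    transmit = struct.unpack("!I", data[40:44])[0]    # server transmit time, seconds
    return transmit - NTP_DELTA

print("offset vs local clock: %.3f s" % (ntp_time() - time.time()))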
 
  * In each state, check that you handle all possible events and
timeouts. The state/transition matrix will be quite sizable even for
seemingly simple tasks.

And, those empty boxes in the state transition matrix which are blank, 
because those transitions are impossible?  Guess what, they happen, and 
you better have a plan for when they do :-)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-10 Thread Mark H Harris

On 4/9/14 12:52 PM, Chris Angelico wrote:


People with a fear of threaded programming almost certainly never grew
up on OS/2. :) I learned about GUI programming thus: Write your
synchronous message handler to guarantee that it will return in an
absolute maximum of 0.1s, preferably a lot less. If you have any sort
of heavy processing to do, spin off a thread.


heh  very true.

Any non trivial OS/2 GUI app required threads.  We had a template at our 
shop that we gave to noobs for copy-n-tweak.  It had not only the basics 
for getting the canvas on the screen with a tool bar and a button, but 
also the minimal code required to setup the thread to handle the button 
event (it was a database lookup in our case).




--
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-10 Thread Sturla Molden
Marko Rauhamaa ma...@pacujo.net wrote:

 Other points:
 
  * When you wake up from select() (or poll(), epoll()), you should treat
it as a hint. The I/O call (accept()) could still raise
socket.error(EAGAIN).
 
  * The connections returned from accept() have to be individually
registered with select() (poll(), epoll()).
 
  * When you write() into a connection, you may be able to send only part
of the data or get EAGAIN. You need to choose a buffering strategy --
you should not block until all data is written out. Also take into
account how much you are prepared to buffer.
 
  * There are two main modes of multiplexing: level-triggered and
edge-triggered. Only epoll() (and kqueue()) support edge-triggered
wakeups. Edge-triggered requires more discipline from the programmer
but frees you from having to tell the multiplexing facility if you
are interested in readability or writability in any given situation.
 
Edge-triggered wakeups are only guaranteed after you have gotten an
EAGAIN from an operation. Make sure you keep on reading/writing until
you get an EAGAIN. On the other hand, watch out so one connection
doesn't hog the process because it always has active I/O to perform.
 
  * You should always be ready to read to prevent deadlocks.
 
  * Sockets can be half-closed. Your state machines should deal with the
different combinations gracefully. For example, you might read an EOF
from the client socket before you have pushed the response out. You
must not close the socket before the response has finished writing.
On the other hand, you should not treat the half-closed socket as
readable.
 
  * While a single-threaded process will not have proper race conditions,
you must watch out for preemption. IOW, you might have Object A call
a method of Object B, which calls some other method of Object A.
Asyncio has a task queue facility. If you write your own main loop,
you should also implement a similar task queue. The queue can then be
used to make such tricky function calls in a safe context.
 
  * Asyncio provides timers. If you write your own main loop, you should
also implement your own timers.
 
Note that modern software has to tolerate suspension (laptop lid,
virtual machines). Time is a tricky concept when your server wakes up
from a coma.
 
  * Specify explicit states. Your connection objects should have a data
member named state (or similar). Make your state transitions
explicit and obvious in the code. In fact, log them. Resist the
temptation of deriving the state implicitly from other object
information.
 
  * Most states should be guarded with a timer. Make sure to document for
each state, which timers are running.
 
  * In each state, check that you handle all possible events and
timeouts. The state/transition matrix will be quite sizable even for
seemingly simple tasks.


And exactly how is getting all of this correct any easier than just using
threads and blocking i/o?

I'd like to see the programmer who can get all of this correct, but has no
idea how to use a queue or mutex without deadlocking.


Sturla

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-10 Thread Chris Angelico
On Fri, Apr 11, 2014 at 1:24 AM, Sturla Molden sturla.mol...@gmail.com wrote:
 And exactly how is getting all of this correct any easier than just using
 threads and blocking i/o?

For a start, nearly everything Marko just posted should be dealt with
by your library. I don't know Python's asyncio as it's very new and I
haven't yet found an excuse to use it, but with Pike, I just engage
backend mode, set callbacks on the appropriate socket/file/port
objects, and let things happen perfectly. All I need to do is check a
few return values (eg if I ask a non-blocking socket to write a whole
pile of data, it might return that it wrote only some of it, in which
case I have to buffer the rest - not hard but has to be done), and
make sure I always return promptly from my callbacks so as to avoid
lagging out other operations. None of the details of C-level APIs
matter to my high level code.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-10 Thread Marko Rauhamaa
Sturla Molden sturla.mol...@gmail.com:

 And exactly how is getting all of this correct any easier than just
 using threads and blocking i/o?

 I'd like to see the programmer who can get all of this correct, but
 has no idea how to use a queue or mutex without deadlocking.

My personal experience is that it is easier to get all of this correct
than threads. I've done it both ways.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-10 Thread Marko Rauhamaa
Chris Angelico ros...@gmail.com:

 For a start, nearly everything Marko just posted should be dealt with
 by your library.

Let's not kid ourselves: it is hard to get any reactive system right.

 I don't know Python's asyncio as it's very new and I haven't yet found
 an excuse to use it, but with Pike, I just engage backend mode, set
 callbacks on the appropriate socket/file/port objects, and let things
 happen perfectly.

That "set callbacks and let things happen" is the hard part. The
framework part is trivial.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-10 Thread Chris Angelico
On Fri, Apr 11, 2014 at 2:25 AM, Marko Rauhamaa ma...@pacujo.net wrote:
 I don't know Python's asyncio as it's very new and I haven't yet found
 an excuse to use it, but with Pike, I just engage backend mode, set
 callbacks on the appropriate socket/file/port objects, and let things
 happen perfectly.

 That set callbacks and let things happen is the hard part. The
 framework part is trivial.

Maybe. Here's a simple self-contained Pike program that makes a simple
echo server - whatever comes in goes out again:


//Create the port (listening connection).
object mainsock=Stdio.Port(12345,accept_callback);

void accept_callback()
{
    //Get the newly-connected socket
    object sock=mainsock->accept();
    //Set up its callbacks
    sock->set_nonblocking(read_callback, write_callback, close_callback);
    //Keep track of metadata (here that'll just be the write buffer)
    sock->set_id((["sock":sock]));
}

//Attempt to write some text, buffering any that can't be written
void write(mapping info, string text)
{
    if (!text || text=="") return;
    if (info->write_me)
    {
        //There's already buffered text. Queue this text too.
        info->write_me += text;
        return;
    }
    int written = info->sock->write(text);
    if (written < 0)
    {
        //Deal with write errors brutally by closing the socket.
        info->sock->close();
        return;
    }
    info->write_me = text[written..];
}

//When more can be written, write it.
void write_callback(mapping info) {write(info, m_delete(info,"write_me"));}

void read_callback(mapping info, string data)
{
    //Simple handling: Echo the text back with a prefix.
    //Note that this isn't line-buffered or anything.
    write(info, ">> " + data);
}

//Not strictly necessary, but if you need to do something when a client
//disconnects, this is where you'd do it.
void close_callback(mapping info)
{
    info->sock = "(disconnected)";
}

//Engage backend mode.
int main() {return -1;}



Setting callbacks? One line. There's a little complexity to the "write
what you can, buffer the rest", but if you're doing anything even a
little bit serious, you'll just bury that away in a mid-level library
function. The interesting part is in the read callback, which does the
actual work (in this case, it just writes back whatever it gets). And
here's how easy it is to make it into a chat server: just replace the
read and close callbacks with these:

multiset(mapping) sockets=(<>);
void read_callback(mapping info, string data)
{
    //Simple handling: Echo the text back with a prefix.
    //Note that this isn't line-buffered or anything.
    sockets[info] = 1;
    write(indices(sockets)[*], ">> " + data);
}

//Not strictly necessary, but if you need to do something when a client
//disconnects, this is where you'd do it.
void close_callback(mapping info)
{
    info->sock = "(disconnected)";
    sockets[info] = 0;
}


If you want to handle more information (maybe get users to log in?),
you just stuff more stuff into the info mapping (it's just like a
Python dict). Handling of TELNET negotiation, line buffering, etc,
etc, can all be added between this and the user-level code - that's
what I did with the framework I wrote for work. Effectively, you just
write one function (I had it double as the read and close callbacks
for simplicity), put a declaration down the bottom to say what port
number you want (hard coded to 12345 in the above code), and
everything just happens. It really isn't hard to get callback-based
code to work nicely if you think about what you're doing.

I expect it'll be similarly simple with asyncio; does someone who's
worked with it feel like implementing similar functionality?
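
For comparison, roughly equivalent echo-server functionality with asyncio might look
like the sketch below (written in today's async/await spelling; the 3.4-era version
would use @asyncio.coroutine and yield from instead):

import asyncio

async def handle(reader, writer):
    # Echo whatever comes in, with a prefix, until the client disconnects.
    while True:
        data = await reader.read(1024)
        if not data:
            break
        writer.write(b">> " + data)
        await writer.drain()              # respect the transport's write buffer
    writer.close()

async def main():
    server = await asyncio.start_server(handle, port=12345)
    async with server:
        await server.serve_forever()

asyncio.run(main())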

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-10 Thread Rustom Mody
On Thursday, April 10, 2014 10:38:49 PM UTC+5:30, Chris Angelico wrote:
 On Fri, Apr 11, 2014 at 2:25 AM, Marko Rauhamaa wrote:
  I don't know Python's asyncio as it's very new and I haven't yet found
  an excuse to use it, but with Pike, I just engage backend mode, set
  callbacks on the appropriate socket/file/port objects, and let things
  happen perfectly.
 
  That set callbacks and let things happen is the hard part. The
  framework part is trivial.
 
 Maybe. Here's a simple self-contained Pike program that makes a simple
 echo server - whatever comes in goes out again:
 

For analogy let me take a 'thought-discussion' between a C programmer and a 
python programmer regarding data structures.


-
PP: Is it not tedious and error prone, C's use of data structures? How/Why do 
you stick to that?
CP: Oh! Is it? And what do you propose I use?
PP: Why python of course! Or any modern language with first class data and 
garbage collection!  Why spend a lifetime tracking malloc errors?!
CP: Oh! Is it? And what is python implemented in?
PP: But that's the whole point!  Once Guido-n-gang have done their thing we are 
unscathed by the bugs that prick and poke and torment you day in day out.
CP: Let's look at this in more detail, shall we?
PP: Very well
CP: You give me any python data structure (so-called) and I'll give it to you 
in C. And note: it's very easy. I just open up the python implementation (it's in 
C in case you forgot) and clean up all the mess that has been added for the 
support of lazy python programmers. In addition, I'll give you a couple more 
data-structures/algorithms that we have easy access to, but which you can only 
use by dropping into C (HeHe!)
PP: You are setting the rules of the game... and winning. I did not say I want 
fancy algorithms and data structures. I said I want (primarily) the safety of 
garbage collection. It's also neat to have an explicit syntax for basic data 
types like lists rather than scrummaging around with struct and malloc and 
pointers (hoo boy!)
CP: Yeah... Like I said, you like to be mollycoddled; we like our power and 
freedom.

---

If I may use somewhat heavy brush-strokes:
Marko (and evidently Chris) are in the CP camp whereas Sturla is in the PP camp.
It's just that 'data structures (and algorithms)' is now replaced by 'concurrency'.

Both these viewpoints assume that the status quo of current (mainstream) 
language support for concurrency is a given and not negotiable. Erlang/Go etc 
disprove this.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-10 Thread Marko Rauhamaa
Rustom Mody rustompm...@gmail.com:

 Marko (and evidently Chris) are in the CP camp whereas Sturla is in
 the PP camp. It's just that 'data structures (and algorithms)' is now
 replaced by 'concurrency'.

 Both these viewpoints assume that the status quo of current
 (mainstream) language support for concurrency is a given and not
 negotiable.

I think you misread me (us?). I'm not trying to make life hard on
myself. Nor am I disparaging fitting abstractions and high-level
utilities.

Threads are an essential tool when used appropriately. However, I do
believe the 90's fad of treating them like a silver bullet of
concurrency was a big mistake. The industry is noticing it, as is
evident in NIO and asyncio.

Threads are enticing in that they make it quick to put together working
prototypes. The difficulties only appear when it's too late to go back.
They definitely are not the high-level abstraction you're looking for.

 Erlang/Go etc disprove this.

URL: http://en.wikipedia.org/wiki/Leonhard_Euler#Personal_philosophy_and_religious_beliefs:

  Sir, (a + b^n)/n = x, hence God exists—reply!

Seriously, Erlang (and Go) have nice tools for managing state machines
and concurrency. However, Python (and C) are perfectly suitable for
clear asynchronous programming idioms. I'm happy that asyncio is
happening after all these long years. It would be nice if it supported
edge-triggered wakeups, but I suppose that isn't supported in all
operating systems.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-10 Thread Rustom Mody
On Friday, April 11, 2014 1:14:01 AM UTC+5:30, Marko Rauhamaa wrote:


 
 
 Seriously, Erlang (and Go) have nice tools for managing state machines
 and concurrency. However, Python (and C) are perfectly suitable for
 clear asynchronous programming idioms. I'm happy that asyncio is
 happening after all these long years. It would be nice if it supported
 edge-triggered wakeups, but I suppose that isn't supported in all
 operating systems.
 

Yes... Let me restate what I hear you as saying.

Let's start with pure uniprocessor machines for ease of discussion (also of 
history).
An OS sits between the uni-hardware and provides 
multi{processing,users,threads,etc}.
How does it do it? By mechanisms such as process-switching, interleaving, etc. 
In short, all the good stuff... that constitutes asyncio (and relations).

What you are saying is that what the OS is doing, you can do better.
Analogous to said C programmer saying that what (data structures) the python 
programmer can make he can do better.

Note I don't exactly agree with Sturla either.
To see that, time-shift the C/Python argument 30 years back, when it was 
imperative languages vs poorly implemented, buggy, interpreted Lisp/Prolog.

In that world, your 'I'd rather do it by hand/work out my state machine'
would make considerable sense.

Analogously, if the only choice were mainstream (concurrency-wise) languages --
C/C++/Java/Python -- + native threads + overheads + ensuing errors/headaches, 
then
the "Please let me work out my state machine and manage my affairs" position would be 
sound.
 
But it's not the only choice!!

 http://en.wikipedia.org/wiki/Leonhard_Euler#Personal_philosophy_and_religious_beliefs
 
  Sir, (a + b^n)/n = x, hence God exists--reply!


I always thought that God exists because e^(i*pi) + 1 = 0 :D
Evidently (s)he has better reasons for existing!
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-10 Thread Marko Rauhamaa
Rustom Mody rustompm...@gmail.com:

 What you are saying is that what the OS is doing, you can do better.
 Analogous to said C programmer saying that what (data structures) the
 python programmer can make he can do better.

I'm sorry, but I don't quite follow you there.

I see the regular multithreaded approach as

 (1) oversimplification which makes it difficult to extend the design
 and handle all of the real-world contingencies

 (2) inviting race conditions carelessly--no mortal is immune.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-10 Thread Sturla Molden

On 10/04/14 21:44, Marko Rauhamaa wrote:

I'm happy that asyncio is
happening after all these long years. It would be nice if it supported
edge-triggered wakeups, but I suppose that isn't supported in all
operating systems.


I have an issue with the use of coroutines. I think they are to evil.

When a man like David Beazley says this

   https://twitter.com/dabeaz/status/440214755764994048

there is something crazy going on.



Sturla

--
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-10 Thread Sturla Molden

On 11/04/14 01:51, Sturla Molden wrote:


I have an issue with the use of coroutines. I think they are to evil.

When a man like David Beazley says this

https://twitter.com/dabeaz/status/440214755764994048

there is something crazy going on.


And why did Python get this Tulip beast, instead of a simple libuv 
wrapper? Node.js has already proven what libuv can do.



Sturla
--
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-10 Thread Terry Reedy

On 4/10/2014 7:51 PM, Sturla Molden wrote:

On 10/04/14 21:44, Marko Rauhamaa wrote:

I'm happy that asyncio is
happening after all these long years. It would be nice if it supported
edge-triggered wakeups, but I suppose that isn't supported in all
operating systems.


I have an issue with the use of coroutines. I think they are to evil.


I think 'magical' is the word you should be looking for.


When a man like David Beazley says this

https://twitter.com/dabeaz/status/440214755764994048

there is something crazy going on.


There is understanding how to use them, and understanding how they work 
in this context. I suspect that DB was talking about the second, deeper 
level.


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-10 Thread Rustom Mody
On Friday, April 11, 2014 2:14:42 AM UTC+5:30, Marko Rauhamaa wrote:
 Rustom Mody:
 
  What you are saying is that what the OS is doing, you can do better.
  Analogous to said C programmer saying that what (data structures) the
  python programmer can make he can do better.
 
 
 
 I'm sorry, but I don't quite follow you there.

Ok let me try again (Please note I am speaking more analogically than logically)

There was a time -- say 1990 -- when there was this choice
 - use C -- a production language with half-assed data structures support
 - use Lisp -- strong support for data structures but otherwise unrealistic

From this world and its world view it's natural to conclude that to choose 
a language with strong data structure support is to choose an unrealistic language.

I was in the thick of this debate then
http://www.the-magus.in/Publications/chor.pdf

This argument is seen to be fallacious once we have languages like python
(and Ruby and Java and Perl and Haskell and ...)

Today we are in the same position vis-a-vis concurrency as we were with 
data structures in 1990.

We have mainstream languages -- Java,C,C++,Python -- with half-assed 
concurrency support. And we have languages like Erlang, Go, Cloud Haskell which 
make concurrency center-stage but are otherwise lacking and unrealistic.

I disagree with you in saying "We can't do better" (than stay within the options
offered by mainstream languages).

As an individual you are probably right.
From a larger systemic pov (hopefully!) not!

I disagree with Sturla in what is considered invariant and what is under one's 
control.

He seems (?) to take hardware as under one's control, and the programming paradigm as not.
I believe that the mileage that can be achieved by working on both is more than
can be achieved by either alone.


 I see the regular multithreaded approach as
  (2) inviting race conditions carelessly--no mortal is immune.

This I understand and concur with

 
  (1) oversimplification which makes it difficult to extend the design
  and handle all of the real-world contingencies

This I dont...
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-10 Thread Steven D'Aprano
On Fri, 11 Apr 2014 01:51:41 +0200, Sturla Molden wrote:

 On 10/04/14 21:44, Marko Rauhamaa wrote:
 I'm happy that asyncio is
 happening after all these long years. It would be nice if it supported
 edge-triggered wakeups, but I suppose that isn't supported in all
 operating systems.
 
 I have an issue with the use of coroutines. I think they are to evil.

They are to evil ... as what? To evil as chalk is to cheese? Black is to 
white? Bees are to honey? 

I think coroutines are awesome, but like all advanced concepts, sometimes 
they can be abused, and sometimes they are hard to understand not because 
they are hard to understand in and of themselves, but because they are 
being used to do something inherently complicated.


 When a man like David Beazley says this
 
 https://twitter.com/dabeaz/status/440214755764994048
 
 there is something crazy going on.

Good lord!!! David Beazley has been consumed by the Dark Side and uses 
Twitter??? There certainly is something crazy going on!



-- 
Steven D'Aprano
http://import-that.dreamwidth.org/
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-09 Thread Frank Millman

Marko Rauhamaa ma...@pacujo.net wrote in message 
news:877g70wg8p@elektro.pacujo.net...
 Dennis Lee Bieber wlfr...@ix.netcom.com:

 That's been my experience too... Threading works for me... My
 attempts at so called asyncio (whatever language) have always led to
 my having to worry about losing data if some handler takes too long to
 return.

 To me, asyncio is closer to a polling interrupt handler, and I
 still need a thread to handle the main processing.

 Yes, asynchronous processing results in complex, event-driven state
 machines that can be hard to get right. However, my experience is that
 that's the lesser evil.

 About a handler taking too long: you need to guard each state with a
 timer. Also, you need then to handle the belated handler after the timer
 has expired.


Can I ask a newbie question here?

I understand that, if one uses threading, each thread *can* block without 
affecting other threads, whereas if one uses the async approach, a request 
handler must *not* block, otherwise it will hold up the entire process and 
not allow other requests to be handled.

How does one distinguish between 'blocking' and 'non-blocking'? Is it 
either/or, or is it some arbitrary timeout - if a handler returns within 
that time it is non-blocking, but if it exceeds it it is blocking?

In my environment, most requests involve a database lookup. I endeavour to 
ensure that a response is returned quickly (however one defines quickly) but 
I cannot guarantee it if the database server is under stress. Is this a good 
candidate for async, or not?

Thanks for any insights.

Frank Millman



-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-09 Thread Mark Lawrence

On 08/04/2014 17:38, Paul Rubin wrote:

Sturla Molden sturla.mol...@gmail.com writes:

As it turns out, if you try hard enough, you can always construct a race
condition, deadlock or a livelock. If you need to guard against it, there
is paradigms like BSP, but not everything fits in. a BSP design.


Software transactional memory (STM) may also be of interest, though
it's not that great a fit with Python.



The pypy folks have been looking at this see 
http://pypy.readthedocs.org/en/latest/stm.html


--
My fellow Pythonistas, ask not what our language can do for you, ask 
what you can do for our language.


Mark Lawrence



--
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-09 Thread Neil D. Cerutti

On 4/8/2014 9:09 PM, Rick Johnson wrote:

I warn you that not only will it impede the interpretation
of your ideas, it will also degrade your ability to think
clearly when expressing yourself and slow (or completely
halt) your linguistic evolution.

 HAVE YOU NOTICED THAT YOUR INNER MONOLOGUE NEVER USES IT?

Indeed!

That's because it is a habitual viral infestation of the
human communication interface.


It strikes me that "that" is not superior to "it". It's ironic that "that" 
would be used in place of "it" in your rant.


Plus Rufus Xavier Sasparilla disagrees with it.

--
Neil Cerutti

--
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-09 Thread Chris Angelico
On Wed, Apr 9, 2014 at 11:23 PM, Frank Millman fr...@chagford.com wrote:
 Can I ask a newbie question here?

You certainly can!

 I understand that, if one uses threading, each thread *can* block without
 affecting other threads, whereas if one uses the async approach, a request
 handler must *not* block, otherwise it will hold up the entire process and
 not allow other requests to be handled.

That would be correct.

 How does one distinguish between 'blocking' and 'non-blocking'? Is it
 either/or, or is it some arbitrary timeout - if a handler returns within
 that time it is non-blocking, but if it exceeds it it is blocking?

No; a blocking request is one that waits until it has a response, and
a non-blocking request is one that goes off and does something, and
then comes back to you when it's done. When you turn on the kettle,
you can either stay there and watch until it's ready to make your
coffee (or, in my case, hot chocolate), or you can go away and come
back when it whistles at you to say that it's boiling. A third option,
polling, is when you put a pot of water on the stove, turn it on, and
then come back periodically to see if it's boiling yet. As the old
saying tells us, blocking I/O is a bad idea with pots of water,
because it'll never return.

 In my environment, most requests involve a database lookup. I endeavour to
 ensure that a response is returned quickly (however one defines quickly) but
 I cannot guarantee it if the database server is under stress. Is this a good
 candidate for async, or not?

No, that's a bad idea, because you have blocking I/O. If you have
multiple threads, it's fine, because the thread that's waiting for the
database will be blocked, and other threads can run (you may need to
ensure that you have separate database connections for your separate
threads); but in an asynchronous system, you want to be able to go and
do something else while you're waiting. Something like this:

def blocking_database_query(id):
    print("Finding out who employee #%d is..."%id)
    res = db.query("select name from emp where id=12345")
    print("Employee #%d is %s."%(id,res[0].name))

def nonblocking_query(id):
    print("Finding out who employee #%d is..."%id)
    def nextstep(res):
        print("Employee #%d is %s."%(id,res[0].name))
    db.asyncquery(nextstep, "select name from emp where id=12345")

This is a common way to do asynchronous I/O. Instead of saying Do
this and give me a result, you say Do this, and when you have a
result, call this function. Then as soon as you've done that, you
return (to some main loop, probably). It's usually a bit more
complicated than this (eg you might need multiple callbacks or
additional arguments in case it times out or otherwise fails - there's
no way to throw an exception into a callback, the way the blocking
query could throw something instead of returning), but that's the
basic concept.
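
One concrete way to get a db.asyncquery-shaped API with nothing but the standard
library is a thread pool future plus a callback; a sketch, where run_blocking_query
is a made-up stand-in for the real (blocking) database call:

from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=4)

def run_blocking_query(sql):
    return [("Fred",)]             # stand-in for the real blocking DB call

def asyncquery(callback, sql):
    # Hand the blocking work to a worker thread and return immediately;
    # the callback fires (in the worker thread) once the result is ready.
    future = pool.submit(run_blocking_query, sql)
    future.add_done_callback(lambda fut: callback(fut.result()))

asyncquery(print, "select name from emp where id=12345")   # prints [('Fred',)] when done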

You may be able to get away with doing blocking operations in
asynchronous mode, if you're confident they'll be fairly fast. But you
have to be really REALLY confident, and it does create assumptions
that can be wrong. For instance, the above code assumes that print()
won't block. You might think "Duh, how can printing to the screen
block?!?", but if your program's output is being piped into something
else, it most certainly can :) If that were writing to a remote
socket, though, it'd be better to perform those operations
asynchronously too: attempt to write to the socket; once that's done,
start the database query; when the database result arrives, write the
response to the socket; when that's done, go back to some main loop.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-09 Thread Marko Rauhamaa
Frank Millman fr...@chagford.com:

 I understand that, if one uses threading, each thread *can* block
 without affecting other threads, whereas if one uses the async
 approach, a request handler must *not* block, otherwise it will hold
 up the entire process and not allow other requests to be handled.

Yes.

 How does one distinguish betwen 'blocking' and 'non-blocking'? Is it 
 either/or, or is it some arbitrary timeout - if a handler returns within 
 that time it is non-blocking, but if it exceeds it it is blocking?

Old-school I/O primitives are blocking by default. Nonblocking I/O is
enabled with the setblocking() method.

In the new asyncio package, I/O is nonblocking by default (I'm sure, but
didn't verify).

 In my environment, most requests involve a database lookup. I
 endeavour to ensure that a response is returned quickly (however one
 defines quickly) but I cannot guarantee it if the database server is
 under stress. Is this a good candidate for async, or not?

Database libraries are notoriously bad for nonblocking I/O. It's nothing
fundamental; it's only that the library writers couldn't appreciate the
worth of async communication. For that, asyncio provides special
support:

   URL: https://docs.python.org/3.4/library/asyncio-dev.html#handle-blocking-functions-correctly
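
In practice that boils down to handing the blocking call to a thread (or process)
pool via run_in_executor; a rough sketch in current syntax, with blocking_query as
a stand-in for the real DB-API call:

import asyncio, time

def blocking_query(emp_id):
    time.sleep(1)                  # pretend this is a slow database lookup
    return "Fred"

async def lookup(emp_id):
    loop = asyncio.get_running_loop()
    # The blocking call runs in the default thread pool; the event loop keeps going.
    name = await loop.run_in_executor(None, blocking_query, emp_id)
    print("Employee #%d is %s." % (emp_id, name))

asyncio.run(lookup(12345))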



Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading

2014-04-09 Thread Frank Millman

Chris Angelico ros...@gmail.com wrote in message 
news:captjjmqwhb8o8vq84mmtv+-rkc3ff1aqdxe5cs8y5gy02kh...@mail.gmail.com...
 On Wed, Apr 9, 2014 at 11:23 PM, Frank Millman fr...@chagford.com wrote:

 How does one distinguish between 'blocking' and 'non-blocking'? Is it
 either/or, or is it some arbitrary timeout - if a handler returns within
 that time it is non-blocking, but if it exceeds it it is blocking?

 No; a blocking request is one that waits until it has a response, and
 a non-blocking request is one that goes off and does something, and
 then comes back to you when it's done.

Does reading from disk count as blocking? Strictly speaking I would have 
thought 'yes'.

In other words, non-blocking implies that everything required to pass off 
the request to a handler and be ready to deal with the next one must already 
be in memory, and it must not rely on communicating with any outside 
resource at all. Is this correct?


 def blocking_database_query(id):
     print("Finding out who employee #%d is..."%id)
     res = db.query("select name from emp where id=12345")
     print("Employee #%d is %s."%(id,res[0].name))

 def nonblocking_query(id):
     print("Finding out who employee #%d is..."%id)
     def nextstep(res):
         print("Employee #%d is %s."%(id,res[0].name))
     db.asyncquery(nextstep, "select name from emp where id=12345")


In this example, what is 'db.asyncquery'?

If you mean that you have a separate thread to handle database queries, and 
you use a queue or other message-passing mechanism to hand it the query and 
get the result, then I understand it. If not, can you explain in more 
detail.

If I have understood correctly, then is there any benefit at all in my going 
async? I might as well just stick with threads for the request handling as 
well as the database handling.

Frank



-- 
https://mail.python.org/mailman/listinfo/python-list

