Re: Multithreading? How?

2023-05-12 Thread Diego Souza
Hi there,

I hope this e-mail still reaches you in time. I have implemented this
architecture a few times, and those systems all still work fine today.
However, your question made me review it and create a small gist.

I suggest you create a thread for every output and input connection. This
makes it easier to focus on reading or writing inside a given object. For
example, in your project, I would separate it into IotReader, IotWriter,
MqttReader, and MqttWriter. Another thing I do, to avoid juggling
multiple locks, semaphores, and so on, is to create a central event loop.
Every message that comes from IotReader or MqttReader gets packed into an
event and sent to a central thread. This is the main gateway thread, and I
would call it Gateway.

I don't know if you have ever programmed on Android, but the Android
framework uses a similar approach to processing data. Whenever you need to
process something, you start a new 'Thread', and when you need to present
the result in the interface you dispatch events until the main thread is
notified and updates the corresponding Views. The point here is: never do
any extensive processing in the main thread, as it will delay everything
else. You probably won't need it now, but if you ever do, make a pool of
workers to process the heavy tasks and keep the Gateway free. In that case,
replace the Threads with multiprocessing.Process as well, since Python
lacks true parallel multithreading.

Regarding thread/process communication, I like to implement this using
Queues. The Gateway class has a main_queue to receive events from
IotReader and MqttReader. IotWriter and MqttWriter each have their own
queue as well. Whenever the Gateway needs to send something to either of
them, it just puts it on their respective queue, which I wrap inside a
method for simplicity.
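
A minimal sketch of that central loop (assuming "from queue import Queue",
an MqttWriter analogous to the IotWriter below, and handler names of my
own choosing - only 'on_iot_event' appears in the gist):

class Gateway:
    def __init__(self):
        self.main_queue = Queue()
        self.iot_writer = IotWriter()
        self.mqtt_writer = MqttWriter()

    def send_to_iot(self, data):        # the writer queues stay wrapped
        self.iot_writer.send(data)      # inside methods, as described above

    def send_to_mqtt(self, data):
        self.mqtt_writer.send(data)

    def run(self):
        while True:                     # the one place that reacts to events
            event, data = self.main_queue.get()
            if event == 'on_iot_event':
                self.send_to_mqtt(data)   # or hand off to a worker pool
            elif event == 'on_mqtt_event':
                self.send_to_iot(data)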

Another benefit of this architecture is the ability to scale to more
connections easily. In the past, I have used this strategy to schedule
tasks for up to about 20 devices (each with an input and output thread). I
believe it could go higher, but I haven't needed to. There are fully
distributed architectures more suitable for hundreds and thousands of
connections, but this is likely not what you need now.

The following is a possible implementation of IotReader. You need to
replace the AnySerialReader class and its read method with the
initialization of your own Bus wrapper. The read method must accept a
timeout parameter so the read can be cancelled properly. The terminate
method is used to shut the thread down cleanly.

class IotReader(Thread):
    def __init__(self, queue_master, name='IotReader'):
        super().__init__()
        self.queue_master = queue_master
        self.queue = Queue()
        self.done = False
        self.name = name
        self.start()

    def terminate(self):
        self.done = True

    def run(self):
        log.info(f"Starting thread for {self.name}")
        serial_reader = AnySerialReader('Serial' + self.name)
        log.info(f"Serial reader for {self.name} initialized")
        while not self.done:
            try:
                data = serial_reader.read(timeout=1)
                if data is None:
                    continue
                # hand the data over to the Gateway's main queue
                self.queue_master.put(('on_iot_event', data))
            except Exception:
                traceback.print_exc(file=sys.stdout)
        log.warning("Terminating IotReader")
        serial_reader.terminate()


The following is a possible implementation of IotWriter. It adds a method
named send that adds new tasks to the queue. The main loop, running inside
the thread, waits for these events and calls write on AnySerialWriter. This
may be a slow operation: the connection may be down, we may need to
reconnect, and so on. This is why we need a thread for the outgoing
messages as well.

class IotWriter(Thread):
    def __init__(self, name='IotWriter'):
        super().__init__()
        self.queue = Queue()
        self.done = False
        self.name = name
        self.start()

    def terminate(self):
        self.done = True
        self.queue.put(('terminate', None))

    def send(self, data):
        self.queue.put(('write_message', data))

    def run(self):
        log.info(f"Starting thread for {self.name}")
        serial_writer = AnySerialWriter('Serial' + self.name)
        log.info(f"Serial writer for {self.name} initialized")
        while not self.done:
            try:
                action, data = self.queue.get()
                if action == 'terminate':
                    break
                elif action == 'write_message':
                    serial_writer.write(data)
                else:
                    log.error(f'Unknown action for IotWriter - '
                              f'action={action}, data={data}')
            except Exception:
                traceback.print_exc(file=sys.stdout)

Re: Multithreading python,two tkinter windows

2015-11-01 Thread Chris Angelico
On Mon, Nov 2, 2015 at 1:05 AM, Vindhyachal Takniki
 wrote:
> #get reading at every 1 second
> def get_analog_1(thread_name):
>     global read_ok_1, current_time_1, analog_1
>     while True:
>         if((time.time() - current_time_1) > 1):
>             if(0 == read_ok_1):
>                 current_time_1 = time.time();
>                 read_ok_1 = 1;
>                 analog_1 = randint(0,100)

Never do this. It's called a "busy-wait", and not only does it
saturate your Python thread, it also saturates your entire system -
this is going to keep one CPU core permanently busy going "are we
there yet? are we there yet?" about the one-second delay. Instead, use
time.sleep(), which will delay your thread by one second, allowing
other threads to run.
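
A sketch of the same loop with the busy-wait removed (same globals as the
original; the hand-rolled timer becomes unnecessary):

import time
from random import randint

def get_analog_1(thread_name):
    global read_ok_1, analog_1
    while True:
        time.sleep(1)            # yields the CPU; other threads get to run
        if read_ok_1 == 0:
            read_ok_1 = 1
            analog_1 = randint(0, 100)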

Better still, think about your code in terms of events. Most GUI
libraries these days are built around an event-driven model, and the
notion of "make this event happen 1 second from now" or "10 seconds
from now" is a very common one.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Multithreading python,two tkinter windows

2015-11-01 Thread Laura Creighton
In a message of Sun, 01 Nov 2015 06:05:58 -0800, Vindhyachal Takniki writes:
>I have made a python code & using multithreading in it. this is very basic 
>code, not using queues & other stuff.

This is your problem.
The code that uses queues is more basic.
For tkinter you cannot use threads like you do.
You must have one controlling thread, and several worker threads.
The best way to do this is using a queue.

http://code.activestate.com/recipes/82965-threads-tkinter-and-asynchronous-io/

(written by a friend of mine, wandering around in the next room)

outlines how to set this up for all your tkinter tasks.
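
The core pattern from that recipe, in minimal form (a sketch, using
Python 3 names; on Python 2 the modules are Tkinter and Queue):

import queue
import threading
import time
import tkinter as tk

q = queue.Queue()

def worker():
    while True:                  # worker threads only touch the queue
        time.sleep(1)
        q.put(time.ctime())

root = tk.Tk()
label = tk.Label(root, text="waiting")
label.pack()

def poll_queue():                # all GUI work stays in the main thread
    try:
        while True:
            label.config(text=q.get_nowait())
    except queue.Empty:
        pass
    root.after(100, poll_queue)

threading.Thread(target=worker, daemon=True).start()
root.after(100, poll_queue)
root.mainloop()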

Laura


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Multithreading python,two tkinter windows

2015-11-01 Thread Terry Reedy

On 11/1/2015 9:05 AM, Vindhyachal Takniki wrote:

I have made a python code & using multithreading in it. this is very basic code, 
not using queues & other stuff.


You can run multiple windows, or one window with multiple panes, in one 
thread with one event loop.  Best to do gui stuff in the main thread and 
only use other threads for things that would block.  I see that you 
already know how to use .after to loop.  You can have multiple .after 
loops, with different delays, in one thread.
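
For instance, two independent .after loops with different delays, both in
the main thread (a sketch):

import tkinter as tk    # Tkinter on Python 2

root = tk.Tk()

def fast_loop():
    print("fast tick")           # e.g. poll a sensor
    root.after(100, fast_loop)   # run again 0.1 seconds from now

def slow_loop():
    print("slow tick")           # e.g. refresh a display
    root.after(1000, slow_loop)  # run again 1 second from now

fast_loop()
slow_loop()
root.mainloop()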


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: multithreading in python

2013-08-13 Thread Steven D'Aprano
On Tue, 13 Aug 2013 01:06:01 -0700, samaneh.yahyapour wrote:

 hi
 my program work by 4 thread but when i use more thread it terminates

Is that a problem? Isn't it supposed to terminate, when it has finished?

If it raises an exception, or crashes, you should tell us.



 i use opencv in my image_process.so
 
 my code is :
 
 
 #!/usr/bin/python
 import sys
 import os
 import io
 import time
 import copy
 import threading
 import ctypes
 
 class MyClass():
 
 def __init__(self):
 i = 0
 while i < 10:
 thread1 = threading.Thread(target=self.item_thread)
 thread1.start()
 i = i+1
 time.sleep(0.01)


This is better written as:

def __init__(self):
self.threads = []
for i in range(10):
thread = threading.Thread(target=self.item_thread)
thread.start()
self.threads.append(thread)
time.sleep(0.01)  # not sure this helps for anything


I think it will also help if you keep references to the threads. That 
will stop them from being garbage collected unexpectedly, and you can 
check their status before exiting the main thread.


 def item_thread(self):
     imageAnalyzer = ctypes.CDLL("../so/image_process.so")
     imageAnalyzer.aref_img_score_init("/opt/amniran/etc/face.xml",
                                       "/opt/amniran/etc/porn.xml")
     for filename in os.listdir("../script/images/"):
         if filename[-4:] == ".jpg" or filename[-4:] == ".png" or \
            filename[-4:] == ".gif" or filename[-5:] == ".jpeg":
             path = "../script/images/%s" % filename
             fo = file(path, "r")
             content = fo.read()
             score = imageAnalyzer.score_image(content, len(content))
             print "%d : %s" % (score, path)
     print "EN"
 

 x = MyClass()


I suspect that when the main thread exits, and your other threads are 
still running, you may be in trouble. But I'm not a threading expert, so 
I could be wrong. However, I would put something like this at the end:

for thread in x.threads:
    thread.join()


that way the main thread cannot finish until each of the subthreads has.



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading in python

2013-08-13 Thread Dave Angel
samaneh.yahyap...@gmail.com wrote:

 hi 
 my program work by 4 thread but when i use more thread it terminates

 

I simplified your code so anybody could run it, and tested it inside
Komodo IDE, on Python 2.7

#!/usr/bin/env python


import sys
import os
import time
import threading

class MyClass():

    def __init__(self):
        i = 0
        while i < 10:
            work = WorkClass(i)
            thread1 = threading.Thread(target=work.item_thread)
            thread1.start()
            i = i + 1
            time.sleep(0.01)

class WorkClass():
    def __init__(self, parm):
        self.parm = str(parm)

    def item_thread(self):
        print "beginning thread", self.parm
        for filename in os.listdir("."):
            data = " thread " + self.parm + "  filename " + filename + "\n"
            print data
            time.sleep(0.5)
        print "EN " + self.parm

x = MyClass()
print "Finishing main thread"

When the
-- 
Signature file not found

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading in python

2013-08-13 Thread Dave Angel
samaneh.yahyap...@gmail.com wrote:

 hi 
 my program work by 4 thread but when i use more thread it terminates
snip
 how can i solve this problem 

I simplified the code so I could actually run it, and tested it in
Python 2.7, both under Komodo IDE and in the terminal.

The code:

#!/usr/bin/env python


import sys
import os
import time
import threading

class MyClass():

    def __init__(self):
        self.threads = []
        i = 0
        while i < 10:
            work = WorkClass(i)
            thread1 = threading.Thread(target=work.item_thread)
            self.threads.append(thread1)
            thread1.start()
            i = i + 1
            time.sleep(0.01)
        for thread in self.threads:   # wait for all the threads to end
            thread.join()

class WorkClass():
    def __init__(self, parm):
        self.parm = str(parm)

    def item_thread(self):
        print "beginning thread", self.parm
        for filename in os.listdir("."):
            data = " thread " + self.parm + "  filename " + filename + "\n"
            print data
            time.sleep(0.5)
        print "EN " + self.parm

x = MyClass()
print "All done with main thread"

The original code couldn't tell us which threads were running, so the
only clue you got was how many times it printed "EN".  So I arranged
that each thread had a unique parm value.  Typically you must do
something like this so that all the threads aren't doing exactly the
same work.

Another thing I did was to atomicize the print statements.  As it
originally was, partial lines from different threads could be
intermixed in the output.

The IDE showed the error message:


ERROR: dbgp.client: 
The main thread of this application is exiting while there are still threads
alive. When the main thread exits, it is system defined whether the other
threads survive.

See Caveats at http://docs.python.org/lib/module-thread.html


which tells you what's wrong.  You need to do a join on the threads
before exiting.

It's traditional (and better) to derive your own class from
threading.Thread, and that's where you can store any additional
attributes that each thread will need. I demonstrated something I
figured was simpler, by making the item_thread() method part of a
separate class.
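
A minimal sketch of that traditional approach (the names are mine):

import threading

class Worker(threading.Thread):
    def __init__(self, parm):
        threading.Thread.__init__(self)
        self.parm = str(parm)    # per-thread state lives on the instance

    def run(self):               # invoked by .start()
        print "thread " + self.parm + " running"

threads = [Worker(i) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()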

-- 
DaveA


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading in python

2013-08-13 Thread Terry Reedy

On 8/13/2013 4:06 AM, samaneh.yahyap...@gmail.com wrote:

Aside from the other comments...


 def item_thread(self):
     imageAnalyzer = ctypes.CDLL("../so/image_process.so")
     imageAnalyzer.aref_img_score_init("/opt/amniran/etc/face.xml",
                                       "/opt/amniran/etc/porn.xml")
     for filename in os.listdir("../script/images/"):
         if filename[-4:] == ".jpg" or filename[-4:] == ".png" or \
            filename[-4:] == ".gif" or filename[-5:] == ".jpeg":

             path = "../script/images/%s" % filename

             fo = file(path, "r")


Use 'open' instead of the deprecated 'file' (removed in 3.x) and use it 
with a 'with' statement (this is now standard). This closes the file at 
the end of the block. Not doing so can cause problems on other 
implementations.


with open(path, 'rb') as fo:

 content = fo.read()



     score = imageAnalyzer.score_image(content, len(content))
     print "%d : %s" % (score, path)
 print "EN"


Do you know how to use queue.Queue to spread work to multiple worker 
threads so each processes different files (and to collect results from 
multiple threads)? (If not, read the docs.)
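
A minimal sketch of that pattern (queue.Queue naming as in Python 3; the
module is called Queue in the Python 2 code above, and the "scoring" here
is a stand-in):

import os
import queue
import threading

task_q = queue.Queue()
result_q = queue.Queue()

def worker():
    while True:
        path = task_q.get()
        if path is None:                     # sentinel: no more work
            break
        result_q.put((path, len(path)))      # stand-in for real scoring

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for name in os.listdir("."):
    task_q.put(name)
for t in threads:
    task_q.put(None)                         # one sentinel per worker
for t in threads:
    t.join()
while not result_q.empty():
    print(result_q.get())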


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading

2012-04-08 Thread Kiuhnm

On 4/8/2012 7:04, Bryan wrote:

Kiuhnm wrote:

My question is this: can I use 'threading' without interfering with the
program which will import my module?


Yes. The things to avoid are described at the bottom of:
http://docs.python.org/library/threading.html

On platforms without threads, 'import threading' will fail. There's a
standard library module dummy_threading which offers fake versions of
the facilities in threading. It suggests:

try:
 import threading as _threading
except ImportError:
 import dummy_threading as _threading


I have a decorator which takes an optional argument that tells me 
whether I should use locks.
Maybe I could import 'threading' only if needed. If the user wants to 
use locks I'll assume that 'threading' is available on his/her system. 
By the way, should I avoid importing 'threading' more than once?


Thank you so much for answering my question.

Kiuhnm
--
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading

2012-04-08 Thread Bryan
Kiuhnm wrote:
 I have a decorator which takes an optional argument that tells me
 whether I should use locks.
 Maybe I could import 'threading' only if needed. If the user wants to
 use locks I'll assume that 'threading' is available on his/her system.

Use of dummy_threading might be cleaner. It has things with the same
names as the locks in real threading, and that you can create and call
just like locks, but they don't actually do anything.

 By the way, should I avoid importing 'threading' more than once?

No; threading imports like any other competent module. The tricky part
doesn't start until you actually use its facilities.

-Bryan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading

2012-04-07 Thread Bryan
Kiuhnm wrote:
 I'm about to write my first module and I don't know how I should handle
 multithreading/-processing.
 I'm not doing multi-threading inside my module. I'm just trying to make
 it thread-safe so that users *can* do multi-threading.

There are a couple conventions to follow. Trying to anticipate the
threading needs of users and then locking everything for the worst
case is a bad idea.

So what are the conventions? Unless documented otherwise, classes
don't guarantee that each instance can be used by more than one
thread. Most of the classes in Python's standard library are not one-
instance-multiple-threads safe. An important documented exception is
queue.Queue.

Classes should be safe for instance-per-thread multi-threading, unless
documented otherwise. Likewise, functions should be thread safe under
the assumption that their arguments are not shared between threads,
which brings us to your example:

 For instance, let's say I want to make this code thread-safe:

 ---
 myDict = {}

 def f(name, val):
      if name not in myDict:
          myDict[name] = val
      return myDict[name]
 ---

First, don't re-code Python's built-ins. The example is a job for
dict.setdefault(). Language built-ins are already thread-safe (at
least in CPython), though not meant as thread synchronization
primitives.
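
That is, the whole example collapses to one call (a sketch):

myDict = {}

def f(name, val):
    # stores val only if name is absent, then returns the stored value
    return myDict.setdefault(name, val)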

Second, the example suggests no obvious reason for the single global
variable. It could be a class for which users can make any number of
instances.

Third, there are cases where you want a single global. Most of the
time I'd recommend warning users about threading assumptions.

-Bryan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading

2012-04-07 Thread Kiuhnm

On 4/7/2012 22:09, Bryan wrote:

For instance, let's say I want to make this code thread-safe:

---
myDict = {}

def f(name, val):
  if name not in myDict:
  myDict[name] = val
  return myDict[name]
---


First, don't re-code Python's built-ins. The example is a job for
dict.setdefault().

[...]

That was just an example for the sake of the discussion.
My question is this: can I use 'threading' without interfering with the 
program which will import my module?


Kiuhnm
--
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading

2012-04-07 Thread Adam Skutt
On Apr 7, 5:06 pm, Kiuhnm kiuhnm03.4t.yahoo.it wrote:
 On 4/7/2012 22:09, Bryan wrote: For instance, let's say I want to make this 
 code thread-safe:

  ---
  myDict = {}

  def f(name, val):
        if name not in myDict:
            myDict[name] = val
        return myDict[name]
  ---

  First, don't re-code Python's built-ins. The example is a job for
  dict.setdefault().

 [...]

 That was just an example for the sake of the discussion.
 My question is this: can I use 'threading' without interfering with the
 program which will import my module?

'import threading' ought to work everywhere, but that's not enough to
tell you whether whatever you're trying to do will actually work.
However, you shouldn't need to do it unless your application is meant
to /only/ be used in applications that have done 'import threading'
elsewhere.  Otherwise, you probably have a pretty serious design
issue.

Global state is bad.  TLS state is little better, even if it's common
in a lot of python modules.  Non-thread-safe object instances are
usually fine.  Object construction needs to be thread-safe, but that's
also the default behavior.  You need not worry about it unless you're
doing very unusual things.

Plainly, most of the time you shouldn't need to do anything to support
multiples threads beyond avoiding global state.  In fact, you should
stop and give some serious thought to your design if you need to do
anything else.

Adam
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading

2012-04-07 Thread Bryan
Kiuhnm wrote:
 My question is this: can I use 'threading' without interfering with the
 program which will import my module?

Yes. The things to avoid are described at the bottom of:
http://docs.python.org/library/threading.html

On platforms without threads, 'import threading' will fail. There's a
standard library module dummy_threading which offers fake versions of
the facilities in threading. It suggests:

try:
import threading as _threading
except ImportError:
import dummy_threading as _threading


--Bryan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multithreading

2011-12-26 Thread Ian Kelly
On Mon, Dec 26, 2011 at 11:31 AM, Yigit Turgut y.tur...@gmail.com wrote:
 I have a loop as following ;

 start = time.time()
 end = time.time() - start
 while(end < N):
     data1 = self.chan1.getWaveform()
     end = time.time() - start
     timer.tick(10)  #FPS
     screen.fill((255,255,255) if white else(0,0,0))
     white = not white
     pygame.display.update()
     for i in range(self.size):
         end = time.time() - start
         f.write("%3.8f\t%f\n" % (end, data1[i]))

 Roughly speaking, this loop displays something at 10 frames per second
 and writes data1 to a file with timestamps.

 At first loop data1 is grabbed but to grab the second value (second
 loop) it needs to wait for timer.tick to complete. When I change FPS
 value [timer.tick()], capturing period (time interval between loops)
 of data1 also changes. What I need is to run ;

          timer.tick(10)  #FPS
          screen.fill((255,255,255) if white else(0,0,0))
          white = not white
          pygame.display.update()

 for N seconds but this shouldn't affect the interval between loops
 thus I will be able to continuously grab data while displaying
 something at X fps.

 What would be an effective workaround for this situation ?

You essentially have two completely independent loops that need to run
simultaneously with different timings.  Sounds like a good case for
multiple threads (or processes if you prefer, but these aren:

def write_data(self, f, N):
start = time.time()
while self.has_more_data():
data1 = self.chan1.getWaveform()
time.sleep(N)
for i in range(self.size):
end = time.time() - start
            f.write("%3.8f\t%f\n" % (end, data1[i]))

def write_data_with_display(self, f, N, X):
thread = threading.Thread(target=self.write_data, args=(f, N))
thread.start()
white = False
while thread.is_alive():
timer.tick(X)
screen.fill((255, 255, 255) if white else (0, 0, 0))
white = not white
pygame.display.update()
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multithreading

2011-12-26 Thread Ian Kelly
On Mon, Dec 26, 2011 at 1:01 PM, Ian Kelly ian.g.ke...@gmail.com wrote:
 You essentially have two completely independent loops that need to run
 simultaneously with different timings.  Sounds like a good case for
 multiple threads (or processes if you prefer, but these aren:

I accidentally sent before I was finished.  I was saying or processes
if you prefer, but these aren't CPU-bound, so why complicate things?

Cheers,
Ian
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multithreading

2011-12-26 Thread Yigit Turgut
On Dec 26, 10:03 pm, Ian Kelly ian.g.ke...@gmail.com wrote:
 On Mon, Dec 26, 2011 at 1:01 PM, Ian Kelly ian.g.ke...@gmail.com wrote:
  You essentially have two completely independent loops that need to run
  simultaneously with different timings.  Sounds like a good case for
  multiple threads (or processes if you prefer, but these aren:

 I accidentally sent before I was finished.  I was saying or processes
 if you prefer, but these aren't CPU-bound, so why complicate things?

 Cheers,
 Ian

I had thought the same workaround but unfortunately loop is already
under a def ;

def writeWaveform(self, fo, header=''):
    data1 = numpy.zeros(self.size)
    screen = pygame.display.set_mode((0, 0), pygame.FULLSCREEN)
    timer = pygame.time.Clock()
    white = True
    fo.write(header)
    start = time.time()
    end = time.time() - start
    while(end < 10):
        data1 = self.chan1.getWaveform()
        end = time.time() - start
        timer.tick(10) #FPS
        screen.fill((255,255,255) if white else(0,0,0))
        white = not white
        pygame.display.update()
        for i in range(self.size):
            end = time.time() - start
            fo.write("%3.8f\t%f\n" % (end, data1[i]))
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multithreading

2011-12-26 Thread Ian Kelly
On Mon, Dec 26, 2011 at 1:13 PM, Yigit Turgut y.tur...@gmail.com wrote:
 I had thought the same workaround but unfortunately loop is already
 under a def ;

So nest the functions, or refactor it.  Either way, that shouldn't be
a significant obstacle.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multithreading

2011-12-26 Thread Yigit Turgut
On Dec 26, 10:01 pm, Ian Kelly ian.g.ke...@gmail.com wrote:
 On Mon, Dec 26, 2011 at 11:31 AM, Yigit Turgut y.tur...@gmail.com wrote:
  I have a loop as following ;

  start = time.time()
  end = time.time() - start
   while(end < N):
           data1 = self.chan1.getWaveform()
           end = time.time() - start
           timer.tick(10)  #FPS
           screen.fill((255,255,255) if white else(0,0,0))
           white = not white
           pygame.display.update()
           for i in range(self.size):
               end = time.time() - start
               f.write("%3.8f\t%f\n" % (end, data1[i]))

  Roughly speaking, this loop displays something at 10 frames per second
  and writes data1 to a file with timestamps.

  At first loop data1 is grabbed but to grab the second value (second
  loop) it needs to wait for timer.tick to complete. When I change FPS
  value [timer.tick()], capturing period (time interval between loops)
  of data1 also changes. What I need is to run ;

           timer.tick(10)  #FPS
           screen.fill((255,255,255) if white else(0,0,0))
           white = not white
           pygame.display.update()

  for N seconds but this shouldn't effect the interval between loops
  thus I will be able to continuously grab data while displaying
  something at X fps.

  What would be an effective workaround for this situation ?

 You essentially have two completely independent loops that need to run
 simultaneously with different timings.  Sounds like a good case for
 multiple threads (or processes if you prefer, but these aren:

 def write_data(self, f, N):
     start = time.time()
     while self.has_more_data():
         data1 = self.chan1.getWaveform()
         time.sleep(N)
         for i in range(self.size):
             end = time.time() - start
             f.write("%3.8f\t%f\n" % (end, data1[i]))

Why is there an N variable in the write_data function? N is related to
timer.tick(N), which is related to the display function. time.sleep(N)
will pause writing to the file for the specified amount of time, which is
exactly what I am trying to avoid.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multithreading

2011-12-26 Thread Ian Kelly
On Dec 26, 2011 4:13 PM, Yigit Turgut y.tur...@gmail.com wrote:
 Why is there N variable in write_data function ? N is related to
 timer.tick(N) which is related to display function ? time.sleep(N)
 will pause writing to file for specified amount of time which is
 exactly what I am trying to avoid.

My understanding from your first post was that the pygame loop was supposed
to run at X FPS while the other loop was supposed to run every N seconds.
If that is not the case and the data loop is supposed to run as fast as
possible, then you don't need threads for this.  Just stream the data in a
tight loop, and on each iteration check the timer for how much time has
elapsed to determine whether to run the pygame code without using
timer.tick.  If not enough time has elapsed for your target framerate, just
skip that part.
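
A sketch of that single-loop approach, reusing the names from the posts
above (untested; next_frame is my own variable):

def writeWaveform(self, f, N, X):
    start = time.time()
    next_frame = start
    white = True
    while time.time() - start < N:
        data1 = self.chan1.getWaveform()       # stream as fast as possible
        now = time.time()
        for i in range(self.size):
            f.write("%3.8f\t%f\n" % (now - start, data1[i]))
        if now >= next_frame:                  # a frame is due
            screen.fill((255, 255, 255) if white else (0, 0, 0))
            white = not white
            pygame.display.update()
            next_frame += 1.0 / X              # target X frames per second
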
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading, performance, again...

2010-01-18 Thread Oktaka Com
On 30 December 2009, 17:44, mk mrk...@gmail.com wrote:
 Hello everyone,

 I have figured out (sort of) how to do profiling of multithreaded
 programs with cProfile, it goes something like this:

 #!/usr/local/bin/python

 import cProfile
 import threading

 class TestProf(threading.Thread):
      def __init__(self, ip):

          threading.Thread.__init__(self)
          self.ip = ip

      def run(self):
          prof = cProfile.Profile()
          retval = prof.runcall(self.runmethod)
          prof.dump_stats('tprof' + self.ip)

      def runmethod(self):
          pass

 tp = TestProf('10.0.10.10')

 tp.start()
 tp.join()

 The problem is, now that I've done profiling in the actual program
 (profiled version here: http://python.domeny.com/cssh_profiled.py) with
 9 threads and added up stats (using pstats.Stats.add()), the times I get
 are trivial:

   p.strip_dirs().sort_stats('cumulative').print_stats(10)
 Wed Dec 30 16:23:59 2009    csshprof9.156.44.113
 Wed Dec 30 16:23:59 2009    csshprof9.156.46.243
 Wed Dec 30 16:23:59 2009    csshprof9.156.46.89
 Wed Dec 30 16:24:00 2009    csshprof9.156.47.125
 Wed Dec 30 16:24:00 2009    csshprof9.156.47.17
 Wed Dec 30 16:24:00 2009    csshprof9.156.47.29
 Wed Dec 30 16:24:01 2009    csshprof9.167.41.241
 Wed Dec 30 16:24:02 2009    csshprof9.168.119.15
 Wed Dec 30 16:24:02 2009    csshprof9.168.119.218

           39123 function calls (38988 primitive calls) in 6.004 CPU seconds

     Ordered by: cumulative time
     List reduced from 224 to 10 due to restriction 10

     ncalls  tottime  percall  cumtime  percall filename:lineno(function)
          9    0.000    0.000    6.004    0.667 cssh.py:696(runmethod)
        100    0.004    0.000    5.467    0.055 threading.py:389(wait)
         82    0.025    0.000    5.460    0.067 threading.py:228(wait)
        400    5.400    0.013    5.400    0.013 {time.sleep}
          9    0.000    0.000    5.263    0.585 cssh.py:452(ssh_connect)
          9    0.003    0.000    5.262    0.585 client.py:226(connect)
          9    0.001    0.000    2.804    0.312
 transport.py:394(start_client)
          9    0.005    0.001    2.254    0.250 client.py:391(_auth)
         18    0.001    0.000    2.115    0.117
 transport.py:1169(auth_publickey)
         18    0.001    0.000    2.030    0.113
 auth_handler.py:156(wait_for_response)

 <pstats.Stats instance at 0xb7ebde8c>

 It's not burning CPU time in the main thread (profiling with cProfile
 indicated smth similar to the above), it's not burning it in the
 individual worker threads - so where the heck is it burning this CPU
 time? Because 'top' shows heavy CPU load during most of the time the
 program runs.

 help...

 regards,
 mk

See http://code.google.com/p/yappi/ if you want to profile
multithreaded python app.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading, performance, again...

2009-12-31 Thread Antoine Pitrou

   39123 function calls (38988 primitive calls) in 6.004 CPU
   seconds
 
[...]
 
 It's not burning CPU time in the main thread (profiling with cProfile
 indicated smth similar to the above), it's not burning it in the
 individual worker threads

What do you mean, it's not burning CPU time? The profile output above 
seems to suggest 6 CPU seconds were burnt.

By the way, I don't think running several concurrent profile sessions 
dumping *to the same stats file* is supported, consider using a separate 
stats file for each thread or the results may very well be bogus.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading in python

2009-05-12 Thread Almar Klein
See the standard help on the threading and thread module.

Almar

2009/5/12 shruti surve sasu...@gmail.com:
 hi,
  how to do multithreading in python??? Like running dialog box and running
 xml rpc calls simultaneously???


 regards
 shruti

 --
 http://mail.python.org/mailman/listinfo/python-list


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multithreading / multiprocess

2009-04-12 Thread TennesseeLeeuwenburg
On Apr 11, 10:39 pm, Diez B. Roggisch de...@nospam.web.de wrote:
 tleeuwenb...@gmail.com wrote:

  Is there anyway to begin a thread and execute a finite number of lines
  of code, or a finite amount of time within it?

  For example, say I create three child threads and I want to guarantee
  equal timeshare between them, can I specify a quanta (say 400 LOC
  although I know that is pretty small) to execute in each one in turn?

 Not as such, no. You can play tricks with the trace-module, but these
 ultimately fail when the code in question runs inside C - which puts a
 stop to any python interpreter scheduling anyway, thus native threads
 are used which can't be controlled on that level.

 Diez

Mmmm good point. 1 line of Python != 1 line of C. That's probably not
very important for what I have in mind, but I hadn't really been
considering that angle.

Cheers,
-T
--
http://mail.python.org/mailman/listinfo/python-list


Re: Multithreading / multiprocess

2009-04-12 Thread TennesseeLeeuwenburg
On Apr 12, 2:59 am, a...@pythoncraft.com (Aahz) wrote:
 In article 
 57065c62-2024-47b5-a07e-1d60ff85b...@y10g2000prc.googlegroups.com,

 tleeuwenb...@gmail.com tleeuwenb...@gmail.com wrote:

 Is there anyway to begin a thread and execute a finite number of lines
 of code, or a finite amount of time within it?

 For example, say I create three child threads and I want to guarantee
 equal timeshare between them, can I specify a quanta (say 400 LOC
 although I know that is pretty small) to execute in each one in turn?

 You have extremely coarse-grained control with sys.setcheckinterval().
 However, there is no guarantee which thread will pick up control after
 each context switch, so one thread might get more than its share.

Thanks for your reply! Will add that to my reading list.

Cheers,
-T
--
http://mail.python.org/mailman/listinfo/python-list


Re: Multithreading / multiprocess

2009-04-11 Thread Diez B. Roggisch

tleeuwenb...@gmail.com wrote:

Is there anyway to begin a thread and execute a finite number of lines
of code, or a finite amount of time within it?

For example, say I create three child threads and I want to guarantee
equal timeshare between them, can I specify a quanta (say 400 LOC
although I know that is pretty small) to execute in each one in turn?


Not as such, no. You can play tricks with the trace-module, but these 
ultimately fail when the code in question runs inside C - which puts a 
stop to any python interpreter scheduling anyway, thus native threads 
are used which can't be controlled on that level.



Diez
--
http://mail.python.org/mailman/listinfo/python-list


Re: Multithreading / multiprocess

2009-04-11 Thread Aahz
In article 57065c62-2024-47b5-a07e-1d60ff85b...@y10g2000prc.googlegroups.com,
tleeuwenb...@gmail.com tleeuwenb...@gmail.com wrote:

Is there anyway to begin a thread and execute a finite number of lines
of code, or a finite amount of time within it?

For example, say I create three child threads and I want to guarantee
equal timeshare between them, can I specify a quanta (say 400 LOC
although I know that is pretty small) to execute in each one in turn?

You have extremely coarse-grained control with sys.setcheckinterval().
However, there is no guarantee which thread will pick up control after
each context switch, so one thread might get more than its share.
-- 
Aahz (a...@pythoncraft.com)   * http://www.pythoncraft.com/

Why is this newsgroup different from all other newsgroups?
--
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading in python ???

2008-07-10 Thread Kris Kennaway

Laszlo Nagy wrote:

Abhishek Asthana wrote:


Hi all ,

I  have large set of data computation and I want to break it into 
small batches and assign it to different threads .I am implementing it 
in python only. Kindly help what all libraries should I refer to 
implement the multithreading in python.


You should not do this. Python can handle multiple threads but they 
always use the same processor. (at least in CPython.) In order to take 
advantage of multiple processors, use different processes.


Only partly true.  Threads executing in the python interpreter are 
serialized and only run on a single CPU at a time.  Depending on what 
modules you use they may be able to operate independently on multiple 
CPUs.  The term to research is GIL (Global Interpreter Lock).  There 
are many webpages discussing it, and the alternative strategies you can use.


Kris
--
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading in python ???

2008-07-03 Thread Laszlo Nagy

Abhishek Asthana wrote:


Hi all ,

I  have large set of data computation and I want to break it into 
small batches and assign it to different threads .I am implementing it 
in python only. Kindly help what all libraries should I refer to 
implement the multithreading in python.


You should not do this. Python can handle multiple threads but they 
always use the same processor. (at least in CPython.) In order to take 
advantage of multiple processors, use different processes.


 L

--
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading in python ???

2008-07-03 Thread mimi.vx
On Jul 3, 12:40 pm, Laszlo Nagy [EMAIL PROTECTED] wrote:
 Abhishek Asthana wrote:

  Hi all ,

  I  have large set of data computation and I want to break it into
  small batches and assign it to different threads .I am implementing it
  in python only. Kindly help what all libraries should I refer to
  implement the multithreading in python.

 You should not do this. Python can handle multiple threads but they
 always use the same processor. (at least in CPython.) In order to take
 advantage of multiple processors, use different processes.

   L

or use the parallelpython module, very good for multiprocessor /
multimachine programming in Python
--
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading concept

2007-03-08 Thread Bryan Olson
sturlamolden wrote:
[...]
 If you want to utilize the computing power of multiple CPUs, you
 should use multiple processes instead of threads. On Python this is
 mandatory due to the GIL. In any other language it is highly
 recommended. The de-facto standard for parallel multiprocessing (MPI)
 uses multiple processes, even on SMPs.

That doesn't really work in Python. There have been projects to
allow Pythonic coordination of processes -- POSH had some good
ideas -- but none have reached fruition.

There's nothing close to a good de facto standard in
the area. Microsoft's Win32 threads can claim to get as close as
anything.


-- 
--Bryan


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading concept

2007-03-08 Thread Paul Boddie
On 8 Mar, 10:48, Bryan Olson [EMAIL PROTECTED] wrote:

 That doesn't really work in Python. There have been projects to
 allow Pythonic coordination of processes -- POSH had some good
 ideas -- but none have reached fruition.

What makes all of the following not Pythonic...?

http://wiki.python.org/moin/ParallelProcessing

Things like the CSP paradigm have sort of made their way into the
Python language itself, via enhancements to the yield keyword, which
has the dubious distinction of being a keyword which appears to return
a value. I'm sure one could define "Pythonic" as "you can write
code like you do now (but not like any of the ways encouraged by the
aforementioned solutions) and it just works over multiple processors/
cores", but that's a view which is somewhat detached from the
practicalities (and favoured practices) of concurrent programming,
especially given the few guarantees Python would be able to provide to
make such a thing work effectively.

Paul

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading concept

2007-03-08 Thread Paul Rubin
Paul Boddie [EMAIL PROTECTED] writes:
 What makes all of the following not Pythonic...?
 http://wiki.python.org/moin/ParallelProcessing

I'd say mainly that they don't allow sharing data between processes
except through expensive IPC mechanisms involving system calls.

 I'm sure one could define Pythonic as being you can write
 code like you do now (but not like any of the ways encouraged by the
 aforementioned solutions) and it just works over multiple processors/
 cores, but that's a view which is somewhat detached from the
 practicalities (and favoured practices) of concurrent programming,
 especially given the few guarantees Python would be able to provide to
 make such a thing work effectively.

Really, the existence of the GIL comes as an unpleasant surprise to
programmers used to multi-threaded programming in other languages
whose synchronization features outwardly look about the same as
Python's.  Somehow those other languages manage to use multiple CPU's
based on those features, without needing a GIL.  We are looking at a
Python implementation wart, not practicalities inherent in the
nature of concurrency.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading concept

2007-02-10 Thread Bjoern Schliessmann
Carl J. Van Arsdall wrote:

 Not necessarily, if he's on a full duplex ethernet connection,
 then there is some parallelity he can take advantage of.  He has
 upstream and downstream.

Partly agreed. There is one bus to the network device, and CPU
should be very much faster than the network device itself, so I
estimate there'll be no gain.

Regards,


Björn

-- 
BOFH excuse #353:

Second-system effect.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading concept

2007-02-09 Thread S.Mohideen
I am sorry if I sound foolish.
Suppose I split my Net application code using parallel python into several 
processes based upon the number of CPUs available. That means a single socket 
descriptor is distributed across all processes. Can parallelism be achieved 
with the processes doing send/recv on the single socket multiplexed across 
all the processes? I haven't tried it yet - would like to hear about any 
past experience related to this.

- Original Message - 
From: Carl J. Van Arsdall [EMAIL PROTECTED]
To: python-list@python.org
Sent: Thursday, February 08, 2007 3:44 PM
Subject: Re: multithreading concept


 Bjoern Schliessmann wrote:
 [snip]
 What makes you think that'll be faster?

 Remember:
 - If you have one CPU, there is no parallelity at all.
 - If you do have multiple CPUs but only one network device, there is
 no parallel networking.


 Not necessarily, if he's on a full duplex ethernet connection, then
 there is some parallelity he can take advantage of.  He has upstream and
 downstream.

 -c

 -- 

 Carl J. Van Arsdall
 [EMAIL PROTECTED]
 Build and Release
 MontaVista Software

 -- 
 http://mail.python.org/mailman/listinfo/python-list 

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading concept

2007-02-09 Thread Paul Rubin
S.Mohideen [EMAIL PROTECTED] writes:
 Suppose I split my Net application code using parallel python into
 several processes based upon the number of CPU available. That means a
 single socket descriptor is distributed across all processes. Is
 parallelity can be acheived using the processes send/recv on the
 single socket multiplexed across all the processes.. I haven't tried
 it yet - would like to have any past experience related to this.

In Linux, you can open the socket before forking and then use it in
the child processes; there is also a way to pass open sockets from one
process to another, but the Python lib currently does not support that
feature.  It's worth adding and there's an open RFE for it, but it
hasn't been important enough that anyone's bothered coding it so far.
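
A minimal sketch of the open-before-fork approach on Linux (hypothetical
host; real code must coordinate which process sends and which receives):

import os
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("example.com", 80))      # opened before forking

pid = os.fork()
if pid == 0:
    # child: inherits the same descriptor and sends
    s.sendall("GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    os._exit(0)
else:
    # parent: receives on the very same socket
    reply = s.recv(4096)
    os.waitpid(pid, 0)
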
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading concept

2007-02-09 Thread sturlamolden
On Feb 9, 4:00 pm, S.Mohideen [EMAIL PROTECTED]
wrote:

 I am sorry if I sound foolish.
 Suppose I split my Net application code using parallel python into several
 processes based upon the number of CPU available. That means a single socket
 descriptor is distributed across all processes. Is parallelity can be
 acheived using the processes send/recv on the single socket multiplexed
 across all the processes.. I haven't tried it yet - would like to have any
 past experience related to this.

Is CPU or network the speed limiting factor in your application? There
are two kinds of problems: You have a 'CPU-bound problem' if you need
to worry about 'flops'. You have an 'I/O bound' problem if you worry
about 'bits per second'.

If your application is I/O bound, adding more CPUs to the task will
not help. The network connection does not become any faster just
because two CPUs share the few computations that need to be performed.
Python releases the GIL around all i/o operations in the standard
library, such as reading from a socket or writing to socket. If this
is what you need to 'parallelize', you can just use threads and ignore
the GIL. Python's threads can handle concurrent I/O perfectly well.
Remember that Google and YouTube use Python, and the GIL is not a show
stopper for them.
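
For example, a handful of downloads overlap nicely with plain threads (a
sketch, in the Python 2 of the era):

import threading
import urllib2

urls = ["http://www.python.org/", "http://www.example.com/"]

def fetch(url):
    # the GIL is released while the socket read blocks,
    # so these downloads run concurrently
    data = urllib2.urlopen(url).read()
    print url, len(data)

threads = [threading.Thread(target=fetch, args=(u,)) for u in urls]
for t in threads:
    t.start()
for t in threads:
    t.join()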

The GIL locks the process to one CPU. You need to get around this if
the power of one CPU or CPU core limits the speed of the application.
This can be the case in e.g. digital image processing, certain
computer games, and scientific programming. I have yet to see a CPU-
bound 'Net application', though.

If you are running Windows: take a look at the CPU usage in the task
manager. Does it say that one of the CPUs is running at full speed for
longer periods of time? If not, there is noting to gained from using
multiple CPUs.







-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading concept

2007-02-08 Thread Carl J. Van Arsdall
Bjoern Schliessmann wrote:
 [snip]
 What makes you think that'll be faster?

 Remember:
 - If you have one CPU, there is no parallelity at all.
 - If you do have multiple CPUs but only one network device, there is
 no parallel networking.

   
Not necessarily, if he's on a full duplex ethernet connection, then 
there is some parallelity he can take advantage of.  He has upstream and 
downstream.

-c

-- 

Carl J. Van Arsdall
[EMAIL PROTECTED]
Build and Release
MontaVista Software

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading concept

2007-02-07 Thread sturlamolden
On Feb 7, 2:53 am, S.Mohideen [EMAIL PROTECTED]
wrote:

 Python is praised about - me too. But at one instance it fails. It fails to
 behave as a true multi-threaded application. That means utilizing all the
 CPUs parallely in the SMP efficiently stays as a dream for a Python
 Programmer.

This has been discussed to death before. Win32 threads and pthreads
(which is what Python normally uses, depending on the platform) are
designed to stay idle most of the time. They are therefore not a tool
for utilizing the power of multiple CPUs, but rather make certain kind
of programming tasks easier to program (i.e. non-blocking I/O,
responsive UIs). The GIL is not a problem in this context. If threads
stay idle most of the time, the GIL does not harm.

If you want to utilize the computing power of multiple CPUs, you
should use multiple processes instead of threads. On Python this is
mandatory due to the GIL. In any other language it is highly
recommended. The de-facto standard for parallel multiprocessing (MPI)
uses multiple processes, even on SMPs. Anyone with serious intentions
of using multiple processors for parallel computing should use
multiple processes and fast IPC - not multiple threads, shared memory
and synchronization objects - even if the language is plain C. With
multiple threads, a lot of time is wasted doing context switches and
bookkeeping for the thread synchronization. In addition, obscure and
often very difficult to identify bugs are introduced.

There are a Python binding for MPI (mpi4py) and a similar pure Python
project (Parallel Python) that will take care of all these details for
you.


 Discussion threads say its due to GIL - global interpreter lock. But nobody
 has mentioned any alternative to that apart from suggestions like "Code it
 in C" and POSH (http://poshmodule.sf.net). Is there any other way we can
 make Python programs really multithreaded in real sense.

As mentioned, use MPI or Parallel Python. MPI is by far the more
mature, but Parallel Python could be easier for a pythoneer.
Multithreading has a different use.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading concept

2007-02-07 Thread Paul Boddie
On 7 Feb, 02:53, S.Mohideen [EMAIL PROTECTED]
wrote:

 Python is praised about - me too. But at one instance it fails. It fails to
 behave as a true multi-threaded application. That means utilizing all the
 CPUs parallely in the SMP efficiently stays as a dream for a Python
 Programmer.

Take a look at the Python Wiki for information on parallel processing
with Python:

http://wiki.python.org/moin/ParallelProcessing

Pure CPython code may not be able to use more than one CPU merely
through the use of threads (Jython and IronPython are different,
though), but using all the CPUs or cores in an SMP system is not
exactly a mere dream, as many of the projects listed on the above page
demonstrate.

Paul

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading concept

2007-02-07 Thread John Nagle
sturlamolden wrote:
 On Feb 7, 2:53 am, S.Mohideen [EMAIL PROTECTED]
 wrote:
 This has been discussed to death before. Win32 threads and pthreads
 (which is what Python normally uses, depending on the platform) are
 designed to stay idle most of the time. They are therefore not a tool
 for utilizing the power of multiple CPUs, but rather make certain kind
 of programming tasks easier to program (i.e. non-blocking I/O,
 responsive UIs). 

Multithread compute-bound programs on multiple CPUs are
how you get heavy number-crunching work done on multiprocessors.
Of course, that's not something you use Python for, at least not
until it gets a real compiler.

It's also the direction games are going.  The XBox 360 forced
game developers to go that way, since it's a 3-CPU shared memory
multiprocessor.  That translates directly to multicore desktops
and laptops.

I went to a talk at Stanford last week by one of Intel's
 CPU architects, and he said we're going to have hundreds of
CPUs per chip reasonably soon.  Python needs to get ready.

John Nagle
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading concept

2007-02-07 Thread Steve Holden
John Nagle wrote:
 sturlamolden wrote:
 On Feb 7, 2:53 am, S.Mohideen [EMAIL PROTECTED]
 wrote:
 This has been discussed to death before. Win32 threads and pthreads
 (which is what Python normally uses, depending on the platform) are
 designed to stay idle most of the time. They are therefore not a tool
 for utilizing the power of multiple CPUs, but rather make certain kind
 of programming tasks easier to program (i.e. non-blocking I/O,
 responsive UIs). 
 
 Multithread compute-bound programs on multiple CPUs are
 how you get heavy number-crunching work done on multiprocessors.
 Of course, that's not something you use Python for, at least not
 until it gets a real compiler.
 
 It's also the direction games are going.  The XBox 360 forced
 game developers to go that way, since it's a 3-CPU shared memory
 multiprocessor.  That translates directly to multicore desktops
 and laptops.
 
 I went to a talk at Stanford last week by one of Intel's
 CPU architects, and he said we're going have hundreds of
 CPUs per chip reasonably soon.  Python needs to get ready.
 

Define "Python". Does it include you? What does it need to do to "get 
ready"? How do you plan to help?

regards
  Steve
-- 
Steve Holden   +44 150 684 7255  +1 800 494 3119
Holden Web LLC/Ltd  http://www.holdenweb.com
Skype: holdenweb http://del.icio.us/steve.holden
Blog of Note:  http://holdenweb.blogspot.com
See you at PyCon? http://us.pycon.org/TX2007
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading concept

2007-02-07 Thread sturlamolden
On Feb 7, 6:17 pm, John Nagle [EMAIL PROTECTED] wrote:

 Multithread compute-bound programs on multiple CPUs are
 how you get heavy number-crunching work done on multiprocessors.

In the scientific community, heavy CPU-bound tasks are either
parallelized using MPI and/or written in Fortran 90/95 and
parallelized using an expensive vectorizing compiler.

 Of course, that's not something you use Python for, at least not
 until it gets a real compiler.

That is also not correct:

1. Using Python does not change the complexity of the algorithm. Big-O
is still the same, and Big-O is still the main determinant of
performance.

2. I value my own time more than extra CPU cycles (and so does those
who pay my salary). If Python is too slow, it is less expensive to
compensate by using more CPUs than using a less productive language
like Java or C++.

3. Only isolated bottlenecks really gain from being statically
compiled. These are usually very small parts of the program. They can
be identified with a profiler (intuition usually do not work very well
here) and rewritten in Pyrex, Fortran 95, C or assembly.

4. There is NumPy and SciPy, which can make Python fast enough for
most CPU-bound tasks. http://www.scipy.org/PerformancePython

5. "Premature optimization is the root of all evil" in computer
science. (Donald Knuth)

6. Pyrex (the compiler you asked for) does actually exist.


C and Fortran compilers can produce efficient code because they know
the type of each variable. We do have a Python compiler that can do
the same thing. It is called 'Pyrex' and extends Python with static
types. Pyrex can therefore produce code that are just as efficient as
hand-tuned C (see the link above). One can take the bad-performing
Python code, add type declarations to the variables that Pyrex needs
to generate efficient code (but all variables need not be declared),
and leave the rest to the compiler. But this is only required for very
small portions of the code. Transforming time-critical Python code to
Pyrex is child's play. First make it work, then make it fast.

At the University of Oslo, the HPC centre has been running Python
courses for its clients. Python does not perform any worse than C or
Fortran, one just has to know (1) how to use it, (2) when to use it,
and (3) when not to use it.

99% of benchmarks showing bad performance with Python are due to
programmers not understanding which operations are expensive in
interpreted languages, and trying to use Python as if it were C++. The
typical example would be code that use a loop instead of using the
built-in function 'map' or a vectorized array expression with NumPy.
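
For example (a sketch):

import numpy

x = numpy.arange(1000000, dtype=float)

# slow: interpreted loop, one bytecode dispatch per element
y = [xi * 2.0 + 1.0 for xi in x]

# fast: one vectorized expression, the loop runs in C
y = x * 2.0 + 1.0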


 It's also the direction games are going.

I believe that is due to ignorance. Threads are implemented to be in
an idle blocking state 99% of the time.


 The XBox 360 forced
 game developers to go that way, since it's a 3-CPU shared memory
 multiprocessor.  That translates directly to multicore desktops
 and laptops.

MPI works on SMPs.

MPI does not use threads on SMPs because it performs worse than using
multiple processes.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading concept

2007-02-07 Thread Sergei Organov
sturlamolden [EMAIL PROTECTED] writes:
 On Feb 7, 6:17 pm, John Nagle [EMAIL PROTECTED] wrote:
[...]
 MPI does not use threads on SMPs because it performs worse than using
 multiple processes.

I fail to see how threads in general could perform worse than
processes. I do understand that processes are inherently more
safe/secure, but when it comes to speed I really can't imagine why it
could happen that threads perform worse (poor threads implementation and
programming practices aside). Care to give some references?

-- Sergei.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading concept

2007-02-07 Thread sturlamolden
On Feb 7, 8:03 pm, Sergei Organov [EMAIL PROTECTED] wrote:

 I fail to see how threads in general could perform worse than
 processes. I do understand that processes are inherently more
 safe/secure, but when it comes to speed I really can't imagine why it
 could happen that threads perform worse (poor threads implementation and
 programming practices aside). Care to give some references?

I believe Nick Maclaren explained that to you (and me) on January 10
and 11 this group. As far as I have understood the issue, it has to do
with poor threads implementations. Look that up on Google groups and
re-read the discussion (or ask Nick Maclaren as he is far more
competent than me).

http://groups.google.com/group/comp.lang.python/browse_frm/thread/332083cdc8bc44b

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading concept

2007-02-07 Thread Carl J. Van Arsdall
Paul Boddie wrote:
 [snip]

 Take a look at the Python Wiki for information on parallel processing
 with Python:

 http://wiki.python.org/moin/ParallelProcessing
   
What a great resource!  That one is book marked for sure.  I was 
wondering if anyone here had any opinions on some of the technologies 
listed in there.  I've used a couple, but there are some that I've never 
seen before.  In particular, has anyone used rthread before?  It looks 
like something I may use (now orwhen it matures), are there opinions on it?

Under the cluster computing section, has anyone tried any of the other 
technologies?  I've only used Pyro and i love it, but I'd like opinions 
and experiences with other technologies if anyone has anything to say.

-c


-- 

Carl J. Van Arsdall
[EMAIL PROTECTED]
Build and Release
MontaVista Software

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multithreading concept

2007-02-07 Thread S.Mohideen
I would like to add my problem in this thread.
I have a network application in Python which sends and recv using a single 
socket.
There is a dictionary on which I store/read data values. I want to separate 
the send and recv functionality on two different processes so that the 
parallel execution becomes fast. Is there any way to do so, so that the 
Dict's consistency is not lost (able to read & write) and also the performance 
improves. I am looking upon the MPI4Py module to see if it does the job for 
me. Any ideas would be appreciated.

- Original Message - 
From: Sergei Organov [EMAIL PROTECTED]
To: python-list@python.org
Sent: Wednesday, February 07, 2007 1:03 PM
Subject: Re: multithreading concept


 sturlamolden [EMAIL PROTECTED] writes:
 On Feb 7, 6:17 pm, John Nagle [EMAIL PROTECTED] wrote:
 [...]
 MPI does not use threads on SMPs because it performs worse than using
 multiple processes.

 I fail to see how threads in general could perform worse than
 processes. I do understand that processes are inherently more
 safe/secure, but when it comes to speed I really can't imagine why it
 could happen that threads perform worse (poor threads implementation and
 programming practices aside). Care to give some references?

 -- Sergei.



Re: multithreading concept

2007-02-07 Thread Carl J. Van Arsdall
S.Mohideen wrote:
 I would like to add my problem to this thread.
 I have a network application in Python which sends and receives using a 
 single socket.
 There is a dictionary in which I store/read data values. I want to 
 separate the send and recv functionality onto two different processes 
 so that the parallel execution becomes fast. Is there any way to do so, 
 so that the dict's consistency is not lost (it stays able to read and 
 write) and the performance also improves? I am looking at the MPI4Py 
 module to see if it does the job for me. Any ideas would be appreciated.
   
Well, from your description so far I think that MPI is going to be a bit 
of overkill.  I think you should consider threads or processes with 
shared memory/objects (POSH).  Then take a look at a producer/consumer 
program to see how it works; that should get you to where you need to go.
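
A minimal sketch of that producer/consumer shape (untested; 'sock' is
whatever connection object you already have, and the dict is only ever
touched by the consumer thread, so it needs no lock):

import threading, Queue   # the Queue module is named 'queue' in Python 3

work = Queue.Queue()
table = {}

def producer(sock):
    while True:
        data = sock.recv(4096)
        if not data:
            work.put(None)          # sentinel: tell the consumer to stop
            break
        work.put(data)

def consumer():
    while True:
        item = work.get()           # blocks until something arrives
        if item is None:
            break
        table[len(table)] = item    # stand-in for your real processing

# threading.Thread(target=producer, args=(sock,)).start()
# threading.Thread(target=consumer).start()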

HTH

-carl

-- 

Carl J. Van Arsdall
[EMAIL PROTECTED]
Build and Release
MontaVista Software



Re: multithreading concept

2007-02-07 Thread Bjoern Schliessmann
S.Mohideen wrote:

 There is a dictionary on which I store/read data values. I want to
 seperate the send and recv functionality on two different
 processes so that the parallel execution becomes fast.

What makes you think that'll be faster?

Remember:
- If you have one CPU, there is no parallelism at all.
- If you do have multiple CPUs but only one network device, there is
no parallel networking.

Regards,


Björn

-- 
BOFH excuse #188:

..disk or the processor is on fire.



Re: multithreading concept

2007-02-06 Thread Paddy
On Feb 7, 1:53 am, S.Mohideen [EMAIL PROTECTED]
wrote:
 Hi Folks,

 Python is praised a lot - by me too. But in one respect it fails: it
 fails to behave as a true multi-threaded application. That means
 utilizing all the CPUs of an SMP in parallel efficiently stays a dream
 for a Python programmer.

 Discussion threads say it's due to the GIL - the global interpreter
 lock. But nobody has mentioned any alternative apart from suggestions
 like 'code it in C' and POSH (http://poshmodule.sf.net). Is there any
 other way we can make Python programs really multithreaded in the real
 sense?

 Moin

Actually there are a *lot* more suggestions and discussions to be found.
I myself lean towards the 'parallel processing is difficult' camp. If
you think it's easy then you're either lucky or theorising. Whilst it
would be nice to have threads == native threads for completeness' sake,
I'm quite happy to run concurrent communicating processes, as on my
machines the OS helps me to see what's happening to the processes, and
stops processes trampling over shared data.
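
(For what it's worth, that communicating-processes style later landed in
the stdlib as the multiprocessing module; a minimal sketch, with the
squaring as a stand-in for real work:)

from multiprocessing import Process, Queue

def worker(inbox, outbox):
    for item in iter(inbox.get, None):   # None is the stop sentinel
        outbox.put(item * item)

if __name__ == '__main__':
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    for i in range(10):
        inbox.put(i)
    inbox.put(None)
    print([outbox.get() for _ in range(10)])
    p.join()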

-Paddy.





Re: multithreading windows and embedding python

2006-07-10 Thread freesteel

freesteel wrote:
...
   pThread[ih] = AfxBeginThread(MyThread, mainThreadState,
 THREAD_PRIORITY_NORMAL, CREATE_SUSPENDED);

...

Here the call to AfxBeginThread is wrong, there is one argument
missing, it should be:

pThread[ih] = AfxBeginThread(MyThread, mainThreadState,
THREAD_PRIORITY_NORMAL, 0, CREATE_SUSPENDED);

Because there are so many default arguments of similar types, the
compiler did not notice that I passed 'CREATE_SUSPENDED' as a stack
size, and used the default 'creation' state of 'start right away' for
the thread.

Don't you love default args and types like 'void*' ?

Martin



Re: multithreading and shared dictionary

2006-07-08 Thread placid

Alex Martelli wrote:
 Istvan Albert [EMAIL PROTECTED] wrote:

  Stéphane Ninin wrote:
 
   Is a lock required in such a case ?
 
  I believe that assignment is atomic and would not need a lock.

 Wrong, alas: each assignment *could* cause the dictionary's internal
 structures to be reorganized (rehashed) and impact another assignment
 (or even 'get'-access).


Oh yeah, you are correct; I forgot about possible rehashing after an
assignment operation, so that means you do need to use a lock to
restrict access to the dictionary to one thread at a time. Again I
recommend using a Queue to do this. If you need help with usage of the
Queue object, first see
http://www.google.com/notebook/public/14017391689116447001/BDUsxIgoQkcSO3bQh
or contact me
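
The shape I mean is a single owner thread that applies all writes, so
the dict itself is never shared (untested sketch):

import threading, Queue   # 'queue' in Python 3

shared = {}
writes = Queue.Queue()

def dict_owner():
    # The only thread that ever mutates 'shared', so no lock is needed.
    while True:
        key, value = writes.get()
        shared[key] = value

t = threading.Thread(target=dict_owner)
t.setDaemon(True)   # don't keep the process alive just for this thread
t.start()

# Any thread can now request a write without touching the dict:
writes.put(('a', 1))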

-Cheers



Re: multithreading and shared dictionary

2006-07-08 Thread K.S.Sreeram
Alex Martelli wrote:
 Wrong, alas: each assignment *could* cause the dictionary's internal
 structures to be reorganized (rehashed) and impact another assignment
 (or even 'get'-access).

but won't the GIL be locked when the rehash occurs?

Regards
Sreeram




Re: multithreading and shared dictionary

2006-07-08 Thread Marc 'BlackJack' Rintsch
In [EMAIL PROTECTED], K.S.Sreeram
wrote:

 Alex Martelli wrote:
 Wrong, alas: each assignment *could* cause the dictionary's internal
 structures to be reorganized (rehashed) and impact another assignment
 (or even 'get'-access).
 
 but wont the GIL be locked when the rehash occurs?

If there is a GIL then maybe yes.  But the GIL is an implementation
detail.  It's not in Jython nor IronPython and maybe not forever in
CPython.  Don't know about PyPy.

Ciao,
Marc 'BlackJack' Rintsch


Re: multithreading and shared dictionary

2006-07-08 Thread K.S.Sreeram
Marc 'BlackJack' Rintsch wrote:
 Wrong, alas: each assignment *could* cause the dictionary's internal
 structures to be reorganized (rehashed) and impact another assignment
 (or even 'get'-access).
 but wont the GIL be locked when the rehash occurs?
 
 If there is a GIL then maybe yes.  But the GIL is an implementation
 detail.  It's not in Jython nor IronPython and maybe not forever in
 CPython.  Don't know about PyPy.

Just wondering: is this simply a case of defensive programming, or is it
an error? (Say we're targeting only CPython.)
For instance, what happens when there's a dictionary key with a custom
__hash__ or __eq__ method?

It's probably wise to simply use a lock and relieve ourselves of the
burden of thinking about these cases. But it is still worth knowing...

Regards
Sreeram




Re: multithreading and shared dictionary

2006-07-08 Thread K.S.Sreeram
Alex Martelli wrote:
 Wrong, alas: each assignment *could* cause the dictionary's internal
 structures to be reorganized (rehashed) and impact another assignment
 (or even 'get'-access).

(been thinking about this further...)

Dictionary get/set operations *must* be atomic, because Python makes
extensive internal use of dicts.

Consider two threads A and B, which are independent except for the fact
that they reside in the same module.

def thread_A() :
global foo
foo = 1

def thread_B() :
global bar
bar = 2

These threads create entries in the same module's dict, and they *might*
execute at the same time. Requiring a lock in this case is very
non-intuitive, and my conclusion is that dict get/set operations are
indeed atomic (thanks to the GIL).

Regards
Sreeram




Re: multithreading and shared dictionary

2006-07-08 Thread Istvan Albert
Marc 'BlackJack' Rintsch wrote:

 It's not in Jython nor IronPython and maybe not forever in
 CPython.

Whether or not a feature is present in Jython or IronPython does not
seem relevant; after all, these languages emulate Python, so one could
argue that it only means the emulation is incomplete. The same goes for
changes far out in the future that may or may not materialize.

i.



Re: multithreading and shared dictionary

2006-07-08 Thread Alex Martelli
K.S.Sreeram [EMAIL PROTECTED] wrote:
   ...
 Consider two threads A and B, which are independent except for the fact
 that they reside in the same module.
 
 def thread_A() :
 global foo
 foo = 1
 
 def thread_B() :
 global bar
 bar = 2
 
 These threads create entries in the same module's dict, and they *might*
 execute at the same time. Requiring a lock in this case is very
 non-intuitive, and my conclusion is that dict get/set operations are
 indeed atomic (thanks to the GIL).

Well then, feel free to code under such assumptions (as long as you're
not working on any project in which I have any say:-) -- depending on
subtle (and entirely version-dependent) considerations connected to
string interning, dict size, etc, you _might_ never run into failing
cases (as long as, say, every key in play is a short internable string
[never an instance of a _subtype_ of str of course], every value an int
small enough to be kept immortally in the global small-ints cache, etc,
etc)... why, dicts that forever remain below N entries for sufficiently
small (and version-dependent) N may in fact never be resized at all.

Most of us prefer to write code that will keep working a bit more
robustly (e.g. when the Python interpreter is upgraded from 2.5 to 2.6,
which might change some of these internal implementation details), and
relying on subtle reasoning about what might be very non-intuitive is
definitely counterproductive; alas, testing is not a great way to
discover race conditions, deadlocks, etc, so threading-related errors
must be avoided ``beforehand'', by taking a very conservative stance
towards what operations might or might not happen to be
atomic/threadsafe unless specifically guaranteed by the Language
Reference (or the specific bits of the Library Reference).
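
In that conservative spirit, a wrapper that takes a real lock on every
access costs little and assumes nothing about atomicity (untested
sketch):

import threading

class LockedDict(object):
    # Deliberately conservative: every access takes the lock, so nothing
    # depends on which dict operations happen to be atomic today.
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def __getitem__(self, key):
        self._lock.acquire()
        try:
            return self._data[key]
        finally:
            self._lock.release()

    def __setitem__(self, key, value):
        self._lock.acquire()
        try:
            self._data[key] = value
        finally:
            self._lock.release()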


Alex


Re: multithreading and shared dictionary

2006-07-08 Thread Alex Martelli
Istvan Albert [EMAIL PROTECTED] wrote:

 Marc 'BlackJack' Rintsch wrote:
 
  It's not in Jython nor IronPython and maybe not forever in
  CPython.
 
 Whether or not a feature is present in Jython or IronPython does not
 seem relevant, after all these languages emulate Python, one could

Not at all, but rather: these _implementations_ are implementations of
the language Python; no emulation is involved at all.

 argue that it only means that this emulation is incomplete.  Same for
 changes way out in the future that may or may not materialize. 

What Python's semantics are is supposedly defined by the (normative)
Language Reference.  One could argue anything one wishes, of course,
but the existence of freedom of speech does not necessarily make such
arguments sensible or at all useful.


Alex


Re: multithreading and shared dictionary

2006-07-08 Thread K.S.Sreeram
Alex Martelli wrote:
 Well then, feel free to code under such assumptions (as long as you're
 not working on any project in which I have any say:-)

Hey, I would *never* write code which depends on such intricate
implementation details! Nonetheless, it's good to *know* what's going on
inside. As they say, knowledge is power!

Regards
Sreeram




Re: multithreading and shared dictionary

2006-07-08 Thread Alex Martelli
K.S.Sreeram [EMAIL PROTECTED] wrote:
   ...
 Alex Martelli wrote:
  Well then, feel free to code under such assumptions (as long as you're
  not working on any project in which I have any say:-)
 
 Hey, I would *never* write code which depends on such intricate
 implementation details! Nonetheless, its good to *know* whats going on
 inside. As they say.. Knowledge is Power!

Since all abstractions leak (Spolsky), it's indeed worthwhile knowing
what goes on below the abstraction (to be wary in advance about such
possible leaks) -- but studying the sources (and the rich notes that
accompany them) is my favorite approach towards such knowledge!-)


Alex


Re: multithreading and shared dictionary

2006-07-08 Thread Aahz
In article [EMAIL PROTECTED],
Istvan Albert [EMAIL PROTECTED] wrote:
Marc 'BlackJack' Rintsch wrote:

 It's not in Jython nor IronPython and maybe not forever in
 CPython.

Whether or not a feature is present in Jython or IronPython does not
seem relevant, after all these languages emulate Python, one could
argue that it only means that this emulation is incomplete.  Same for
changes way out in the future that may or may not materialize. 

The GIL is *NOT* part of the Python language spec; it is considered a
CPython implementation detail.  Other implementations are free to use
other mechanisms -- and they do.
-- 
Aahz ([EMAIL PROTECTED])   * http://www.pythoncraft.com/

I saw `cout' being shifted Hello world times to the left and stopped
right there.  --Steve Gonedes


Re: multithreading and shared dictionary

2006-07-07 Thread placid

Stéphane Ninin wrote:
 Hello,

 Probably a stupid question, but I am not a multithreading expert...

 I want to share a dictionary between several threads.
 Actually, I will wrap the dictionary in a class
 and want to protect the sensitive accesses with locks.

 The problem is I am not sure which concurrent access to the dictionary
 could cause a problem.

 I assume that two writes on the same location would be,
 but what if one thread does

 mydict['a'] = something

  and another thread:

 mydict['b'] = something else

 Is a lock required in such a case ?

I don't think you need to use a lock for these cases, because
mydict['a'] and mydict['b'] refer to different memory locations (I may
be wrong because I don't know the Python implementation).

Just a recommendation: you could also use a Queue object to control
write access to the dictionary.

-Cheers



Re: multithreading and shared dictionary

2006-07-07 Thread Alex Martelli
Istvan Albert [EMAIL PROTECTED] wrote:

 Stéphane Ninin wrote:
 
  Is a lock required in such a case ?
 
 I believe that assignment is atomic and would not need a lock.

Wrong, alas: each assignment *could* cause the dictionary's internal
structures to be reorganized (rehashed) and impact another assignment
(or even 'get'-access).


Alex


Re: Multithreading and Queue

2006-04-25 Thread Tim Peters
[Jeffrey Barish]
 Several methods in Queue.Queue have warnings in their doc strings that they
 are not reliable (e.g., qsize).  I note that the code in all these methods
 is bracketed with lock acquire/release.  These locks are intended to
 protect the enclosed code from collisions with other threads.  I am
 wondering whether I understand correctly that the reason these methods are
 still not reliable is that from the point where a thread calls qsize (for
 example) to the point in Queue where a thread acquires a lock there is a
 bunch of code, none of which is protected by a lock, (and moreover there is
 another bunch of code between the point where a thread releases a lock and
 then actually returns to the calling program) and so despite the locks in
 Queue it is still possible for values to change before a thread acts on
 them.

Pretty much, yes.  qsize() knows perfectly well what the exact size of
the queue is at the time it computes it, but by the time qsize's
_caller_ gets the result, any number of other threads may have run for
any amount of time, so the actual size of the queue at the instant the
caller uses the result may be anything whatsoever.  A specific
application may be able to get stronger guarantees by disciplining its
use of threads in exploitable ways, but nothing stronger can be said
in the _general_ case.
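
The practical upshot: don't branch on qsize(); attempt the operation and
handle the failure instead (sketch):

import Queue   # 'queue' in Python 3

q = Queue.Queue()

# Racy: the queue can change between the size test and the get().
# if q.qsize() > 0:
#     item = q.get()

# Safe: the attempt and the emptiness check happen atomically.
try:
    item = q.get_nowait()
except Queue.Empty:
    item = None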


Re: Multithreading and Queue

2006-04-25 Thread robert
Jeffrey Barish wrote:
 Several methods in Queue.Queue have warnings in their doc strings that they
 are not reliable (e.g., qsize).  I note that the code in all these methods
 is bracketed with lock acquire/release.  These locks are intended to
 protect the enclosed code from collisions with other threads.  I am
 wondering whether I understand correctly that the reason these methods are
 still not reliable is that from the point where a thread calls qsize (for
 example) to the point in Queue where a thread acquires a lock there is a
 bunch of code, none of which is protected by a lock, (and moreover there is
 another bunch of code between the point where a thread releases a lock and
 then actually returns to the calling program) and so despite the locks in
 Queue it is still possible for values to change before a thread acts on
 them.

Those warnings are quite meaningless: if you get a certain queue size, 
it might be changed the next moment/operation anyway.

A simple len() amounts to the same thing:

 def qsize(self):
     """Return the approximate size of the queue (not reliable!)."""
     self.mutex.acquire()
     n = self._qsize()
     self.mutex.release()
     return n
 ...
 def _qsize(self):
     return len(self.queue)


All that time-consuming locking in Queue is quite unnecessary. As 
assumed in many other locations in the std lib, list.append() / .pop() / 
len() are atomic. Thus catching IndexError on a .pop() race, in a single 
location, is all the care that is needed.

(The only additional guarantee from that lock (at .append() time) is 
that the queue size never goes one over the exact maximum - but that's 
pedantic in a threaded multi-producer mess.)
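
I.e. the whole queue can be as little as this (CPython-only, leaning on
the GIL exactly as described above - the sort of assumption Alex warns
about elsewhere in this thread):

items = []

def put(obj):
    items.append(obj)        # a single atomic operation under the GIL

def get():
    try:
        return items.pop()   # ditto; IndexError means we lost a race
    except IndexError:
        return None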

The whole CallQueue in 
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/491281 for 
example doesn't use a single lock - though it's an inter-thread 
communication hot spot.

-robert


Re: Multithreading tkinter question

2004-12-17 Thread Eric Brunel
Oleg Paraschenko wrote:
[snip]
In my case Hello works and Quit doesn't (GUI stays frozen).
Linux, Python 2.3.3, pygtk-0.6.9.
That's not a multithreading issue, but just the way the quit method works. 
Try:
-
import time
from Tkinter import *
root = Tk()
b = Button(root, text='Quit')
b.configure(command=root.quit)
b.pack()
root.mainloop()
time.sleep(2)
-
When you click the Quit button, the GUI stays there until the time.sleep ends. 
root.quit just goes out of the mainloop; it doesn't destroy the widgets. To do 
that, you have to add an explicit root.destroy() after root.mainloop()
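I.e. (the same snippet with that fix applied):
-
import time
from Tkinter import *
root = Tk()
b = Button(root, text='Quit')
b.configure(command=root.quit)
b.pack()
root.mainloop()
root.destroy()   # the widgets actually go away here
time.sleep(2)
-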
--
- Eric Brunel eric (underscore) brunel (at) despammed (dot) com -
PragmaDev : Real Time Software Development Tools - http://www.pragmadev.com


Re: Multithreading tkinter question

2004-12-16 Thread John Pote

Mark English [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
Is there a safe way to run tkinter in a multithreaded app where the
mainloop runs in a background thread ?


Mark,

I tried your code snippet with Python 2.3.4. Worked fine. Only problem was 
that the program fell off the end and terminated before the second thread 
could open the Tkinter window. So I added these lines at the end to make the 
main thread wait:-

from msvcrt import kbhit, getch
print "\n\nPress key to end"
while not kbhit(): pass
getch()

Both your Hello and Quit buttons worked.

However, I have found that tkinter crashes if any components (labels, 
text boxes, etc.) are accessed directly from another thread. Below is a 
posting I did some time ago with my solution to the problem. I'm still 
interested to know if this is a good/best way to solve it.

It is not optimal in that 'otherThread' runs continuously even when the 
label is not being updated. What a waste of CPU cycles! This shows up in 
other windows apps slowing right down. What is needed is a comms method 
between threads that causes a thread to block while it's waiting for 
data rather than my continuous polling approach. Would a Queue help here?

John Pote

Martin Franklin [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
 On Tue, 09 Nov 2004 17:41:56 GMT, John Pote [EMAIL PROTECTED]
 wrote:

 Running my programme in Python 2.3.4 I received the following msg in the
 consol :-
 (Pent III running W2K prof)

 
 Exception in Tkinter callback
 Traceback (most recent call last):
   File "c:\apps\python\234\lib\lib-tk\Tkinter.py", line 1345, in __call__
 return self.func(*args)
   File "c:\apps\python\234\lib\lib-tk\Tkinter.py", line 459, in callit
 self.deletecommand(tmp[0])
 AttributeError: 'str' object has no attribute 'deletecommand'
 UpdateStringProc should not be invoked for type option

 abnormal program termination
 
 There was no other traceback information.

 Could this be related to lines of the ilk:-
   self.infoSpd.config(text="%d.%01d" % spd)
 where infoSpd is a Tkinter Label object placed using the grid manager.

 Thousands of these updates were performed so the labels displayed
 progress
 through a memory dump of a system accessed through a serial port.

 I had trouble before with Python versions 2.2.1 and 2.2.3 where
 commenting
 out these Label updates stopped the system crashing and it was happy to
 run
 for hours performing tests on the external hardware. (an embedded data
 logger I'm developing)

 Anyone any thoughts?

 John


 Only one (thought that is)  Are you updating thses Label widgets from
 other
 threads? and could you possibly post an example?

 Martin


A  --  Experience had already taught me that lesson about tkinter. On
checking my code guess what I found I'd done - called the widget.config
method from the other thread. So I put in a list to queue the label updates
from the other thread to the tkinter thread and it's now been running for
several hours without problem.

Thanks for the reminder.

BTW the program structure I've been using is:-

def otherThread():
    global labelQ
    while True:
        if updatelabel:                 # flag set elsewhere in the app
            labelQ = "new label text"

def guiLoop():
    global labelQ
    if labelQ:
        myLabel.config(text=labelQ)
        labelQ = None
    # re-register this fn to run again
    rootWin.after(10, guiLoop)  # strangely .after_idle(guiLoop) is slower!
.
.
rootWin = Tk(className="tester")

# rest of GUI set up, then:-

thread.start_new(otherThread, ())
rootWin.after(50, guiLoop)
rootWin.mainloop()

It works, but is it the best way to do this sort of thing? The point is
that I need moderately fast serial comms, which I do in 'otherThread',
and found the 'after' and 'after_idle' callbacks were too slow. The
timing I did on py2.2.1 indicated that 'after_idle' could not do better
than ~70ms and 'after(10, ...)' was faster, 30-40 ms, but still too slow
for my app.
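
A Queue version of the same structure would at least let 'otherThread'
block instead of spin; a sketch (read_serial_port is a hypothetical
stand-in for the real comms code, myLabel and rootWin as above):

import Queue   # 'queue' in Python 3

labelQ = Queue.Queue()

def otherThread():
    while True:
        data = read_serial_port()   # hypothetical blocking read
        labelQ.put(data)            # thread sleeps in the read, no busy loop

def guiLoop():
    try:
        while True:                 # drain whatever has arrived
            myLabel.config(text=labelQ.get_nowait())
    except Queue.Empty:
        pass
    rootWin.after(10, guiLoop)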

Any more thoughts appreciated.

John





Re: Multithreading tkinter question

2004-12-16 Thread Mark English
 Date: Thu, 16 Dec 2004 11:59:53 GMT
 From: John Pote [EMAIL PROTECTED]
 
 Mark English [EMAIL PROTECTED] wrote in message 
 news:[EMAIL PROTECTED]
 Is there a safe way to run tkinter in a multithreaded app 
 where the mainloop runs in a background thread ?
 
 
 I tried your code snippet with Python 2.3.4. Worked fine. Only
 problem was that the program fell off the end and terminated before
 the second thread could open the Tkinter window.
John,
Thanks for trying that out. I hadn't thought to try another version of
python. I should have said that the example code snippet was for entry
at the command prompt rather than putting in a module or script.

I've tried with activestate Python 2.3.2 and it worked fine.

When I submitted the code I was using my own build of Python 2.4.
On another machine I installed Python2.4 and pywin32-203.win32-py2.4 to
ensure I hadn't messed my build up.
Running a normal Tkinter test works fine. Running the test code I
submitted at the python prompt (or rather putting the function in a
module and calling it from the python prompt as indicated in the test
code) caused the Tkinter window to come up on the taskbar, but it was
invisible. I can only minimize it and attempt to close it - closing
reports that I will have to kill the program. In other words, it appears
to be broken.

Has anyone else seen any similar problems with Python 2.4 or,
conversely, does it all work ok ? Having now seen the code work with
activestate Python 2.3.2 there's this tantalising prospect that it all
might just work after all.

Cheers,
mE




Re: Multithreading tkinter question

2004-12-16 Thread Oleg Paraschenko
Hello John,

 Mark,

 I tried your code snippet with Python 2.3.4. Worked fine. Only problem
 was that the program fell off the end and terminated before the second
 thread could open the Tkinter window. So I added these lines at the end
 to make the main thread wait:-

 from msvcrt import kbhit, getch
 print "\n\nPress key to end"
 while not kbhit(): pass
 getch()

And I added

print "\n\nPress key to end"
l = sys.stdin.readline()

 Both your Hello and Quit buttons worked.

In my case Hello works and Quit doesn't (GUI stays frozen).
Linux, Python 2.3.3, pygtk-0.6.9.

 ...

 It is not optimal in that 'otherThread' runs continuously even when the
 label is not being updated. What a waste of cpu cycles! This shows up
 in that other windows apps slow right down. What is needed is a comms
 method between threads that causes a thread to block while it's waiting
 for data rather than my continuous polling approach. Would a Queue help
 here?


Yes, it should help. A while ago I tried to write a tkinter application,
and my test code is available:

A complete Python Tkinter sample application for a long operation
http://uucode.com/texts/pylongopgui/pyguiapp.html
Maybe you'll find it interesting.

--
Oleg
