Re: How to limit CPU usage in Python

2012-09-27 Thread Jerry Hill
On Thu, Sep 27, 2012 at 12:58 PM, Prasad, Ramit
 wrote:
> On *nix you should just set the appropriate nice-ness and then
> let the OS handle CPU scheduling. Not sure what you would do
> for Windows--I assume OS X is the same as *nix for this context.

On windows, you can also set the priority of a process, though it's a
little different from the *nix niceness level.  See
http://code.activestate.com/recipes/496767/ for a recipe using
pywin32.  I believe the psutil module handles this too, but I don't
think it manages to abstract away the platform differences.
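
A hedged sketch of that situation: psutil's Process.nice() accepts a plain niceness value on POSIX but a priority-class constant on Windows, so the caller still branches per platform. The helper name and the os.nice() fallback below are illustrative, not from this thread.

```python
import os
import sys

def set_low_priority(niceness=10):
    """Drop this process's scheduling priority (illustrative helper)."""
    try:
        import psutil
    except ImportError:
        psutil = None
    if psutil is not None:
        proc = psutil.Process()  # current process by default
        if sys.platform == "win32":
            # Windows uses priority classes rather than niceness values
            proc.nice(psutil.BELOW_NORMAL_PRIORITY_CLASS)
        else:
            proc.nice(niceness)  # absolute niceness on POSIX
    elif hasattr(os, "nice"):
        os.nice(niceness)  # stdlib fallback; raising niceness needs no privileges

set_low_priority()
```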

-- 
Jerry
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: How to limit CPU usage in Python

2012-09-27 Thread Prasad, Ramit
Paul Rubin wrote:
> Rolando Cañer Roblejo  writes:
> > Is it possible for me to put a limit in the amount of processor usage
> > (% CPU) that my current python script is using? Is there any module
> > useful for this task?
> 
> One way is check your cpu usage once in a while, compare with elapsed
> time, and if your % usage is above what you want, sleep for a suitable
> interval before proceeding.
> 
> Tim Roberts: reasons to want to do this might involve a shared host
> where excessive cpu usage affects other users; or a computer with
> limited power consumption, where prolonged high cpu activity causes
> thermal or other problems.

The problem is that checking the CPU usage is fairly misleading 
if you are worried about contention. If your process takes up 
100% of CPU and nothing else needs the resource, does it matter? 
I would not want to sleep *unless* something else needs the 
resource. Of course, there might be a good/easy way of checking
usage + contention, but I am unaware of any off the top of my
head.
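
The check-and-sleep idea quoted above can be sketched as follows; the 50% target and the toy CPU-bound task are illustrative, not from the thread.

```python
import time

def run_throttled(tasks, max_cpu_fraction=0.5):
    """Run tasks, sleeping whenever this process's CPU time exceeds
    the target fraction of elapsed wall-clock time."""
    wall_start = time.monotonic()
    cpu_start = time.process_time()
    for task in tasks:
        task()
        cpu = time.process_time() - cpu_start
        wall = time.monotonic() - wall_start
        if wall > 0 and cpu / wall > max_cpu_fraction:
            # Sleep just long enough to bring the average back on target.
            time.sleep(cpu / max_cpu_fraction - wall)

def busy_for(seconds=0.05):
    """A toy CPU-bound task."""
    end = time.process_time() + seconds
    while time.process_time() < end:
        pass

run_throttled([busy_for] * 4)
```

Note this only limits the long-run average; as Ramit points out, it sleeps even when nothing else wants the CPU.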

On *nix you should just set the appropriate nice-ness and then 
let the OS handle CPU scheduling. Not sure what you would do 
for Windows--I assume OS X is the same as *nix for this context.





Re: How to limit CPU usage in Python

2012-09-25 Thread 88888 Dihedral
On Tuesday, September 25, 2012 at 11:13:54 PM UTC+8, DPalao wrote:
> On Thursday, September 20, 2012 11:12:44, Rolando Cañer Roblejo wrote:
> > Hi all,
> > 
> > Is it possible for me to put a limit in the amount of processor usage (%
> > CPU) that my current python script is using? Is there any module useful
> > for this task? I saw Resource module but I think it is not the module I
> > am looking for. Some people recommend to use nice and cpulimit unix
> > tools, but those are external to python and I prefer a python solution.
> > I am working with Linux (Ubuntu 10.04).
> > 
> > Best regards.
> 
> Hello,
> 
> Sometimes a stupid solution like the following does the trick:
> 
> > import time
> > for t in tasks:
> >     do_something(t)
> >     time.sleep(some_seconds)
> 
> where "some_seconds" is a number related to the typical time-scale of the
> tasks you are doing.
> 
> Hope it helps,
> 
> Regards
> 
> -- 
> Miller's Slogan:
>   Lose a few, lose a few.

I think I'd prefer to use a generator of my object in Python to
replace the sleep from the unix world. The reason is that I am not paid
from selling or buying work-stations in some business unit directly and
immediately.




Re: How to limit CPU usage in Python

2012-09-25 Thread DPalao
On Thursday, September 20, 2012 11:12:44, Rolando Cañer Roblejo wrote:
> Hi all,
> 
> Is it possible for me to put a limit in the amount of processor usage (%
> CPU) that my current python script is using? Is there any module useful
> for this task? I saw Resource module but I think it is not the module I
> am looking for. Some people recommend to use nice and cpulimit unix
> tools, but those are external to python and I prefer a python solution.
> I am working with Linux (Ubuntu 10.04).
> 
> Best regards.

Hello,
Sometimes a stupid solution like the following does the trick:

> import time
> for t in tasks:
>     do_something(t)
>     time.sleep(some_seconds)

where "some_seconds" is a number related to the typical time-scale of the 
tasks you are doing.
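
One way to tie "some_seconds" to the tasks' time-scale automatically (an illustrative twist, not something posted here) is to sleep for a multiple of the time each task just took, which caps the average duty cycle near 1 / (1 + factor):

```python
import time

def run_with_sleep(tasks, factor=1.0):
    for task in tasks:
        start = time.monotonic()
        task()
        elapsed = time.monotonic() - start
        # factor=1.0 sleeps as long as the task ran: ~50% duty cycle
        time.sleep(elapsed * factor)
```

A long task then earns a long pause and a short task a short one, so no hand-tuned constant is needed.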

Hope it helps,

Regards


-- 
Miller's Slogan:
Lose a few, lose a few.



Re: How to limit CPU usage in Python

2012-09-24 Thread Tim Roberts
Paul Rubin  wrote:
>
>Tim Roberts: reasons to want to do this might involve a shared host
>where excessive cpu usage affects other users;

That's what priorities are for.

>...or a computer with
>limited power consumption, where prolonged high cpu activity causes
>thermal or other problems.

OK, I grant that.  However, statistically speaking, it is much more likely
that the OP merely has a misunderstanding.
-- 
Tim Roberts, t...@probo.com
Providenza & Boekelheide, Inc.


Re: How to limit CPU usage in Python

2012-09-22 Thread Dwight Hutto
Now also, just thinking theoretically with the knowledge I have,
you could underclock (as opposed to overclocking, which is what gamers
do), but I have never seen that option in BIOS.

And maybe there is an option in your OS; a Google search for 'limiting
processes activity cpu usage' gives:

https://www.google.com/search?client=ubuntu&channel=fs&q=limiting+processes+activity+cpu+usage&ie=utf-8&oe=utf-8

This seemed good for what you want from a brief overview:

http://www.cyberciti.biz/faq/cpu-usage-limiter-for-linux/


Best Regards,
David Hutto
CEO: http://www.hitwebdevelopment.com


Re: How to limit CPU usage in Python

2012-09-22 Thread Dwight Hutto
Paul Rubin writes:
>> Is it possible for me to put a limit in the amount of processor usage
>> (% CPU) that my current python script is using? Is there any module
>> useful for this task?
>
> One way is check your cpu usage once in a while, compare with elapsed
> time, and if your % usage is above what you want, sleep for a suitable
> interval before proceeding.
>

If the script has a constant runtime, unless it's relative to other
processes, it could be throttled with a sleep of a percentage-based
constant length.

If the script is constantly running the same processing, and the OP
wants to limit it statistically, then at a critical portion the
script could sleep for a constant or, perhaps, dynamically computed
interval.

The only other option is to create an app, disassemble it, and then
refine the instructions being used at the assembly level, but I'm just
scratching the surface of those enhancements.



-- 
Best Regards,
David Hutto
CEO: http://www.hitwebdevelopment.com


Re: How to limit CPU usage in Python

2012-09-22 Thread Paul Rubin
Rolando Cañer Roblejo  writes:
> Is it possible for me to put a limit in the amount of processor usage
> (% CPU) that my current python script is using? Is there any module
> useful for this task? 

One way is check your cpu usage once in a while, compare with elapsed
time, and if your % usage is above what you want, sleep for a suitable
interval before proceeding.

Tim Roberts: reasons to want to do this might involve a shared host
where excessive cpu usage affects other users; or a computer with
limited power consumption, where prolonged high cpu activity causes
thermal or other problems.


Re: How to limit CPU usage in Python

2012-09-22 Thread Tim Roberts
Rolando Cañer Roblejo  wrote:
>
>Is it possible for me to put a limit in the amount of processor usage (% 
>CPU) that my current python script is using?

Why?  That's an odd request.  It's natural to want to reduce your priority
if you want other processes handled first, but an idle CPU is a wasted
resource.  You want it to be busy all of the time.

>Some people recommend to use nice and cpulimit unix 
>tools, but those are external to python and I prefer a python solution. 

Scheduling and CPU priority are, by their very nature, operating system
concepts.  You will not find generic mechanisms wrapping them.
-- 
Tim Roberts, t...@probo.com
Providenza & Boekelheide, Inc.


Re: How to limit CPU usage in Python

2012-09-21 Thread Ramchandra Apte

On Saturday, 22 September 2012 05:14:15 UTC+5:30, Cameron Simpson wrote:
> On 20Sep2012 12:53, Terry Reedy  wrote:
> | On 9/20/2012 12:46 PM, Terry Reedy wrote:
> | > On 9/20/2012 11:12 AM, Rolando Cañer Roblejo wrote:
> | >> Is it possible for me to put a limit in the amount of processor usage (%
> | >> CPU) that my current python script is using? Is there any module useful
> | >> for this task? I saw Resource module but I think it is not the module I
> | >> am looking for. Some people recommend to use nice and cpulimit unix
> | >> tools, but those are external to python and I prefer a python solution.
> | >> I am working with Linux (Ubuntu 10.04).
> | >
> | > Call the external tools with subprocess.Popen.
> | 
> | I meant to end that with ? as I don't know how easy it is to get the
> | external id of the calling process that is to be limited. I presume that
> | can be done by first calling ps (with subprocess) and searching the
> | piped-back output.
> 
> If you're limiting yourself, os.getpid().
> -- 
> Cameron Simpson 

You could use os.times to compute the CPU usage, stop the process with a signal
when the usage goes over the limit, and then resume it after some time.
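
The measurement half of that suggestion might look like this (POSIX-flavoured sketch; on Windows os.times() reports elapsed as 0, so this assumes Unix):

```python
import os
import time

def cpu_fraction(interval=0.2):
    """Fraction of one CPU this process used over `interval` seconds."""
    before = os.times()
    time.sleep(interval)
    after = os.times()
    busy = (after.user - before.user) + (after.system - before.system)
    return busy / (after.elapsed - before.elapsed)
```

A supervising process could then SIGSTOP the worker when the fraction runs too high and SIGCONT it later, which is essentially what cpulimit does.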


Re: How to limit CPU usage in Python

2012-09-21 Thread Cameron Simpson
On 20Sep2012 12:53, Terry Reedy  wrote:
| On 9/20/2012 12:46 PM, Terry Reedy wrote:
| > On 9/20/2012 11:12 AM, Rolando Cañer Roblejo wrote:
| >> Is it possible for me to put a limit in the amount of processor usage (%
| >> CPU) that my current python script is using? Is there any module useful
| >> for this task? I saw Resource module but I think it is not the module I
| >> am looking for. Some people recommend to use nice and cpulimit unix
| >> tools, but those are external to python and I prefer a python solution.
| >> I am working with Linux (Ubuntu 10.04).
| >
| > Call the external tools with subprocess.Popen.
| 
| I meant to end that with ? as I don't know how easy it is to get the 
| external id of the calling process that is to be limited. I presume that 
| can be done by first calling ps (with subprocess) and searching the 
| piped-back output.

If you're limiting yourself, os.getpid().
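
Putting those two together, a script could place itself under the external cpulimit tool without any ps parsing. This is a sketch that assumes cpulimit is installed; its -l flag caps the CPU percentage and -p names the target pid.

```python
import os
import shutil
import subprocess

# Build the command with our own pid; no need to search ps output.
cmd = ["cpulimit", "-l", "50", "-p", str(os.getpid())]

if shutil.which("cpulimit"):
    # The limiter runs alongside us, pausing/resuming this process
    # with signals to keep it near 50% of one core.
    limiter = subprocess.Popen(cmd)
else:
    limiter = None  # tool not installed; sketch only
```
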
-- 
Cameron Simpson 


Re: How to limit CPU usage in Python

2012-09-20 Thread Christian Heimes
On 20.09.2012 17:12, Rolando Cañer Roblejo wrote:
> Hi all,
> 
> Is it possible for me to put a limit in the amount of processor usage (%
> CPU) that my current python script is using? Is there any module useful
> for this task? I saw Resource module but I think it is not the module I
> am looking for. Some people recommend to use nice and cpulimit unix
> tools, but those are external to python and I prefer a python solution.
> I am working with Linux (Ubuntu 10.04).

Hello,

You have two options here. You can either limit the total amount of CPU
seconds with the resource module or reduce the scheduling priority of
the process.

The resource module is a wrapper around the setrlimit and getrlimit
feature as described in http://linux.die.net/man/2/setrlimit .

The scheduling priority can be altered with nice, get/setpriority or io
priority. The psutil package http://code.google.com/p/psutil/ wraps all
functions in a nice Python API.
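
A minimal sketch of the resource-module route (Unix-only; the 60-second cap is an arbitrary example). Note that this limits *total* CPU seconds, not a percentage: the kernel sends SIGXCPU when the soft limit is crossed.

```python
import resource
import signal

soft, hard = resource.getrlimit(resource.RLIMIT_CPU)
# Cap this process at 60 CPU-seconds (soft limit); hard limit untouched.
resource.setrlimit(resource.RLIMIT_CPU, (60, hard))

def on_xcpu(signum, frame):
    # Give the program a chance to wind down cleanly.
    raise SystemExit("CPU-time limit reached")

signal.signal(signal.SIGXCPU, on_xcpu)
```
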

Regards
Christian





Re: How to limit CPU usage in Python

2012-09-20 Thread Jerry Hill
On Thu, Sep 20, 2012 at 11:12 AM, Rolando Cañer Roblejo
 wrote:
> Hi all,
>
> Is it possible for me to put a limit in the amount of processor usage (%
> CPU) that my current python script is using? Is there any module useful for
> this task? I saw Resource module but I think it is not the module I am
> looking for. Some people recommend to use nice and cpulimit unix tools, but
> those are external to python and I prefer a python solution. I am working
> with Linux (Ubuntu 10.04).

Maximum percentage of CPU used isn't normally something you control.
The only way I know of to do it involves having another process
monitor the thing you want to control and sending signals to stop and
start it (e.g., http://cpulimit.sourceforge.net/).

Typically, you instead want to control the priority (so that higher
priority apps can easily take more CPU time).  That's what nice is for
(http://docs.python.org/library/os.html#os.nice).  If you want to
limit a process in the same way that ulimit does, then the resource
module is what you want
(http://docs.python.org/library/resource.html#resource.setrlimit).

Is there a particular reason that you'd rather have your CPU sitting
idle, rather than continuing with whatever code is waiting to be run?
I'm having a hard time understanding what problem you might be having
that some combination of setting the nice level and imposing resource
limits won't handle.

-- 
Jerry


Re: How to limit CPU usage in Python

2012-09-20 Thread Terry Reedy

On 9/20/2012 12:46 PM, Terry Reedy wrote:

On 9/20/2012 11:12 AM, Rolando Cañer Roblejo wrote:

Hi all,

Is it possible for me to put a limit in the amount of processor usage (%
CPU) that my current python script is using? Is there any module useful
for this task? I saw Resource module but I think it is not the module I
am looking for. Some people recommend to use nice and cpulimit unix
tools, but those are external to python and I prefer a python solution.
I am working with Linux (Ubuntu 10.04).


Call the external tools with subprocess.Popen.


I meant to end that with ? as I don't know how easy it is to get the 
external id of the calling process that is to be limited. I presume that 
can be done by first calling ps (with subprocess) and searching the 
piped-back output.



--
Terry Jan Reedy




Re: How to limit CPU usage in Python

2012-09-20 Thread Terry Reedy

On 9/20/2012 11:12 AM, Rolando Cañer Roblejo wrote:

Hi all,

Is it possible for me to put a limit in the amount of processor usage (%
CPU) that my current python script is using? Is there any module useful
for this task? I saw Resource module but I think it is not the module I
am looking for. Some people recommend to use nice and cpulimit unix
tools, but those are external to python and I prefer a python solution.
I am working with Linux (Ubuntu 10.04).


Call the external tools with subprocess.Popen.

--
Terry Jan Reedy




How to limit CPU usage in Python

2012-09-20 Thread Rolando Cañer Roblejo

Hi all,

Is it possible for me to put a limit in the amount of processor usage (% 
CPU) that my current python script is using? Is there any module useful 
for this task? I saw Resource module but I think it is not the module I 
am looking for. Some people recommend to use nice and cpulimit unix 
tools, but those are external to python and I prefer a python solution. 
I am working with Linux (Ubuntu 10.04).


Best regards.


python CPU usage 99% on ubuntu aws instance using eventlet

2012-02-02 Thread Teddy Toyama
Okay, I am crossposting this from the eventlet dev mailing list since I am
in urgent need of some help.

I am running eventlet 0.9.16 on a Small (not micro) reserved ubuntu
11.10 aws instance.

I have a socketserver that is similar to the echo server from the examples
in the eventlet documentation. When I first start running the code,
everything seems fine, but I have been noticing that after 10 or 15 hours
the cpu usage goes from about 1% to 99+%. At that point I am unable to make
further connections to the socketserver.

This is the important (hopefully) parts of the code that I'm running:


# the part of the code that listens for incoming connections
def socket_listener(self, port, socket_type):
    L.LOGG(self._CONN, 0, H.func(), 'Action:Starting|SocketType:%s' %
           socket_type)
    listener = eventlet.listen((self._host, port))
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    pool = eventlet.GreenPool(2)
    while True:
        connection, address = listener.accept()
        connection.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        """ I want this loop to run as fast as possible.

        I previously grabbed the first message that a plug/device sent here
        and used that information to add a new object to the socket_hash.
        Instead of doing that here I've relocated that logic to the spawned
        object so that this loop is doing as little work as possible.
        """
        L.LOGG(self._CONN, 0, H.func(),
               'IPAddress:%s|GreenthreadsFree:%s|GreenthreadsRunning:%s' %
               (str(address[0]), str(pool.free()), str(pool.running())))
        pool.spawn_n(self.spawn_socketobject, connection, address,
                     socket_type)
    listener.shutdown(socket.SHUT_RDWR)
    listener.close()

The L.LOGG method simply logs the supplied parameters to a mysql table.

I am running the socket_listener in a thread like so:


def listen_phones(self):
    self.socket_listener(self._port_phone, 'phone')

t_phones = Thread(target=self.listen_phones)
t_phones.start()


From my initial google searches I thought the issue might be similar to the
bug reported at
https://lists.secondlife.com/pipermail/eventletdev/2008-October/000140.html
but I am using a new version of eventlet so surely that cannot be it?

Is there any additional information I can provide to help further
troubleshoot the issue?

Teddy



Re: control CPU usage

2009-09-20 Thread Dave Angel

kakarukeys wrote:

On Sep 20, 6:24 pm, Dave Angel  wrote:
  

Jiang Fung Wong wrote:


Dear All,
  
Thank you for the information. I think I've some idea what the problem is

about after seeing the replies.
  
More information about my system and my script
  
PIII 1Ghz, 512MB RAM, Windows XP SP3
  
The script monitors global input using PyHook,

and calculates on the information collected from the events to output some
numbers. Based on the numbers, the script then performs some automation
using SendKeys module.
  
here is the memory usage:

firefox.exe, 69MB, 109MB
svchost.exe, 26MB, 17MB
pythonw.exe, 22MB, 17MB
searchindexer.exe, 16MB, 19MB
  
My first guess is that the script calculated for too long time after

receiving an event before propagating it to the default handler, resulting
the system to be non-responsive. I will try to implement the calculation
part in another thread.
Then the separate will have 100% CPU usage, hope the task scheduling of
Windows works in my favour.
  

(You top-posted this message, putting the whole stream out of order.  So
I deleted the history.)

All my assumptions about your environment are now invalid.  You don't
have a CPU-bound application, you have a Windows application with event
loop.  Further, you're using SendKeys to generate a keystroke to the
other process.  So there are many things that could be affecting your
latency, and all my previous guesses are useless.

Adding threads to your application will probably slow the system down
much more.  You need to find out what your present problem is before
complicating it.

You haven't really described the problem.  You say the system is
unresponsive, but you made it that way by creating a global hook;  a
notoriously inefficient mechanism.  That global hook inserts code into
every process in the system, and you've got a pretty low-end environment
to begin with.  So what's the real problem, and how severe is it?  And
how will you measure improvement?  The Task manager numbers are probably
irrelevant.

My first question is whether the pyHook event is calling the SendKeys
function directly (after your "lengthy" calculation) or whether there
are other events firing off  in between.  If it's all being done in the
one event, then measure its time, and gather some statistics (min time,
max time, average...).  The task manager has far too simplistic
visibility to be useful for this purpose.

What else is this application doing when it's waiting for a pyHook
call?  Whose event loop implementation are you using?  And the program
you're trying to control -- is there perhaps another way in?

DaveA



Hi,

Sorry I wasn't sure how to use Google groups to post a msg to the
newsgroup, I used Gmail to write my previous reply. What you and the
other guy have provided me isn't useless. Now I understand the non-
responsiveness may not be caused by high CPU usage, as the OS, be it
Windows or Linux, has a way to prioritize the tasks. This is a vital
clue to me.

By "not responsive", I mean, for some time, the mouse pointer is not
moving smoothly, to such extent that I can't do anything with the
mouse. It's like playing a multi-player game on a connection with a
lot of lag. It's not caused by global hook, because it happens under
certain condition, i.e. when fpa.ProcessEvent(word) is computing.

I included my main script for your reference. Comments:
(1) The automation method tc.Auto() is slow, but it doesn't cause any
problem, because the user would wait for the automation to finish,
before he continues to do something.

(2) all other methods invoked are fast, except fpa.ProcessEvent(word)
(this information is obtained from profiling). It is this method that
causes 100% CPU usage. I'm planning to move this method to a separate
thread, so that OnEvent(event) can finish executing, while the
separate thread goes on to finish its calculation. Is this a good
idea?

import pyHook
import TypingAnalyzer
import GUI

def OnEvent(event):
    if (hasattr(event, "Key") and event.Ascii == 9 and event.Key == "Tab"
            and event.Injected == 0 and event.Alt == 0):
        tc.Auto()
        return False
    else:
        recognized = rk.ProcessEvent(event)
        if recognized:
            tc.MatchChar(recognized)
            paragraph = rc.ProcessEvent(recognized)
            if paragraph:
                for word in paragraph:
                    fpa.ProcessEvent(word)

    return True

hm = pyHook.HookManager()
hm.MouseAllButtonsDown = OnEvent
hm.KeyDown = OnEvent
hm.HookMouse()
hm.HookKeyboard()

rk = TypingAnalyzer.ReadKey()
rc = TypingAnalyzer.ReadChar()
fpa = TypingAnalyzer.Analysis()
tc = TypingAnalyzer.Automation(fpa)

if __name__ == '__main__':
    app = GUI.AW

Re: control CPU usage

2009-09-20 Thread kakarukeys
On Sep 20, 6:24 pm, Dave Angel  wrote:
> Jiang Fung Wong wrote:
> > Dear All,
>
> > Thank you for the information. I think I've some idea what the problem is
> > about after seeing the replies.
>
> > More information about my system and my script
>
> > PIII 1Ghz, 512MB RAM, Windows XP SP3
>
> > The script monitors global input using PyHook,
> > and calculates on the information collected from the events to output some
> > numbers. Based on the numbers, the script then performs some automation
> > using SendKeys module.
>
> > here is the memory usage:
> > firefox.exe, 69MB, 109MB
> > svchost.exe, 26MB, 17MB
> > pythonw.exe, 22MB, 17MB
> > searchindexer.exe, 16MB, 19MB
>
> > My first guess is that the script calculated for too long time after
> > receiving an event before propagating it to the default handler, resulting
> > the system to be non-responsive. I will try to implement the calculation
> > part in another thread.
> > Then the separate will have 100% CPU usage, hope the task scheduling of
> > Windows works in my favour.
>
> (You top-posted this message, putting the whole stream out of order.  So
> I deleted the history.)
>
> All my assumptions about your environment are now invalid.  You don't
> have a CPU-bound application, you have a Windows application with event
> loop.  Further, you're using SendKeys to generate a keystroke to the
> other process.  So there are many things that could be affecting your
> latency, and all my previous guesses are useless.
>
> Adding threads to your application will probably slow the system down
> much more.  You need to find out what your present problem is before
> complicating it.
>
> You haven't really described the problem.  You say the system is
> unresponsive, but you made it that way by creating a global hook;  a
> notoriously inefficient mechanism.  That global hook inserts code into
> every process in the system, and you've got a pretty low-end environment
> to begin with.  So what's the real problem, and how severe is it?  And
> how will you measure improvement?  The Task manager numbers are probably
> irrelevant.
>
> My first question is whether the pyHook event is calling the SendKeys
> function directly (after your "lengthy" calculation) or whether there
> are other events firing off  in between.  If it's all being done in the
> one event, then measure its time, and gather some statistics (min time,
> max time, average...).  The task manager has far too simplistic
> visibility to be useful for this purpose.
>
> What else is this application doing when it's waiting for a pyHook
> call?  Whose event loop implementation are you using?  And the program
> you're trying to control -- is there perhaps another way in?
>
> DaveA

Hi,

Sorry I wasn't sure how to use Google groups to post a msg to the
newsgroup, I used Gmail to write my previous reply. What you and the
other guy have provided me isn't useless. Now I understand the non-
responsiveness may not be caused by high CPU usage, as the OS, be it
Windows or Linux, has a way to prioritize the tasks. This is a vital
clue to me.

By "not responsive", I mean that, for some time, the mouse pointer is not
moving smoothly, to such an extent that I can't do anything with the
mouse. It's like playing a multi-player game on a connection with a
lot of lag. It's not caused by the global hook, because it happens only
under a certain condition, i.e. when fpa.ProcessEvent(word) is computing.

I've included my main script for your reference. Comments:
(1) The automation method tc.Auto() is slow, but it doesn't cause any
problems, because the user would wait for the automation to finish
before he continues to do something.

(2) All other methods invoked are fast, except fpa.ProcessEvent(word)
(this information is obtained from profiling). It is this method that
causes 100% CPU usage. I'm planning to move this method to a separate
thread, so that OnEvent(event) can finish executing, while the
separate thread goes on to finish its calculation. Is this a good
idea?

import pyHook
import TypingAnalyzer
import GUI

def OnEvent(event):
    if (hasattr(event, "Key") and event.Ascii == 9 and event.Key == "Tab"
            and event.Injected == 0 and event.Alt == 0):
        tc.Auto()
        return False
    else:
        recognized = rk.ProcessEvent(event)
        if recognized:
            tc.MatchChar(recognized)
            paragraph = rc.ProcessEvent(recognized)
            if paragraph:
                for word in paragraph:
                    fpa.ProcessEvent(word)

    return True

Re: control CPU usage

2009-09-20 Thread Dave Angel

Jiang Fung Wong wrote:

Dear All,

Thank you for the information. I think I've some idea what the problem is
about after seeing the replies.

More information about my system and my script

PIII 1Ghz, 512MB RAM, Windows XP SP3

The script monitors global input using PyHook,
and calculates on the information collected from the events to output some
numbers. Based on the numbers, the script then performs some automation
using SendKeys module.

here is the memory usage:
firefox.exe, 69MB, 109MB
svchost.exe, 26MB, 17MB
pythonw.exe, 22MB, 17MB
searchindexer.exe, 16MB, 19MB

My first guess is that the script calculated for too long after
receiving an event before propagating it to the default handler, making
the system non-responsive. I will try to implement the calculation
part in another thread.
Then the separate thread will have 100% CPU usage; I hope the task
scheduling of Windows works in my favour.

  
(You top-posted this message, putting the whole stream out of order.  So 
I deleted the history.)


All my assumptions about your environment are now invalid.  You don't 
have a CPU-bound application, you have a Windows application with event 
loop.  Further, you're using SendKeys to generate a keystroke to the 
other process.  So there are many things that could be affecting your 
latency, and all my previous guesses are useless.


Adding threads to your application will probably slow the system down 
much more.  You need to find out what your present problem is before 
complicating it.


You haven't really described the problem.  You say the system is 
unresponsive, but you made it that way by creating a global hook;  a 
notoriously inefficient mechanism.  That global hook inserts code into 
every process in the system, and you've got a pretty low-end environment 
to begin with.  So what's the real problem, and how severe is it?  And 
how will you measure improvement?  The Task manager numbers are probably 
irrelevant.


My first question is whether the pyHook event is calling the SendKeys 
function directly (after your "lengthy" calculation) or whether there 
are other events firing off  in between.  If it's all being done in the 
one event, then measure its time, and gather some statistics (min time, 
max time, average...).  The task manager has far too simplistic 
visibility to be useful for this purpose.
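Dave's suggestion — measure the handler's time and gather min/max/average statistics rather than watching Task Manager — can be sketched with a small wrapper (Python 3 syntax shown; the class and names are illustrative, not from the thread):

```python
import time

class TimedCall:
    """Wrap a callable and record elapsed wall time per invocation."""
    def __init__(self, fn):
        self.fn = fn
        self.times = []

    def __call__(self, *args, **kwargs):
        t0 = time.perf_counter()
        try:
            return self.fn(*args, **kwargs)
        finally:
            self.times.append(time.perf_counter() - t0)

    def stats(self):
        n = len(self.times)
        return {"calls": n,
                "min": min(self.times),
                "max": max(self.times),
                "avg": sum(self.times) / n}

# e.g. fpa.ProcessEvent = TimedCall(fpa.ProcessEvent), then inspect
# fpa.ProcessEvent.stats() periodically.
timed = TimedCall(lambda x: x * 2)
for i in range(5):
    timed(i)
print(timed.stats()["calls"])
```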


What else is this application doing when it's waiting for a pyHook 
call?  Whose event loop implementation are you using?  And the program 
you're trying to control -- is there perhaps another way in?


DaveA

--
http://mail.python.org/mailman/listinfo/python-list


Re: control CPU usage

2009-09-19 Thread Jiang Fung Wong
Dear All,

Thank you for the information. I think I've some idea what the problem is
about after seeing the replies.

More information about my system and my script

PIII 1Ghz, 512MB RAM, Windows XP SP3

The script monitors global input using PyHook,
and calculates on the information collected from the events to output some
numbers. Based on the numbers, the script then performs some automation
using SendKeys module.

here is the memory usage:
firefox.exe, 69MB, 109MB
svchost.exe, 26MB, 17MB
pythonw.exe, 22MB, 17MB
searchindexer.exe, 16MB, 19MB

My first guess is that the script calculated for too long after
receiving an event before propagating it to the default handler, making
the system non-responsive. I will try to implement the calculation
part in another thread.
Then the separate thread will have 100% CPU usage; I hope the task
scheduling of Windows works in my favour.

On Sun, Sep 20, 2009 at 5:22 AM, Dave Angel  wrote:

> kakarukeys wrote:
>
>> Hi,
>>
>> When I am running a loop for a long time, calculating heavily, the CPU
>> usage
>> is at 100%, making the comp not so responsive. Is there a way to
>> control the
>> CPU usage at say 80%? putting a time.sleep(0.x) doesn't seem to help
>> although CPU usage level is reduced, but it's unstable.
>>
>> Regards,
>> W.J.F.
>>
>>
>>
> Controlling a task's scheduling is most definitely OS-dependent., so you
> need to say what OS you're running on.  And whether it's a multi-core and or
> duo processor.
>
> In Windows, there is a generic way to tell the system that you want to give
> a boost to whatever task has the user focus (generally the top-window on the
> desktop).  On some versions, that's the default, on others, it's not.  You
> change it from Control Panel.  I'd have to go look to tell you what applet,
> but I don't even know if you're on Windows.
>
> In addition, a program can adjust its own priority, much the way the Unix
> 'nice' command works.  You'd use the Win32 library for that.
>
> And as you already tried, you can add sleep() operations to your
> application.
>
> But if you're looking at the task list in the Windows Task Manager, you
> aren't necessarily going to see what you apparently want.  There's no way to
> programmatically tell the system to use a certain percentage for a given
> task.  If there's nothing else to do, then a low priority task is still
> going to get nearly 100% of the CPU.  Good thing.  But even if there are
> other things to do, the scheduling is a complex interaction between what
> kinds of work the various processes have been doing lately, how much memory
> load they have, and what priority they're assigned.
>
> If you just want other processes to be "responsive" when they've got the
> focus, you may want to make that global setting.  But you may need to better
> define "responsive" and "unstable."
>
> DaveA
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: control CPU usage

2009-09-19 Thread Dave Angel

kakarukeys wrote:

Hi,

When I am running a loop for a long time, calculating heavily, the CPU
usage is at 100%, making the computer not so responsive. Is there a way to
hold the CPU usage at, say, 80%? Putting in a time.sleep(0.x) doesn't seem
to help; although the CPU usage level is reduced, it's unstable.

Regards,
W.J.F.

  
Controlling a task's scheduling is most definitely OS-dependent, so you 
need to say what OS you're running on.  And whether it's a multi-core 
and/or dual processor.


In Windows, there is a generic way to tell the system that you want to 
give a boost to whatever task has the user focus (generally the 
top-window on the desktop).  On some versions, that's the default, on 
others, it's not.  You change it from Control Panel.  I'd have to go 
look to tell you what applet, but I don't even know if you're on Windows.


In addition, a program can adjust its own priority, much the way the 
Unix 'nice' command works.  You'd use the Win32 library for that.


And as you already tried, you can add sleep() operations to your 
application.


But if you're looking at the task list in the Windows Task Manager, you 
aren't necessarily going to see what you apparently want.  There's no 
way to programmatically tell the system to use a certain percentage for 
a given task.  If there's nothing else to do, then a low priority task 
is still going to get nearly 100% of the CPU.  Good thing.  But even if 
there are other things to do, the scheduling is a complex interaction 
between what kinds of work the various processes have been doing lately, 
how much memory load they have, and what priority they're assigned.


If you just want other processes to be "responsive" when they've got the 
focus, you may want to make that global setting.  But you may need to 
better define "responsive" and "unstable."


DaveA
--
http://mail.python.org/mailman/listinfo/python-list


Re: control CPU usage

2009-09-19 Thread Sean DiZazzo
On Sep 19, 9:17 am, kakarukeys  wrote:
> Hi,
>
> When I am running a loop for a long time, calculating heavily, the CPU
> usage
> is at 100%, making the comp not so responsive. Is there a way to
> control the
> CPU usage at say 80%? putting a time.sleep(0.x) doesn't seem to help
> although CPU usage level is reduced, but it's unstable.
>
> Regards,
> W.J.F.

If you are on linux, you can use the 'nice' command.  It will still
take 100%, but will give it up to other processes that need to use the
cpu.

nice -n 19 
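The same effect is available from inside the script via os.nice() on POSIX systems; an unprivileged process can only raise its niceness (i.e. lower its own priority):

```python
import os

# os.nice(increment) adjusts this process's niceness and returns the
# new value; an increment of 0 just reads the current value.
# Only root may pass a negative increment.
current = os.nice(0)
lowered = os.nice(5)   # politely yield the CPU to other processes
print(current, lowered)
```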

control CPU usage

2009-09-19 Thread kakarukeys
Hi,

When I am running a loop for a long time, calculating heavily, the CPU
usage is at 100%, making the computer not so responsive. Is there a way to
hold the CPU usage at, say, 80%? Putting in a time.sleep(0.x) doesn't seem
to help; although the CPU usage level is reduced, it's unstable.

Regards,
W.J.F.
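One way to make the sleep-based approach less erratic than a bare time.sleep(0.x) is to throttle on a fixed duty cycle: work for a fraction of a short window, then sleep the remainder. A sketch (the 80% figure and window size are illustrative, not a guarantee of what the OS will report):

```python
import time

def throttled_loop(steps, duty=0.8, window=0.05):
    """Do `steps` units of work, staying busy for at most `duty` of
    each `window`-second slice and sleeping the rest."""
    done = 0
    while done < steps:
        busy_until = time.perf_counter() + window * duty
        while done < steps and time.perf_counter() < busy_until:
            done += 1                      # placeholder for one unit of work
        time.sleep(window * (1.0 - duty))  # give the CPU back
    return done

print(throttled_loop(1000))
```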
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: CPU usage while reading a named pipe

2009-09-13 Thread Nick Craig-Wood
Miguel P  wrote:
>  On Sep 12, 2:54 pm, Ned Deily  wrote:
> > In article
> > ,
> >  Miguel P  wrote:
> > > I've been working on parsing (tailing) a named pipe which is the
> > > syslog output of the traffic for a rather busy haproxy instance. It's
> > > a fair bit of traffic (upto 3k hits/s per server), but I am finding
> > > that simply tailing the file  in python, without any processing, is
> > > taking up 15% of a CPU core. In contrast HAProxy takes 25% and syslogd
> > > takes 5% with the same load. `cat < /named.pipe` takes 0-2%
> >
> > > Am I just doing things horribly wrong or is this normal?
> >
> > > Here is my code:
> >
> > > from collections import deque
> > > import io, sys
> >
> > > WATCHED_PIPE = '/var/log/haproxy.pipe'
> >
> > > if __name__ == '__main__':
> > >     try:
> > >         log_pool = deque([],1)
> > >         fd = io.open(WATCHED_PIPE)
> > >         for line in fd:
> > >             log_pool.append(line)
> > >     except KeyboardInterrupt:
> > >         sys.exit()
> >
> > > Deque appends are O(1) so that's not it. And I am using 2.6's io
> > > module because it's supposed to handle named pipes better. I have
> > > commented the deque appending line and it still takes about the same
> > > CPU.
> >
> > Be aware that the io module in Python 2.6 is written in Python and was
> > viewed as a prototype.  In the current svn trunk, what will be Python
> > 2.7 has a much faster C implementation of the io module backported from
> > Python 3.1.
> 
>  Aha, I will test with trunk and see if the performance is better, if
>  so I'll use 2.6 in production until 2.7 comes out. I will report back
>  when I have made the tests.

Why don't you try just using the builtin open() with the bufsize
parameter set big?

Something like this (tested with named pipes).  Tweak BUFFERSIZE and
SLEEP_INTERVAL for maximum performance!


import time

BUFFERSIZE = 1024*1024
SLEEP_INTERVAL = 0.1

def tail(path):
    fd = open(path)
    buf = ""
    while True:
        buf += fd.read(BUFFERSIZE)
        if buf:
            lines = buf.splitlines(True)
            for line in lines[:-1]:
                yield line
            buf = lines[-1]
            if buf.endswith("\n"):
                yield buf
                buf = ""
        else:
            time.sleep(SLEEP_INTERVAL)


def main(path):
    for line in tail(path):
        print "%r:%r" % (len(line), line)

if __name__ == "__main__":
    import sys
    main(sys.argv[1])
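On POSIX you can also avoid the buffer-size/sleep-interval tradeoff entirely by blocking in select() until the pipe has data, so idle periods cost no CPU at all. A sketch using raw file descriptors (Python 3 syntax; not from the original thread, and untested against the haproxy workload):

```python
import os
import select

def iter_lines(fd, bufsize=65536):
    """Yield complete lines from descriptor `fd`, blocking in select()
    instead of sleeping, so idle periods consume no CPU."""
    buf = b""
    while True:
        select.select([fd], [], [])   # wake only when data (or EOF) arrives
        chunk = os.read(fd, bufsize)
        if not chunk:                 # writer closed the pipe
            if buf:
                yield buf
            return
        buf += chunk
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            yield line + b"\n"

# demo with an anonymous pipe standing in for the named one
r, w = os.pipe()
os.write(w, b"alpha\nbeta\n")
os.close(w)
lines = list(iter_lines(r))
os.close(r)
print(lines)
```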


-- 
Nick Craig-Wood  -- http://www.craig-wood.com/nick
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: CPU usage while reading a named pipe

2009-09-12 Thread Miguel P
On Sep 12, 2:54 pm, Ned Deily  wrote:
> In article
> ,
>  Miguel P  wrote:
>
>
>
> > I've been working on parsing (tailing) a named pipe which is the
> > syslog output of the traffic for a rather busy haproxy instance. It's
> > a fair bit of traffic (upto 3k hits/s per server), but I am finding
> > that simply tailing the file  in python, without any processing, is
> > taking up 15% of a CPU core. In contrast HAProxy takes 25% and syslogd
> > takes 5% with the same load. `cat < /named.pipe` takes 0-2%
>
> > Am I just doing things horribly wrong or is this normal?
>
> > Here is my code:
>
> > from collections import deque
> > import io, sys
>
> > WATCHED_PIPE = '/var/log/haproxy.pipe'
>
> > if __name__ == '__main__':
> >     try:
> >         log_pool = deque([],1)
> >         fd = io.open(WATCHED_PIPE)
> >         for line in fd:
> >             log_pool.append(line)
> >     except KeyboardInterrupt:
> >         sys.exit()
>
> > Deque appends are O(1) so that's not it. And I am using 2.6's io
> > module because it's supposed to handle named pipes better. I have
> > commented the deque appending line and it still takes about the same
> > CPU.
>
> Be aware that the io module in Python 2.6 is written in Python and was
> viewed as a prototype.  In the current svn trunk, what will be Python
> 2.7 has a much faster C implementation of the io module backported from
> Python 3.1.
>
> --
>  Ned Deily,
>  n...@acm.org

Aha, I will test with trunk and see if the performance is better, if
so I'll use 2.6 in production until 2.7 comes out. I will report back
when I have made the tests.

Thanks,
Miguel Pilar
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: CPU usage while reading a named pipe

2009-09-12 Thread Ned Deily
In article 
,
 Miguel P  wrote:
> I've been working on parsing (tailing) a named pipe which is the
> syslog output of the traffic for a rather busy haproxy instance. It's
> a fair bit of traffic (upto 3k hits/s per server), but I am finding
> that simply tailing the file  in python, without any processing, is
> taking up 15% of a CPU core. In contrast HAProxy takes 25% and syslogd
> takes 5% with the same load. `cat < /named.pipe` takes 0-2%
> 
> Am I just doing things horribly wrong or is this normal?
> 
> Here is my code:
> 
> from collections import deque
> import io, sys
> 
> WATCHED_PIPE = '/var/log/haproxy.pipe'
> 
> if __name__ == '__main__':
>     try:
>         log_pool = deque([],1)
>         fd = io.open(WATCHED_PIPE)
>         for line in fd:
>             log_pool.append(line)
>     except KeyboardInterrupt:
>         sys.exit()
> 
> Deque appends are O(1) so that's not it. And I am using 2.6's io
> module because it's supposed to handle named pipes better. I have
> commented the deque appending line and it still takes about the same
> CPU.

Be aware that the io module in Python 2.6 is written in Python and was 
viewed as a prototype.  In the current svn trunk, what will be Python 
2.7 has a much faster C implementation of the io module backported from 
Python 3.1.

-- 
 Ned Deily,
 n...@acm.org

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: CPU usage while reading a named pipe

2009-09-12 Thread MRAB

Miguel P wrote:

Hey everyone,

I've been working on parsing (tailing) a named pipe which is the
syslog output of the traffic for a rather busy haproxy instance. It's
a fair bit of traffic (upto 3k hits/s per server), but I am finding
that simply tailing the file  in python, without any processing, is
taking up 15% of a CPU core. In contrast HAProxy takes 25% and syslogd
takes 5% with the same load. `cat < /named.pipe` takes 0-2%

Am I just doing things horribly wrong or is this normal?

Here is my code:

from collections import deque
import io, sys

WATCHED_PIPE = '/var/log/haproxy.pipe'

if __name__ == '__main__':
    try:
        log_pool = deque([],1)
        fd = io.open(WATCHED_PIPE)
        for line in fd:
            log_pool.append(line)
    except KeyboardInterrupt:
        sys.exit()

Deque appends are O(1) so that's not it. And I am using 2.6's io
module because it's supposed to handle named pipes better. I have
commented the deque appending line and it still takes about the same
CPU.

The system is running Ubuntu 9.04 with kernel 2.6.28 and ext4 (not
sure the FS is relevant).

Any help bringing down the CPU usage would be really appreciated, and
if it can't be done I guess that's ok too, server has 6 cores not
doing much.


Is this any faster?

log_pool.extend(fd)
--
http://mail.python.org/mailman/listinfo/python-list


CPU usage while reading a named pipe

2009-09-12 Thread Miguel P
Hey everyone,

I've been working on parsing (tailing) a named pipe which is the
syslog output of the traffic for a rather busy haproxy instance. It's
a fair bit of traffic (up to 3k hits/s per server), but I am finding
that simply tailing the file in python, without any processing, is
taking up 15% of a CPU core. In contrast HAProxy takes 25% and syslogd
takes 5% with the same load. `cat < /named.pipe` takes 0-2%

Am I just doing things horribly wrong or is this normal?

Here is my code:

from collections import deque
import io, sys

WATCHED_PIPE = '/var/log/haproxy.pipe'

if __name__ == '__main__':
    try:
        log_pool = deque([],1)
        fd = io.open(WATCHED_PIPE)
        for line in fd:
            log_pool.append(line)
    except KeyboardInterrupt:
        sys.exit()

Deque appends are O(1) so that's not it. And I am using 2.6's io
module because it's supposed to handle named pipes better. I have
commented the deque appending line and it still takes about the same
CPU.

The system is running Ubuntu 9.04 with kernel 2.6.28 and ext4 (not
sure the FS is relevant).

Any help bringing down the CPU usage would be really appreciated, and
if it can't be done I guess that's ok too, server has 6 cores not
doing much.
-- 
http://mail.python.org/mailman/listinfo/python-list


poplib 100% cpu usage

2008-07-16 Thread Oli Schacher

Hi all

I wrote a multithreaded script that polls mail from several pop/imap 
accounts. To fetch the messages I'm using the getmail classes ( 
http://pyropus.ca/software/getmail/ ); those classes use poplib for 
the real pop transaction.


When I run my script for a few hours cpu usage goes up to 100%, 
sometimes even 104% according to 'top' :-) This made our test machine 
freeze once. First I thought maybe I didn't stop my threads correctly 
after polling an account, but I attached a remote debugger and it showed 
that the threads are stopped OK and that the CPU gets eaten in poplib, in 
the function "_getline", whose description states:


---snip---
 # Internal: return one line from the server, stripping CRLF.
# This is where all the CPU time of this module is consumed.
# Raise error_proto('-ERR EOF') if the connection is closed.

def _getline(self):
---snip---


So for testing purposes I changed this function and added:
time.sleep(0.0001)
(googling about similar problems with cpu usage yields this time.sleep() 
trick)


It now looks ok, cpu usage is at about 30% with a few spikes to 80-90%.

Of course I don't feel cozy about changing a standard library as the 
changes will be overwritten by python upgrades.


Did someone else from the list hit a similar problem and maybe has a 
better solution?



Thanks for your hints.

Best regards,
Oli Schacher
--
http://mail.python.org/mailman/listinfo/python-list


Re: finding child cpu usage of a running child

2008-01-28 Thread Matthew_WARREN

Had to say, that subject conjured up an interesting image in my head :)


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: finding child cpu usage of a running child

2008-01-26 Thread Karthik Gurusamy
On Jan 25, 11:59 pm, Paddy <[EMAIL PROTECTED]> wrote:
> On Jan 26, 5:43 am, Karthik Gurusamy <[EMAIL PROTECTED]> wrote:
>
>
>
> > Hi,
>
> > Wondering if there is a way to measure a child process's cpu usage
> > (sys and user) when the child is still running. I see os.times()
> > working fine in my system (Linux 2.6.9-42.7.ELsmp), but it gives valid
> > data only after the child has exited. When the child is alive,
> > os.times() data for child is zero for both child-sys and child-user
> > cpu.
>
> > My script (process P1) launches child process P2 (using
> > popen2.Popen3). P2 is a long running process (big compilation). Every
> > minute or so, from P1, I want to measure how much cpu P2 has consumed
> > and based on that I can make some estimate on the completion time of
> > P2 (I have a rough idea how much total cpu P2 needs to complete).
>
> > I understand it may be too expensive to update this information to the
> > parent process when any of the child/grand-child completes; but
> > wondering if any there is any way to get this info; the expensive
> > operations is on-demand only when the request is made.
>
> > Thanks,
> > Karthik
>
> I had a similar requirement in December and found:
>  http://lilypond.org/~janneke/software/
>
> proc-time.c and proc-time.py poll /proc/ files whilst command
> is running to get stats.

Great, thanks. From proc-time.py looks like all I want are the fields
13 to 16 of /proc//stat. And I see them updated in real
time (probably the kernel does it on a periodic interrupt).

Thanks,
Karthik
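For reference, those counters can be read with a few lines of Python. In proc(5)'s 1-based numbering they are fields 14-17 of /proc/[pid]/stat: utime, stime, cutime and cstime, in clock ticks. A Linux-only sketch (the helper name is mine, not from the thread):

```python
import os

def child_cpu_ticks(pid):
    """Return (utime, stime, cutime, cstime) for `pid`, in clock ticks,
    read live from /proc/<pid>/stat (Linux only)."""
    with open("/proc/%d/stat" % pid) as f:
        data = f.read()
    # Field 2 (the command name) is parenthesised and may contain spaces,
    # so split after the last ')'; rest[0] is then field 3 (state).
    rest = data.rsplit(")", 1)[1].split()
    # utime..cstime are fields 14-17 of the file, i.e. rest[11:15]
    return tuple(int(rest[i]) for i in range(11, 15))

ticks = child_cpu_ticks(os.getpid())
# convert ticks to seconds with the kernel's clock-tick rate
seconds = [t / os.sysconf("SC_CLK_TCK") for t in ticks]
print(len(ticks))
```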

>
> Enjoy,  - Paddy.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: finding child cpu usage of a running child

2008-01-26 Thread Paddy
On Jan 26, 5:43 am, Karthik Gurusamy <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Wondering if there is a way to measure a child process's cpu usage
> (sys and user) when the child is still running. I see os.times()
> working fine in my system (Linux 2.6.9-42.7.ELsmp), but it gives valid
> data only after the child has exited. When the child is alive,
> os.times() data for child is zero for both child-sys and child-user
> cpu.
>
> My script (process P1) launches child process P2 (using
> popen2.Popen3). P2 is a long running process (big compilation). Every
> minute or so, from P1, I want to measure how much cpu P2 has consumed
> and based on that I can make some estimate on the completion time of
> P2 (I have a rough idea how much total cpu P2 needs to complete).
>
> I understand it may be too expensive to update this information to the
> parent process when any of the child/grand-child completes; but
> wondering if any there is any way to get this info; the expensive
> operations is on-demand only when the request is made.
>
> Thanks,
> Karthik

I had a similar requirement in December and found:
  http://lilypond.org/~janneke/software/

proc-time.c and proc-time.py poll /proc/ files whilst command
is running to get stats.

Enjoy,  - Paddy.
-- 
http://mail.python.org/mailman/listinfo/python-list


finding child cpu usage of a running child

2008-01-25 Thread Karthik Gurusamy
Hi,

Wondering if there is a way to measure a child process's cpu usage
(sys and user) when the child is still running. I see os.times()
working fine in my system (Linux 2.6.9-42.7.ELsmp), but it gives valid
data only after the child has exited. When the child is alive,
os.times() data for child is zero for both child-sys and child-user
cpu.

My script (process P1) launches child process P2 (using
popen2.Popen3). P2 is a long running process (big compilation). Every
minute or so, from P1, I want to measure how much cpu P2 has consumed
and based on that I can make some estimate on the completion time of
P2 (I have a rough idea how much total cpu P2 needs to complete).

I understand it may be too expensive to update this information in the
parent process whenever any of the children/grandchildren completes; but
I'm wondering if there is any way to get this info, with the expensive
operation happening on demand, only when the request is made.

Thanks,
Karthik
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: 100% CPU Usage when a tcp client is disconnected

2007-11-22 Thread Scott David Daniels
Aaron Watters wrote:
> On Nov 22, 9:53 am, Tzury Bar Yochay <[EMAIL PROTECTED]> wrote:
>> The following is a code I am using for a simple tcp echo server.
>> When I run it and then connect to it (with Telnet for example) if I
>> shut down the telnet the CPU tops 100% of usage and stays there
>> forever
>> def handle(self):
>>     while 1:
>>         data = self.request.recv(1024)
>>         self.request.send(data)
>>         if data.strip() == 'bye':
>>             return
> ... Try changing it to ...
> data = "dummy"
> while data:
> ...

Even better:
from functools import partial

def handle(self):
    for data in iter(partial(self.request.recv, 1024), ''):
        self.request.send(data)
        if data.strip() == 'bye':
            break
    else:
        raise ValueError('Gone w/o a "bye"')  # or IOError
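The iter(callable, sentinel) idiom used here is worth knowing on its own: it keeps calling the callable until it returns the sentinel, which is exactly the shape of a read-until-EOF loop. Demonstrated on an in-memory stream rather than a socket:

```python
import io
from functools import partial

# read(3) is called repeatedly until it returns b"" (the EOF sentinel),
# so the loop ends cleanly instead of spinning on empty reads.
stream = io.BytesIO(b"abcdefgh")
chunks = list(iter(partial(stream.read, 3), b""))
print(chunks)  # [b'abc', b'def', b'gh']
```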

-Scott

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: 100% CPU Usage when a tcp client is disconnected

2007-11-22 Thread Tzury Bar Yochay
Thank Hrvoje as well
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: 100% CPU Usage when a tcp client is disconnected

2007-11-22 Thread Hrvoje Niksic
Tzury Bar Yochay <[EMAIL PROTECTED]> writes:

> The following is a code I am using for a simple tcp echo server.
> When I run it and then connect to it (with Telnet for example) if I
> shut down the telnet the CPU tops 100% of usage and stays there
> forever.  Can one tell what am I doing wrong?

If you shut down telnet, self.request.recv(1024) returns an empty
string, meaning EOF, and you start inflooping.

> def handle(self):
>     while 1:
>         data = self.request.recv(1024)
>         self.request.send(data)
>         if data.strip() == 'bye':  # add: or data == ''
>             return
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: 100% CPU Usage when a tcp client is disconnected

2007-11-22 Thread Tzury Bar Yochay
> data = "dummy"
> while data:
> ...

Thanks Alot
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: 100% CPU Usage when a tcp client is disconnected

2007-11-22 Thread Aaron Watters
On Nov 22, 9:53 am, Tzury Bar Yochay <[EMAIL PROTECTED]> wrote:
> The following is a code I am using for a simple tcp echo server.
> When I run it and then connect to it (with Telnet for example) if I
> shut down the telnet the CPU tops 100% of usage and stays there
> forever
> def handle(self):
>     while 1:
>         data = self.request.recv(1024)
>         self.request.send(data)
>         if data.strip() == 'bye':
>             return

I forget exactly how the superclass works, but
that while 1 looks suspicious.  Try chaning it
to

data = "dummy"
while data:
...

  -- Aaron Watters

===
http://www.xfeedme.com/nucular/pydistro.py/go?FREETEXT=help+infinite+loop
-- 
http://mail.python.org/mailman/listinfo/python-list


100% CPU Usage when a tcp client is disconnected

2007-11-22 Thread Tzury Bar Yochay
The following is the code I am using for a simple tcp echo server.
When I run it and then connect to it (with Telnet, for example), if I
shut down the telnet session the CPU tops 100% usage and stays there
forever.
Can anyone tell me what I am doing wrong?

#code.py

import SocketServer

class MyServer(SocketServer.BaseRequestHandler):
    def setup(self):
        print self.client_address, 'connected!'
        self.request.send('hi ' + str(self.client_address) + '\n')

    def handle(self):
        while 1:
            data = self.request.recv(1024)
            self.request.send(data)
            if data.strip() == 'bye':
                return

    def finish(self):
        print self.client_address, 'disconnected!'
        self.request.send('bye ' + str(self.client_address) + '\n')

#server host is a tuple ('host', port)
server = SocketServer.ThreadingTCPServer(('', 50008), MyServer)
server.serve_forever()
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: track cpu usage of linux application

2007-05-15 Thread Fabian Braennstroem
Hi,
thanks to both! I will take a look at the proc files!

* James T. Dennis <[EMAIL PROTECTED]> wrote:
> Fabian Braennstroem <[EMAIL PROTECTED]> wrote:
>> Hi,
>
>>    I would like to track the CPU usage of a couple of
>>programs using Python. Maybe it works somehow by
>>piping 'top' to Python, reading the CPU load for a grepped
>>application, and clocking the first and last
>>appearance. Is that a good approach, or does anyone have
>>a more elegant way to do that?
>
>> Greetings!
>> Fabian
>
>  If you're on a Linux system you might be far better accessing
>  the /proc/$PID/stat files directly. The values you'd find therein
>  are documented:
>
>   http://www.die.net/doc/linux/man/man5/proc.5.html
>
>  (among other places).
>
>  Of course you could write your code to look for the file and fall back
>  to using the 'ps' command if it fails.  In addition you can supply
>  arguments to the 'ps' command to limit it to reporting just on the
>  process(es) in which you are interested ... and to eliminate the
>  header line and irrelevant columns of output.

Greetings!
 Fabian

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: track cpu usage of linux application

2007-05-14 Thread James T. Dennis
Fabian Braennstroem <[EMAIL PROTECTED]> wrote:
> Hi,

>I would like to track the CPU usage of a couple of
>programs using Python. Maybe it works somehow by
>piping 'top' to Python, reading the CPU load for a grepped
>application, and clocking the first and last
>appearance. Is that a good approach, or does anyone have
>a more elegant way to do that?

> Greetings!
> Fabian

 If you're on a Linux system you might be far better accessing
 the /proc/$PID/stat files directly. The values you'd find therein
 are documented:

http://www.die.net/doc/linux/man/man5/proc.5.html

 (among other places).

 Of course you could write your code to look for the file and fall back
 to using the 'ps' command if it fails.  In addition you can supply
 arguments to the 'ps' command to limit it to reporting just on the
 process(es) in which you are interested ... and to eliminate the
 header line and irrelevant columns of output.
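For example, a minimal sketch of reading those values directly (Linux only; the field positions are per proc(5), and the helper name is just for illustration):

```python
import os

def cpu_seconds(pid):
    """Total CPU time (user + system, in seconds) consumed by `pid`,
    read straight from /proc/<pid>/stat -- fields 14 (utime) and
    15 (stime) per proc(5).  Linux only; helper name is illustrative."""
    with open("/proc/%d/stat" % pid) as f:
        stat = f.read()
    # comm (field 2) may contain spaces, so split after its closing paren
    rest = stat.rpartition(")")[2].split()
    utime, stime = int(rest[11]), int(rest[12])   # fields 14 and 15 overall
    return (utime + stime) / float(os.sysconf("SC_CLK_TCK"))

print(cpu_seconds(os.getpid()))   # CPU seconds this process has used so far
```

Sampling this twice and dividing by the elapsed wall-clock time gives a CPU-usage percentage over the interval.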


-- 
Jim Dennis,
Starshine: Signed, Sealed, Delivered

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: track cpu usage of linux application

2007-05-14 Thread Zed A. Shaw
On Mon, 14 May 2007 20:56:20 +
Fabian Braennstroem <[EMAIL PROTECTED]> wrote:

> Hi,
> 
> I would like to track the CPU usage of a couple of
> programs using Python. Maybe it works somehow by
> piping 'top' to Python, reading the CPU load for a grepped
> application, and clocking the first and last
> appearance. Is that a good approach, or does anyone have
> a more elegant way to do that?

Look at the /proc filesystem instead.  For example, you can do this:

cat /proc/49595/status

To get information about that process.  Using this you can find out
anything you need with just basic file operations.

Use `man proc` to find out more.
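For instance, a small illustrative helper (Linux only; the function name is made up for this example) that turns /proc/<pid>/status into a dict with plain file operations:

```python
import os

def proc_status(pid):
    """Parse /proc/<pid>/status into a dict of its "Key: value" lines
    (Linux only; the helper name is made up for this example)."""
    with open("/proc/%d/status" % pid) as f:
        return dict((key, value.strip())
                    for key, _, value in (line.partition(":") for line in f))

info = proc_status(os.getpid())
print(info["Name"], info["State"])   # e.g. the process name and "R (running)"
```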

-- 
Zed A. Shaw
- Hate: http://savingtheinternetwithhate.com/
- Good: http://www.zedshaw.com/
- Evil: http://yearofevil.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


track cpu usage of linux application

2007-05-14 Thread Fabian Braennstroem
Hi,

I would like to track the CPU usage of a couple of
programs using Python. Maybe it works somehow by
piping 'top' to Python, reading the CPU load for a grepped
application, and clocking the first and last
appearance. Is that a good approach, or does anyone have
a more elegant way to do that?

Greetings!
 Fabian

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: CPU usage.

2007-05-09 Thread Gabriel Genellina
On Wed, 09 May 2007 02:58:45 -0300, Navid Parvini  
<[EMAIL PROTECTED]> wrote:

>  I want to get the CPU usage in my code.
>  Is there any module in Python to get it?
>   Also I want to get it on Windows and Linux.

On Windows you can use WMI; Tim Golden made an excellent library that  
lets you query WMI using Python:
http://tgolden.sc.sabren.com/python/wmi.html
Then you need to know *what* to query; google for "WMI CPU usage".

Since WMI is just Microsoft's implementation of WBEM, you might be able  
to find a Linux equivalent, but I don't know of one.

-- 
Gabriel Genellina

-- 
http://mail.python.org/mailman/listinfo/python-list


CPU usage.

2007-05-08 Thread Navid Parvini
Dear All,
   
  I want to get the CPU usage in my code. 
 Is there any module in Python to get it?
  Also I want to get it on Windows and Linux.
   
  Thank you in advance.
  Navid

   
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: CPU usage

2007-05-08 Thread Tim Golden
Navid Parvini wrote:
>   I want to get the CPU usage in my code.
>   Is there any module in Python to get it?

What Operating System are you on?

TJG
-- 
http://mail.python.org/mailman/listinfo/python-list


CPU usage

2007-05-08 Thread Navid Parvini
Dear All,
   
  I want to get the CPU usage in my code. Is there any module in Python to get 
it?
   
  Would you please help me?
   
  Thank you in advance.
  Navid

 
-- 
http://mail.python.org/mailman/listinfo/python-list

a question about MS Windows Clipboard to decrease cpu usage.

2006-10-22 Thread [EMAIL PROTECTED]
Hello, I want to record the content of the Windows clipboard.
After searching c.l.p. I found some practical answers, such as
http://groups.google.com/group/comp.lang.python/browse_thread/thread/57318b87e33e79b0/a7c5d5fcbd4eb58a
I have created a small script and it can read the clipboard, but now
I have a problem:
I call win32gui.PumpWaitingMessages() in a `while True:` loop, so the
script uses 9x% CPU. What should I do?
The code is posted below.

##
import win32ui, win32clipboard, win32con, win32api, win32gui

def paste():
    win32clipboard.OpenClipboard(0)
    data = win32clipboard.GetClipboardData()
    win32clipboard.CloseClipboard()
    return data

class ClipRecord(object):
    def __init__(self):
        self.hPrev = 0
        self.first = True
        self.win = win32ui.CreateFrame()
        self.win.CreateWindow(None, '', win32con.WS_OVERLAPPEDWINDOW)
        self.win.HookMessage(self.OnDrawClipboard, win32con.WM_DRAWCLIPBOARD)
        self.win.HookMessage(self.OnChangeCBChain, win32con.WM_CHANGECBCHAIN)
        self.win.HookMessage(self.OnDestroy, win32con.WM_DESTROY)
        try:
            self.hPrev = win32clipboard.SetClipboardViewer(self.win.GetSafeHwnd())
        except win32api.error, err:
            if win32api.GetLastError() == 0:
                # information that there is no other window in chain
                pass
            else:
                raise
        while True:
            win32gui.PumpWaitingMessages()

    def OnChangeCBChain(self, *args):
        msg, wParam, lParam = args[-1][1:4]
        if self.hPrev == wParam:
            # repair the chain
            self.hPrev = lParam
        if self.hPrev:
            # pass the message to the next window in chain
            win32api.SendMessage(self.hPrev, msg, wParam, lParam)

    def OnDrawClipboard(self, *args):
        msg, wParam, lParam = args[-1][1:4]
        if self.first:
            self.first = False
        else:
            print "clipboard content changed"
            print paste()
        if self.hPrev:
            # pass the message to the next window in chain
            win32api.SendMessage(self.hPrev, msg, wParam, lParam)

    def OnDestroy(self):
        if self.hPrev:
            win32clipboard.ChangeClipboardChain(self.win.GetSafeHwnd(), self.hPrev)
        else:
            win32clipboard.ChangeClipboardChain(self.win.GetSafeHwnd(), 0)

if __name__ == "__main__":
    cr = ClipRecord()

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Get CPU usage of a single process in Windows

2006-09-12 Thread Gerrit Muller
[Tim Golden]


now I only have to find some time to play around...

thanks, Gerrit

-- 
Gaudi systems architecting:
<http://www.gaudisite.nl/>
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Get CPU usage of a single process in Windows

2006-09-12 Thread Tim Golden
[Gerrit Muller]

| If you have a working example of CPU usage could you post the 
| result? I would be interested.

OK. Here's a workingish example, cut down from the link
I posted earlier. This one was designed to work with Win2K
which I was using at the time. For WinXP and later, there's
a new counter with the ungainly name of 

Win32_PerfFormattedData_PerfProc_Process

which should give you the number straight off without having
to do the take-and-diff-and-divide dance. However, it doesn't
seem to do anything useful on my (XP) system. Haven't tried
that hard, I admit.

As ever, if you can find any example around the Web -- and there
are loads -- converting it to Python should be a breeze.

TJG


import time
import wmi

c = wmi.WMI ()

process_info = {}
while True:
    for process in c.Win32_Process ():
        id = process.ProcessID
        for p in c.Win32_PerfRawData_PerfProc_Process (IDProcess=id):
            n1, d1 = long (p.PercentProcessorTime), long (p.Timestamp_Sys100NS)
            n0, d0 = process_info.get (id, (0, 0))

            try:
                percent_processor_time = (float (n1 - n0) / float (d1 - d0)) * 100.0
            except ZeroDivisionError:
                percent_processor_time = 0.0
            process_info[id] = (n1, d1)

            if percent_processor_time > 0.01:
                print "%20s - %2.3f" % (process.Caption, percent_processor_time)

    print
    time.sleep (5)





-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Get CPU usage of a single process in Windows

2006-09-12 Thread Tim Golden
[Gerrit Muller]
| 
| Tim Golden wrote:

| > WMI can probably do the trick. I'm fairly sure I've got an example 
| somewhere, but  I can't lay my hands on it at the mo.

| If you have a working example of CPU usage could you post the
| result? I would be interested.

I haven't time to revisit it just at the moment, but here's
an earlier thread in which I posted a version:

http://mail.python.org/pipermail/python-win32/2006-April/004536.html

(If it's not readable because of formatting, drop me a private
email and I'll send you a copy)
TJG



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Get CPU usage of a single process in Windows

2006-09-12 Thread Gerrit Muller
Tim Golden wrote:
<...snip...>
>>This should be possible as Taskmanager tracks CPU usage for every
>>process... Anyone know how this can be done?
>>
> 
> WMI can probably do the trick. If you can find something on Google
> for wmi cpu usage (or something similar) then translation to Python's
> usually quite easy. I'm fairly sure I've got an example somewhere, but
> I can't lay my hands on it at the mo.
> 
> TJG
> 
Tim Golden's general documentation/examples about WMI are at: 
<http://tgolden.sc.sabren.com/python/wmi.html>.

If you have a working example of CPU usage could you post the result? I 
would be interested.

kind regards, Gerrit Muller

-- 
Gaudi systems architecting:
<http://www.gaudisite.nl/>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Get CPU usage of a single process in Windows

2006-09-08 Thread Tim Roberts
Tor Erik <[EMAIL PROTECTED]> wrote:
>
>This should be possible as Taskmanager tracks CPU usage for every 
>process... Anyone know how this can be done?

I answered this in the python-win32 mailing list.  Task manager and perfmon
do this by using the performance counter APIs.  Python-Win32 includes an
interface for that (import win32pdh), but I've never used it.
-- 
- Tim Roberts, [EMAIL PROTECTED]
  Providenza & Boekelheide, Inc.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Get CPU usage of a single process in Windows

2006-09-08 Thread Tim Golden
Tor Erik wrote:
> Hi,
>
> This should be possible as Taskmanager tracks CPU usage for every
> process... Anyone know how this can be done?
>

WMI can probably do the trick. If you can find something on Google
for wmi cpu usage (or something similar) then translation to Python's
usually quite easy. I'm fairly sure I've got an example somewhere, but
I can't lay my hands on it at the mo.

TJG

-- 
http://mail.python.org/mailman/listinfo/python-list


Get CPU usage of a single process in Windows

2006-09-08 Thread Tor Erik
Hi,

This should be possible as Taskmanager tracks CPU usage for every 
process... Anyone know how this can be done?

regards
-- 
http://mail.python.org/mailman/listinfo/python-list


Long running Script stops responding, CPU usage increases

2006-02-28 Thread Jeff Quandt

This is running Python 2.3 on Windows 2003/Windows XP.

I have written a script to display and filter the Win32 event log as a scrolling list on the command line (it also does some summary tasks).  It uses win32evtlog.ReadEventLog to seek or scan the event log based on filter parameters.  It is supposed to run until the user kills it, so there are repeated calls to ReadEventLog.

The problem is that after the script has run for some time (usually several hours), the call to ReadEventLog seems to hang and the CPU usage increases dramatically (from nil to 30-50%).  The script does not exit this state and has to be broken manually with Ctrl-C.  I've tried closing and reopening the event log handle after it has been open for an hour.  I've added explicit calls to the garbage collector.  Neither helped at all.

I realize that this is an API call, so it may not be a Python issue.  But I have seen this type of behavior in other scripts (not event loggers); they were test/debug tools and weren't important enough to track down the issue.

Any help would be appreciated.


code example:

    def OpenLog( self ):
        #
        # open event log
        #
        self.mHandle = win32evtlog.OpenEventLog( self.mComputer, self.mLogType )
        if not self.mHandle:
            raise ValueError, "invalid handle"

        self.mLogOpenTmst = time.time()

    def CloseLog( self ):
        win32evtlog.CloseEventLog( self.mHandle )

    def ReadLog( self ):
        self.mFlags = win32evtlog.EVENTLOG_FORWARDS_READ | win32evtlog.EVENTLOG_SEEK_READ
        vEventScon = win32evtlog.ReadEventLog( self.mHandle, self.mFlags, self.mLogOffset )
        #
        # if not found, try again in 5 seconds
        #
        if not vEventScon:
            #
            # If we've had the log open for more than 1 hour, dump it and reopen
            #
            if time.time() > (self.mLogOpenTmst + 3600):
                self.CloseLog()
                self.OpenLog()

            time.sleep( 5 )
            return bOk

        #
        # snip...
        # manipulate event records here
        #

#
# main
#
OpenLog()
Ok = 1
while Ok:
    Ok = ReadLog()
CloseLog()




-- 
http://mail.python.org/mailman/listinfo/python-list

ReadEventLog doesn't return, CPU usage increases

2006-02-24 Thread Jeff Quandt

This is running Python 2.3 on Windows 2003/Windows XP.

I have written a script to display and filter the Win32 event log as a scrolling list on the command line (it also does some summary tasks).  It uses win32evtlog.ReadEventLog to seek or scan the event log based on filter parameters.  It is supposed to run until the user kills it, so there are repeated calls to ReadEventLog.

The problem is that after the script has run for some time (usually several hours), the call to ReadEventLog seems to hang and the CPU usage increases dramatically (from nil to 30-50%).  The script does not exit this state and has to be broken manually with Ctrl-C.  I've tried closing and reopening the event log handle after it has been open for an hour.  I've added explicit calls to the garbage collector.  Neither helped at all.

I realize that this is an API call, so it may not be a Python issue.  But I have seen this type of behavior in other scripts (not event loggers); they were test/debug tools and weren't important enough to track down the issue.

Any help would be appreciated.


code example:

    def OpenLog( self ):
        #
        # open event log
        #
        self.mHandle = win32evtlog.OpenEventLog( self.mComputer, self.mLogType )
        if not self.mHandle:
            raise ValueError, "invalid handle"

        self.mLogOpenTmst = time.time()

    def CloseLog( self ):
        win32evtlog.CloseEventLog( self.mHandle )

    def ReadLog( self ):
        self.mFlags = win32evtlog.EVENTLOG_FORWARDS_READ | win32evtlog.EVENTLOG_SEEK_READ
        vEventScon = win32evtlog.ReadEventLog( self.mHandle, self.mFlags, self.mLogOffset )
        #
        # if not found, try again in 5 seconds
        #
        if not vEventScon:
            #
            # If we've had the log open for more than 1 hour, dump it and reopen
            #
            if time.time() > (self.mLogOpenTmst + 3600):
                self.CloseLog()
                self.OpenLog()

            time.sleep( 5 )
            return bOk

        #
        # snip...
        # manipulate event records here
        #

#
# main
#
OpenLog()
Ok = 1
while Ok:
    Ok = ReadLog()
CloseLog()


Jeff



-- 
http://mail.python.org/mailman/listinfo/python-list

Re: cpu usage limit

2005-05-29 Thread garabik-news-2005-05
[EMAIL PROTECTED] wrote:
> I understand, that what I suggest does not solve the problem you want,
> but..
> 
> Why do you want to restrict CPU usage to 30%? In Windows I run CPU

there might be three reasons:
1) less power consumed (notebooks, PDA's)
2) less heat from CPU
3) (cross-platform) scheduling of low-priority tasks (e.g. all my
background tasks are already running with lowest priority since I do not
want them to influence my desktop in any way, but still I want some of
them to be of higher priority)

Generally, modern OSes do not provide any way to schedule tasks with
such constraints, which makes the question perfectly legitimate.

-- 
 ---
| Radovan Garabík http://kassiopeia.juls.savba.sk/~garabik/ |
| __..--^^^--..__garabik @ kassiopeia.juls.savba.sk |
 ---
Antivirus alert: file .signature infected by signature virus.
Hi! I'm a signature virus! Copy me into your signature file to help me spread!
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: cpu usage limit

2005-05-27 Thread elbertlev
I understand that what I suggest does not solve the problem as you
stated it, but...

Why do you want to restrict CPU usage to 30%? On Windows I run CPU-
intensive threads at IDLE priority, while interface and/or communication
threads run at normal priority. This gives me the best of both worlds:
1. I use 100% of the CPU (good), and
2. the program is responsive (very good).

There is no cross-platform way to change thread priority, but most
OSes (as well as thread libraries) support setting priorities.
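On POSIX systems the same effect is available from within Python itself via os.nice(); a small sketch (note that an unprivileged process can lower its own priority this way but cannot raise it back):

```python
import os

# Report, then lower, this process's priority (POSIX only).
before = os.nice(0)   # an increment of 0 just reports the current niceness
after = os.nice(5)    # five steps "nicer": the scheduler now favors others
print(before, after)
```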

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: cpu usage limit

2005-05-27 Thread Peter Hansen
rbt wrote:
> [EMAIL PROTECTED] wrote:
>> finished = False
>> while not finished:
> 
> Why don't you just write 'while True'??? 'while not false' is like 
> saying 'I am not unemployed by Microsoft' instead of saying 'I am 
> employed by Microsoft'. It's confusing, complex and unnecessary. Lawyers 
> call it circumlocution (talking around the truth).
> 
>>   before = time.time()
>>   do(x) # sets finished if all was computed
>>   after = time.time()
>>   delta = after-before
>>   time.sleep(delta*10/3.)

The answer to your question "why not write 'while True'?" is to be found 
in the helpful comment he put on the line with "do(x)"  Note that 
"finished" is a flag, so "sets finished" sort of explains the whole thing.

-Peter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: cpu usage limit

2005-05-27 Thread Paul Rubin
"mmf" <[EMAIL PROTECTED]> writes:
> How can I make sure that a Python process does not use more that 30% of
> the CPU at any time. I only want that the process never uses more, but
> I don't want the process being killed when it reaches the limit (like
> it can be done with resource module).
> 
> Can you help me?

In general you can only do that with a real-time operating system.
Most other OS's will let you adjust process priorities so that you can
prevent your Python process from hogging cycles away from other
processes.  But if the machine is idle and nobody else wants the
cycles, Python will get all of them.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: cpu usage limit

2005-05-27 Thread Grant Edwards
On 2005-05-27, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

>>> How can I make sure that a Python process does not use more that 30% of
>>> the CPU at any time. I only want that the process never uses more, but
>>> I don't want the process being killed when it reaches the limit (like
>>> it can be done with resource module).

>> Are you looping during a cpu intensive task? If so, make it sleep a bit 
>> like this:
>> 
>> for x in cpu_task:
>> time.sleep(0.5)
>> do(x)
>
> or like this (untested!)
>
> finished = False
> while not finished:
>   before = time.time()
>   do(x) # sets finished if all was computed
>   after = time.time()
>   delta = after-before
>   time.sleep(delta*10/3.)
>
> now the trick: do(x) can be a single piece of code, with
> strategically placed yield's all over

Since you have no way of knowing that your process was the only
one running between the two calls to time.time(), you're
placing an upper bound on how much CPU time you're using, but
the actual usage is unknown and may be much lower on a heavily
loaded machine.

Running for 100 ms and sleeping for 333 ms results in an upper
limit of about 23% rather than 30%.  Sleeping for (delta * 7.0/3.0)
gives a 30% upper bound.
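The duty-cycle idea with the ratio made explicit: to cap busy time at a fraction f of wall-clock time, sleep for busy * (1 - f) / f after each chunk (7/3 of the busy time for f = 0.30). This is a sketch only, and still just an upper bound, for the reasons above:

```python
import time

def throttle(chunks, fraction=0.30):
    """Run callables from `chunks`, sleeping after each one so that busy
    time is at most `fraction` of elapsed wall-clock time (an upper
    bound only)."""
    for chunk in chunks:
        start = time.time()
        chunk()
        busy = time.time() - start
        time.sleep(busy * (1.0 - fraction) / fraction)
```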

All that aside, it seems to me that this situation is analogous
to when people waste all sorts of effort trying to write clever
applications that cache parts of files or other data structures
in main memory with backing store on disk. They end up with a
big, complicated, buggy app that's slower and requires more
resources than a far simpler app that lets the OS worry about
memory management.

IOW, you're probably better off not trying to write application
code that tries to out-think your OS.  Use whatever prioritizing
scheme your OS kernel provides for setting up a low priority
"background" task, and let _it_ worry about divvying up the CPU.
That's what it's there for, and it's got a far better picture
of resource availability and demand.

-- 
Grant Edwards                          grante at visi.com
Yow!  It's a hole all the way to downtown Burbank!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: cpu usage limit

2005-05-27 Thread rbt
[EMAIL PROTECTED] wrote:
> rbt <[EMAIL PROTECTED]> wrote:
> 
>>mf wrote:
>>
>>>Hi.
>>>
>>>My problem:
>>>How can I make sure that a Python process does not use more that 30% of
>>>the CPU at any time. I only want that the process never uses more, but
>>>I don't want the process being killed when it reaches the limit (like
>>>it can be done with resource module).
>>>
>>>Can you help me?
>>>
>>>Thanks in advance.
>>>
>>>Best regards,
>>>Markus
>>>
>>
>>Are you looping during a cpu intensive task? If so, make it sleep a bit 
>>like this:
>>
>>for x in cpu_task:
>>time.sleep(0.5)
>>do(x)
> 
> 
> or like this (untested!)
> 
> finished = False
> while not finished:

Why don't you just write 'while True'??? 'while not false' is like 
saying 'I am not unemployed by Microsoft' instead of saying 'I am 
employed by Microsoft'. It's confusing, complex and unnecessary. Lawyers 
call it circumlocution (talking around the truth).

>   before = time.time()
>   do(x) # sets finished if all was computed
>   after = time.time()
>   delta = after-before
>   time.sleep(delta*10/3.)
> 
> now the trick: do(x) can be a single piece of code, with strategically
> placed yields all over
> 
> 
> 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: cpu usage limit

2005-05-27 Thread Markus Franz
> Are you looping during a cpu intensive task? If so, make it sleep a bit 
> like this:
> 
> for x in cpu_task:
> time.sleep(0.5)
> do(x)

No, I don't use an intensive loop. I have about 1200 lines of code 
inside a process - is there nothing like

xyz.setlimit(xyz.cpu, 0.30)

???

Thanks.
Markus
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: cpu usage limit

2005-05-27 Thread garabik-news-2005-05
rbt <[EMAIL PROTECTED]> wrote:
> 
> mf wrote:
>> Hi.
>> 
>> My problem:
>> How can I make sure that a Python process does not use more that 30% of
>> the CPU at any time. I only want that the process never uses more, but
>> I don't want the process being killed when it reaches the limit (like
>> it can be done with resource module).
>> 
>> Can you help me?
>> 
>> Thanks in advance.
>> 
>> Best regards,
>> Markus
>> 
> 
> Are you looping during a cpu intensive task? If so, make it sleep a bit 
> like this:
> 
> for x in cpu_task:
> time.sleep(0.5)
> do(x)

or like this (untested!)

finished = False
while not finished:
  before = time.time()
  do(x) # sets finished if all was computed
  after = time.time()
  delta = after-before
  time.sleep(delta*10/3.)

now the trick: do(x) can be a single piece of code, with strategically
placed yields all over
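Those strategically placed yields can be sketched as a generator that the throttling loop drives (a hypothetical example, not Markus's actual code):

```python
import time

def work():
    """A long computation split into chunks; yielding between chunks
    lets the caller sleep, capping CPU usage (hypothetical example)."""
    for _ in range(5):
        yield sum(range(10000))   # one chunk of real work

total = 0
for chunk in work():              # the driving loop sleeps between chunks
    total += chunk
    time.sleep(0.01)
print(total)                      # 249975000
```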



-- 
 ---
| Radovan Garabík http://kassiopeia.juls.savba.sk/~garabik/ |
| __..--^^^--..__garabik @ kassiopeia.juls.savba.sk |
 ---
Antivirus alert: file .signature infected by signature virus.
Hi! I'm a signature virus! Copy me into your signature file to help me spread!
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: cpu usage limit

2005-05-27 Thread rbt

mf wrote:
> Hi.
> 
> My problem:
> How can I make sure that a Python process does not use more that 30% of
> the CPU at any time. I only want that the process never uses more, but
> I don't want the process being killed when it reaches the limit (like
> it can be done with resource module).
> 
> Can you help me?
> 
> Thanks in advance.
> 
> Best regards,
> Markus
> 

Are you looping during a cpu intensive task? If so, make it sleep a bit 
like this:

for x in cpu_task:
 time.sleep(0.5)
 do(x)
-- 
http://mail.python.org/mailman/listinfo/python-list


cpu usage limit

2005-05-27 Thread mmf
Hi.

My problem:
How can I make sure that a Python process does not use more that 30% of
the CPU at any time. I only want that the process never uses more, but
I don't want the process being killed when it reaches the limit (like
it can be done with resource module).

Can you help me?

Thanks in advance.

Best regards,
Markus

-- 
http://mail.python.org/mailman/listinfo/python-list