Re: getting MemoryError with dicts; suspect memory fragmentation

2010-06-10 Thread Emin.shopper Martinian.shopper
Dear Dmitry, Bryan and Philip,

Thanks for the suggestions. I poked around the dictionary descriptions
and fiddled some more but couldn't find any obvious error. I agree it
does seem odd that a 50 kb dict should fail. Eventually, I tried
Dmitry's suggestion of moving over to python 2.6. This took a while
since I had to upgrade a bunch of libraries like numpy and scipy along
the way but once I got everything over to 2.6 it succeeded.

My bet is that it was still some kind of weird memory issue but
considering that it does not seem to exist in python 2.6 I'm guessing
it's not worth the effort to continue to track down.

Thanks again for all your help. I learned a lot more about
dictionaries along the way.

Best,
-Emin

On Fri, Jun 4, 2010 at 4:40 PM, Bryan wrote:
> Philip Semanchuk wrote:
>> At PyCon 2010, Brandon Craig Rhodes presented about how dictionaries
>> work under the 
>> hood:http://python.mirocommunity.org/video/1591/pycon-2010-the-mighty-dict...
>>
>> I found that very informative.
>
> That's a fine presentation of hash tables in general and Python's
> choices in particular. Also highly informative, while easily readable,
> is the Objects/dictnotes.txt file in the Python source.
>
> Fine as those resources may be, the issue here stands. Most of my own
> Python issues turn out to be stupid mistakes, and the problem here
> might be on that level, but Emin seems to have worked his problem and
> gotten a bunch of stuff right. There is no good reason why
> constructing a 50 kilobyte dict should fail with a MemoryError while
> constructing 50 megabyte lists succeeds.
>
>
> --
> --Bryan Olson
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: getting MemoryError with dicts; suspect memory fragmentation

2010-06-04 Thread Emin.shopper Martinian.shopper
On Thu, Jun 3, 2010 at 10:00 PM, dmtr  wrote:
> I'm still unconvinced that it is a memory fragmentation problem. It's
> very rare.

You could be right. I'm not an expert on python memory management. But
if it isn't memory fragmentation, then why is it that I can create
lists which use up 600 MB more, but if I try to create a dict that
uses a couple more MB it dies? My guess is that python dicts want a
contiguous chunk of memory for their hash table. Is there a reason
that you think memory fragmentation isn't the problem? What else could
it be?

> Can you give more concrete example that one can actually try to
> execute? Like:
>
> python -c "list([list([0]*xxx)+list([1]*xxx)+list([2]*xxx)
> +list([3]*xxx) for xxx in range(10)])" &

Well the whole point is that this is a long running process which does
lots of allocation and deallocation which I think fragments the
memory. Consequently, I can't give a simple example like that.

Thanks,
-Emin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: getting MemoryError with dicts; suspect memory fragmentation

2010-06-03 Thread Emin.shopper Martinian.shopper
On Thu, Jun 3, 2010 at 7:41 PM, dmtr  wrote:
> On Jun 3, 3:43 pm, "Emin.shopper Martinian.shopper"
>  wrote:
>> Dear Experts,
>>
>
> Are you sure you have enough memory available?
> Dict memory usage can jump x2 during re-balancing.
>

I'm pretty sure. When I did

 p setattr(self,'q',dict([(xxx,xxx) for xxx in range(1300)]))

the memory increased trivially (less than 1 MB) but when I did

 p setattr(self,'q',list([list(xxx)+list(xxx)+list(xxx)+list(xxx) for
xxx in self.data]))

it increased by 600 MB.

After getting back to the original 900 MB memory usage, doing

 p setattr(self,'q',dict([(xxx,xxx) for xxx in range(1400)]))

gave a memory error suggesting it isn't the amount of memory available
that is the problem but something like fragmentation.

Thanks,
-Emin
-- 
http://mail.python.org/mailman/listinfo/python-list


getting MemoryError with dicts; suspect memory fragmentation

2010-06-03 Thread Emin.shopper Martinian.shopper
Dear Experts,

I am getting a MemoryError when creating a dict in a long running
process and suspect this is due to memory fragmentation. Any
suggestions would be welcome. Full details of the problem are below.

I have a long running process which eventually dies with a
MemoryError exception. When it dies, it is using roughly 900 MB on a 4
GB Windows XP machine running Python 2.5.4. If I do "import pdb;
pdb.pm()" to debug, I see that it is dying inside a method when trying
to create a dict with about 2000 elements. If I instead do something
like

  p setattr(self,'q',list([list(xxx)+list(xxx)+list(xxx)+list(xxx) for
xxx in self.data]))

inside the debugger, I can make the memory increase to about 1.5 GB
WITHOUT getting a memory error.

If instead I do something like

  p setattr(self,'q',dict([(xxx,xxx) for xxx in range(1400)]))

inside the debugger, I get a MemoryError exception. If instead I do
something like

  p setattr(self,'q',dict([(xxx,xxx) for xxx in range(1300)]))

inside the debugger, I get no Exception.

I infer that python is trying to allocate a bunch of contiguous space
for the dict and due to fragmentation it can't find the contiguous
space and therefore it gives a memory error.
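
As a rough illustration of this hypothesis (a sketch only; sys.getsizeof
is new in Python 2.6, so it would not run on the 2.5.4 setup above):

import sys  # sys.getsizeof requires Python 2.6+

# A dict's hash table lives in one contiguous allocation, so it needs
# one large free block...
d = dict((i, i) for i in range(2000))
print sys.getsizeof(d)

# ...while a list of small lists is many independent small allocations,
# which even a fragmented heap can usually satisfy.
chunks = [[i, i, i, i] for i in range(2000)]
print sys.getsizeof(chunks), max(sys.getsizeof(c) for c in chunks)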

  1. Does this sound plausible or could something else be causing the problem?
  2. Does anyone have suggestions on how to fix this?

Some time spent Googling brings up the following 2004 thread started by Evan Jones:

  http://mail.python.org/pipermail/python-dev/2004-October/049480.html

but I'm unable to find a solution to the problem I'm having.

Any help would be much appreciated.

Thanks,
-E
-- 
http://mail.python.org/mailman/listinfo/python-list


ANN: superpy 1.2.1

2009-11-09 Thread Emin.shopper Martinian.shopper
I am pleased to announce the release of superpy 1.2.1 available from
http://code.google.com/p/superpy.

As this is the first announcement of superpy, any comments and
feedback would be much appreciated.

--

Superpy distributes python programs across a cluster of machines or
across multiple processors on a single machine. This is a
coarse-grained form of parallelism in the sense that remote tasks
generally run in separate processes and do not share memory with the
caller.

Key features of superpy include:

* Send tasks to remote servers or to the same machine via XML-RPC calls
* GUI to launch, monitor, and kill remote tasks
* GUI can automatically launch tasks every day, hour, etc.
* Works on the Microsoft Windows operating system
  o Can run as a windows service
  o Jobs submitted to windows can run as submitting user or as
    service user
* Inputs/outputs are python objects via python pickle
* Pure python implementation
* Supports simple load-balancing to send tasks to best servers

The ultimate vision for superpy is that you:

   1. Install it as an always on service on a cloud of machines
   2. Use the superpy scheduler to easily send python jobs into the
      cloud as needed
   3. Use the SuperWatch GUI to track progress, kill tasks, etc.

For smaller deployments, you can use superpy to take advantage of
multiple processors on a single machine or multiple machines to
maximize computing power.

What makes superpy different from the many other excellent parallel
processing packages already available for python? The superpy package
is designed to allow sending jobs across a large number of machines
(both Windows and LINUX). This requires the ability to monitor, debug,
and otherwise get information about the status of jobs.

While superpy is currently used in production for a number of
different purposes, there are still many features we want to add. For
a list of future plans and opportunities to help out or add to the
discussion, please visit
http://code.google.com/p/superpy/wiki/HelpImproveSuperpy.

For a quick example of some of the things superpy can do, check out
http://code.google.com/p/superpy/wiki/Demos or in particular the
demo application PyFog at http://code.google.com/p/superpy/wiki/PyFog.

To install, you can use easy_install to try superpy via "easy_install
superpy" or download a python egg from downloads. Of course, you will
need python installed and if you are using windows, you should also
install the python windows tools from
http://sourceforge.net/projects/pywin32/files. See
http://code.google.com/p/superpy/wiki/InstallFAQ if you have more
questions about installation.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: subprocess and win32security.ImpersonateLoggedOnUser

2009-06-01 Thread Emin.shopper Martinian.shopper
> The source for subprocess just uses CreateProcess. Which means that,
> short of monkey-patching it, you're going to have to roll your own
> subprocess-like code (I think). Basically, you'll need to run
> CreateProcessAsUser or CreateProcessWithLogonW. They're both a bit
> of a pig in terms of getting the right combination of parameters
> and privileges,

Thanks. I tried rolling my own via CreateProcessAsUser but it
complained about needing some special permissions, so it's probably not
going to work. I'd like to try CreateProcessWithLogonW but can't see how
to access that via python. I will start a new thread on the
python-win32 list about that.
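
For reference, here is a minimal untested sketch of calling
CreateProcessWithLogonW through ctypes (ctypes ships with Python 2.5;
the structure layouts are the standard Win32 ones, and spawn_as is a
made-up helper name, so treat this as a starting point rather than
working code):

import ctypes
from ctypes import wintypes

LOGON_WITH_PROFILE = 0x00000001

class STARTUPINFO(ctypes.Structure):
    _fields_ = [
        ('cb', wintypes.DWORD),
        ('lpReserved', wintypes.LPWSTR),
        ('lpDesktop', wintypes.LPWSTR),
        ('lpTitle', wintypes.LPWSTR),
        ('dwX', wintypes.DWORD), ('dwY', wintypes.DWORD),
        ('dwXSize', wintypes.DWORD), ('dwYSize', wintypes.DWORD),
        ('dwXCountChars', wintypes.DWORD),
        ('dwYCountChars', wintypes.DWORD),
        ('dwFillAttribute', wintypes.DWORD), ('dwFlags', wintypes.DWORD),
        ('wShowWindow', wintypes.WORD), ('cbReserved2', wintypes.WORD),
        ('lpReserved2', ctypes.c_void_p),
        ('hStdInput', wintypes.HANDLE), ('hStdOutput', wintypes.HANDLE),
        ('hStdError', wintypes.HANDLE),
    ]

class PROCESS_INFORMATION(ctypes.Structure):
    _fields_ = [
        ('hProcess', wintypes.HANDLE), ('hThread', wintypes.HANDLE),
        ('dwProcessId', wintypes.DWORD), ('dwThreadId', wintypes.DWORD),
    ]

def spawn_as(user, domain, password, cmdline):
    "Run cmdline under the given credentials (hypothetical helper)."
    si = STARTUPINFO()
    si.cb = ctypes.sizeof(si)
    pi = PROCESS_INFORMATION()
    ok = ctypes.windll.advapi32.CreateProcessWithLogonW(
        unicode(user), unicode(domain), unicode(password),
        LOGON_WITH_PROFILE,
        None,                  # lpApplicationName
        unicode(cmdline),      # lpCommandLine
        0, None, None,         # creation flags, environment, cwd
        ctypes.byref(si), ctypes.byref(pi))
    if not ok:
        raise ctypes.WinError()
    return pi.dwProcessId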

Thanks,
-Emin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: subprocess and win32security.ImpersonateLoggedOnUser

2009-06-01 Thread Emin.shopper Martinian.shopper
Thanks. But how do I fix this so that the subprocess does inherit the
impersonated stuff?

On Mon, Jun 1, 2009 at 9:38 AM, Tim Golden  wrote:
> Emin.shopper Martinian.shopper wrote:
>>
>> Dear Experts,
>>
>> I am having some issues with the subprocess module and how it
>> interacts with win32security.ImpersonateLoggedOnUser. Specifically, I
>> use the latter to change users but the new user does not seem to be
>> properly inherited when I spawn further subprocesses.
>>
>> I am doing something like
>>
>>    import win32security, win32con
>>    handle = win32security.LogonUser(
>>        user,domain,password,win32con.LOGON32_LOGON_INTERACTIVE,
>>        win32con.LOGON32_PROVIDER_DEFAULT)
>>
>>    win32security.ImpersonateLoggedOnUser(handle)
>>
>> Then spawning subprocesses but the subprocesses cannot read the same
>> UNC paths that the parent could.
>
> http://support.microsoft.com/kb/111545
>
> """
> Even if a thread in the parent process impersonates a client and then
> creates a new process, the new process still runs under the parent's
> original security context and not under the impersonation token. """
>
> TJG
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


subprocess and win32security.ImpersonateLoggedOnUser

2009-06-01 Thread Emin.shopper Martinian.shopper
Dear Experts,

I am having some issues with the subprocess module and how it
interacts with win32security.ImpersonateLoggedOnUser. Specifically, I
use the latter to change users but the new user does not seem to be
properly inherited when I spawn further subprocesses.

I am doing something like

    import win32security, win32con
    handle = win32security.LogonUser(
        user, domain, password, win32con.LOGON32_LOGON_INTERACTIVE,
        win32con.LOGON32_PROVIDER_DEFAULT)

    win32security.ImpersonateLoggedOnUser(handle)

Then spawning subprocesses but the subprocesses cannot read the same
UNC paths that the parent could.

Any advice on either spawning subprocesses which inherit parent user
properly or changing users in a better way on Windows would be greatly
appreciated.

Thanks,
-Emin
-- 
http://mail.python.org/mailman/listinfo/python-list


choosing random dynamic port number

2008-01-03 Thread Emin.shopper Martinian.shopper
Dear Experts,

Is there a good way to choose/assign random dynamic port numbers in python?

I had in mind something like the following, but if multiple programs are
generating random port numbers, is there a way to check if a given port
number is already taken?

import random

def GenerateDynamicPortNumber():
    "Generate a random dynamic port number and return it."
    # port numbers between 49152 and 65535 are dynamic port numbers
    return 49152 + random.randrange(16384)
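
One way to sidestep the collision question entirely might be to bind to
port 0 and let the OS pick an unused port (a minimal sketch;
GetFreePortNumber is a made-up name, and the bind-then-close approach
is still subject to races if another process grabs the port before you
rebind it):

import socket

def GetFreePortNumber():
    "Ask the OS for a currently unused TCP port."
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(('', 0))               # port 0 means "any free port"
    port = s.getsockname()[1]
    s.close()
    return port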
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: How do you pass compiler option to setup.py install?

2008-01-03 Thread Emin.shopper Martinian.shopper
On Jan 3, 2008 11:24 AM, Emin.shopper Martinian.shopper <
[EMAIL PROTECTED]> wrote:

> How do you pass the -c option to setup.py install?


After some fiddling, I figured out that you can put the following two lines
in setup.cfg:

[build]
compiler=mingw32

It would be nice if you could somehow pass this on the command line or
if some of the help messages mentioned it.
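
(For what it's worth, distutils does accept per-command options on the
command line, so chaining the build and install commands may also work;
a hedged one-liner I have not verified on this package:

python setup.py build --compiler=mingw32 install

This hands the option to the build step that install then reuses.)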
-- 
http://mail.python.org/mailman/listinfo/python-list

How do you pass compiler option to setup.py install?

2008-01-03 Thread Emin.shopper Martinian.shopper
Dear Experts,

How do you pass the -c option to setup.py install? Specifically, when I try
to install zope.interfaces version 3.3 from source on a windows machine, I
get a message about using "-c mingw32". That works fine for setup.py build,
but it does not work for "setup.py install".

Note: I would have just used the binary installer for windows but couldn't
find one for version 3.3.

Thanks,
-Emin
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: parallel processing in standard library

2007-12-28 Thread Emin.shopper Martinian.shopper
On Dec 27, 2007 4:13 PM, Robert Kern <[EMAIL PROTECTED]> wrote:

> Emin.shopper Martinian.shopper wrote:
> > If not, is there any hope of something like
> > the db-api for coarse grained parallelism (i.e., a common API that
> > different toolkits can support)?
>
> The problem is that for SQL databases, there is a substantial API that
> they can all share. The implementations are primarily differentiated
> by other factors like speed, in-memory or on-disk, embedded or server,
> the flavor of SQL, etc., and only secondarily differentiated by their
> extensions to the DB-API. With parallel processing, the API itself is
> a key differentiator between toolkits and approaches. Different
> problems require different APIs, not just different implementations.


I disagree. Most of the implementations of coarse-grained parallelism I
have seen and used share many features. For example, they generally have
a notion of spawning processes/tasks, scheduling/load-balancing, checking
tasks on a server, sending messages to/from tasks, detecting when tasks
finish or die, logging the results for debugging purposes, etc. Sure,
they all do these things in slightly different ways, but for
coarse-grained parallelism the API differences rarely matter (although
the implementation differences can matter).
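
To make that concrete, here is a purely hypothetical sketch of the kind
of shared interface those features suggest (illustrative only; every
name in it is made up):

class Task(object):
    "Handle to one spawned task (hypothetical API)."
    def is_alive(self):        # detect whether the task finished or died
        raise NotImplementedError
    def wait(self):            # block until completion and return result
        raise NotImplementedError
    def kill(self):
        raise NotImplementedError

class Scheduler(object):
    "Spawns and load-balances tasks across servers (hypothetical API)."
    def spawn(self, func, *args, **kw):
        raise NotImplementedError   # returns a Task
    def send(self, task, message):
        raise NotImplementedError   # messaging to/from running tasks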

> I suspect that one of the smaller implementations like processing.py
> might get adopted into the standard library if the author decides to
> push for it.


That would be great.

> My recommendation to you is to pick one of the smaller implementations
> that solves the problems in front of you. Read and understand that
> module so you could maintain it yourself if you had to. Post to this
> list about how you use it. Blog about it if you blog. Write some Python
> Cookbook recipes to show how you solve problems with it.


That is a good suggestion, but for most of the coarse-grained
parallelism tasks I've worked on, it would be easier to roll my own
system than do that. To put it another way, why spend the effort to use
a particular API if I don't know it's going to be around for a while?
Since a lot of the value is in the API as opposed to the implementation,
unless there is something special about the API (e.g., it is an official
or at least de facto standard) the learning curve may not be worth it.


> If there is a lively community around it, that will help it get into
> the standard library. Things get into the standard library *because*
> they are supported, not the other way around.


You make a good point and in general I would agree with you. Isn't it
possible, however, that there are cases where inclusion in the standard
library would build a better community? I think this is the argument for
many types of standards. A good example is wireless networking. The
development of a standard like 802.11 provided hardware manufacturers the
incentive to build devices that could communicate with each other and that
made people want to buy the products.

Still, I take your basic point to heart: if I want a good API, I should
get off my butt and contribute to it somehow.

How would you or the rest of the community react to a proposal for a generic
parallelism API? I suspect the response would be "show us an implementation
of the code". I could whip up an implementation or adapt one of the existing
systems, but then I worry that the discussion would devolve into an argument
about the pros and cons of the particular implementation instead of the API.
Even worse, it might devolve into an argument about the value of fine-grained
vs. coarse-grained parallelism or the GIL. Considering that these issues
seem to have been discussed quite a bit already and there are already
multiple parallel processing implementations, it seems like the way forward
lies in either a blessing of a particular package that already exists or
adoption of an API instead of a particular implementation.

Thanks for your thoughts,
-Emin
-- 
http://mail.python.org/mailman/listinfo/python-list

parallel processing in standard library

2007-12-27 Thread Emin.shopper Martinian.shopper
Dear Experts,

Is there any hope of a parallel processing toolkit being incorporated into
the python standard library? I've seen a wide variety of toolkits each with
various features and limitations. Unfortunately, each has its own API. For
coarse-grained parallelism, I suspect I'd be pretty happy with many of the
existing toolkits, but if I'm going to pick one API to learn and program to,
I'd rather pick one that I'm confident is going to be supported for a while.

So is there any hope of adoption of a parallel processing system into the
python standard library? If not, is there any hope of something like the
db-api for coarse grained parallelism (i.e., a common API that different
toolkits can support)?

Thanks,
-Emin
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: problem pickling a function

2007-12-12 Thread Emin.shopper Martinian.shopper
On Dec 12, 2007 11:48 AM, Calvin Spealman <[EMAIL PROTECTED]> wrote:

> On Dec 12, 2007, at 11:01 AM, Emin.shopper Martinian.shopper wrote:
>
> > But is there a way to assign functions to instances of a class
> > without preventing pickleability? It doesn't seem unreasonable to
> > me to want to assign functions to instances of a class (after all
> > functions are first class objects, so why shouldn't I be able to
> > pass them around?) Is there a better way or is this just a
> > limitation of pickle?
>
> Presumably you could do something with __getstate__ and __setstate__
> methods, but this is one of many cases of "Are you really sure you
> want to do that?"
>
>
Why is it unreasonable to want to pass functions as arguments to
classes? If functions are first class objects, that seems perfectly
reasonable to me.

I guess the best workaround is to put the desired function into a
staticmethod as shown below:

>>> class f: pass
>>> g = f()
>>> class j:
...     @staticmethod
...     def join(*args, **kw):
...         return ','.join(*args, **kw)
>>> g.x = j
>>> import pickle; pickle.dumps(g)
'c__main__\nj\np0\n.

The above provides a way to pickle functions but it seems like a bit of a
hack...
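
A less hacky variant (a sketch; join_csv is a made-up name): plain
module-level functions pickle by reference, so storing one directly on
the instance also works:

import pickle

def join_csv(*args, **kw):
    # Module-level, so pickle records it by reference as __main__.join_csv.
    return ','.join(*args, **kw)

class f: pass

g = f()
g.x = join_csv
h = pickle.loads(pickle.dumps(g))
print h.x(['a', 'b'])   # prints: a,b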
-- 
http://mail.python.org/mailman/listinfo/python-list

problem pickling a function

2007-12-12 Thread Emin.shopper Martinian.shopper
Dear Experts,

I love the pickle module, but I occasionally have problems pickling a
function. For example, if I create an instance g of class f and assign
g.x to a function, then I cannot pickle g (example code below). I know
that I can pickle f separately if I want to, and I understand why I get
the pickling error.

But is there a way to assign functions to instances of a class without
preventing pickleability? It doesn't seem unreasonable to me to want to
assign functions to instances of a class (after all functions are first
class objects, so why shouldn't I be able to pass them around?) Is there a
better way or is this just a limitation of pickle?

Example code illustrating problem is below:

>>> class f: pass
>>> g = f()
>>> g.x = ','.join
>>> import pickle; pickle.dumps(g)
Traceback (most recent call last):
  File "", line 1, in 
  File "C:\Python25\lib\pickle.py", line 1366, in dumps
Pickler(file, protocol).dump(obj)
  File "C:\Python25\lib\pickle.py", line 224, in dump
self.save(obj)
  File "C:\Python25\lib\pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
  File "C:\Python25\lib\pickle.py", line 725, in save_inst
save(stuff)
  File "C:\Python25\lib\pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
  File "C:\Python25\lib\pickle.py", line 649, in save_dict
self._batch_setitems(obj.iteritems())
  File "C:\Python25\lib\pickle.py", line 663, in _batch_setitems
save(v)
  File "C:\Python25\lib\pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
  File "C:\Python25\lib\pickle.py", line 748, in save_global
(obj, module, name))
pickle.PicklingError: Can't pickle <built-in method join of str object
at 0x...>: it's not found as __main__.join
-- 
http://mail.python.org/mailman/listinfo/python-list

How can you make pylint/pychecker "see" setattr

2007-12-11 Thread Emin.shopper Martinian.shopper
Dear Experts,

Does anyone know how you can either make pylint "see" setattr or give it
explicit information when you do a "compile time" call to setattr?

For example, imagine that I have the following block of code

class foo:
    def __init__(self):
        for i in [1, 2, 5]:
            setattr(self, 'y%i' % i, i * i)

    def baz(self):
        print self.y2


With this code, pylint will complain that y2 does not seem to be a
member of self. Obviously, if the arguments for setattr are not known
until run-time, there is nothing pylint can do. But in this case, the
arguments are known at "compile time", so it would be nice if there were
some way to communicate this to pylint. For example, if I could execute
something like pylint.remember_set(...) that pylint would see, that
would be great. I suspect this probably requires more parsing than
pylint does, however.

On a related note, does anyone have a suggestion for a way to create a
bunch of similar properties (e.g., y1, y2, y5, etc.) in a "safe" way
that either pychecker or pylint can check (or at least not complain
about)? Obviously I would use better names than y1, y2, etc., but in the
project I'm working on I often need many large sets of similar
variables. Defining them all "by hand" is tedious and not very readable.
I could define them in a dict or use setattr, but then I can't get any
of the benefits of things like pychecker or pylint. I suspect that is
just a fundamental trade-off, but it would be great if someone has a
better idiom; one possible idiom is sketched below.
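
One idiom that at least keeps the names visible to the checkers (a
sketch, not a full solution) is to declare class-level defaults and let
the loop fill in the values:

class foo:
    # Static declarations give pylint/pychecker the attribute names...
    y1 = y2 = y5 = None

    def __init__(self):
        # ...while the values are still generated in a loop.
        for i in [1, 2, 5]:
            setattr(self, 'y%i' % i, i * i)

    def baz(self):
        print self.y2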

Thanks,
-Emin
-- 
http://mail.python.org/mailman/listinfo/python-list

which python mode to use in emacs

2007-09-21 Thread Emin.shopper Martinian.shopper
Dear Experts,

There seem to be multiple python modes for emacs. Could someone point me
to the maintainers of either the "official" one or the one that is being
maintained most vigorously?

I've tried both python.el and python-mode.el. Both have various minor
foibles which I'd be happy to help fix and improve, but the EmacsWiki
page about python mode seems to imply that one or both are no longer
maintained or accepting patches.

Thanks,
-Emin
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: How do you debug when a unittest.TestCase fails?

2007-07-19 Thread Emin.shopper Martinian.shopper

After poking around the unittest source code, the best solution I could come
up with was to do


import unittest
unittest.TestCase.run = lambda self, *args, **kw: unittest.TestCase.debug(self)

before running my tests. That patches things so that I can use pdb.pm() when
a test fails. Still, that seems like an ugly hack and I would think there is
a better solution...
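
Another possibility along the same lines (a sketch; PdbTestResult is a
made-up name, and it assumes the _test module layout from my earlier
message) is a small custom TestResult that post-mortems on the spot
instead of monkey-patching run():

import pdb
import unittest

class PdbTestResult(unittest.TestResult):
    "Drop into a post-mortem pdb session on any error or failure."
    def addError(self, test, err):
        unittest.TestResult.addError(self, test, err)
        pdb.post_mortem(err[2])   # err is (exc_type, exc_value, traceback)
    addFailure = addError

suite = unittest.defaultTestLoader.loadTestsFromName('_test')
suite.run(PdbTestResult())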

Thanks,
-Emin


On 7/18/07, Emin.shopper Martinian.shopper <[EMAIL PROTECTED]> wrote:


Thanks for the reply, but neither of those works for me. I don't seem to
have the "trial" program installed. Where do you get it?

Also, when I use the try/catch block, I get the following error:

Traceback (most recent call last):
  File "_test.py", line 10, in 
pdb.pm()
  File "c:\python25\lib\pdb.py", line 1148, in pm
post_mortem(sys.last_traceback)
AttributeError: 'module' object has no attribute 'last_traceback'


 On 7/18/07, Jean-Paul Calderone <[EMAIL PROTECTED]> wrote:
>
> On Wed, 18 Jul 2007 16:40:46 -0400, "Emin.shopper Martinian.shopper"
> <[EMAIL PROTECTED]> wrote:
> >Dear Experts,
> >
> >How do you use pdb to debug when a TestCase object from the unittest
> >module fails? Basically, I'd like to run my unit tests and invoke
> >pdb.pm when something fails.
> >
> >I tried the following with no success:
> >
> >Imagine that I have a module _test.py that looks like the following:
> >
> >---
> >import unittest
> >class MyTest(unittest.TestCase):
> >    def testIt(self):
> >        raise Exception('boom')
> >if __name__ == '__main__':
> >    unittest.main()
> >---
> >
> >If I do
> >>>>import _test; _test.unittest()
> >
> >no tests get run.
> >
> >If I try
> >>>>import _test; t = _test.MyTest()
> >
> >I get
> >
> >Traceback (most recent call last):
> >  File "", line 1, in 
> >  File "c:\python25\lib\unittest.py", line 209, in __init__
> >(self.__class__, methodName)
> >ValueError: no such test method in <class '_test.MyTest'>: runTest
> >
> >If I try
> >>>>import _test; t = _test.MyTest(methodName='testIt'); t.run()
> >
> >nothing happens.
>
> I use `trial -b <test name>', which automatically enables a bunch of
> nice debugging functionality. ;)  However, you can try this, if you're
> not interested in using a highly featureful test runner:
>
>    try:
>        unittest.main()
>    except:
>        import pdb
>        pdb.pm()
>
> This will "post-mortem" the exception, a commonly useful debugging
> technique.
>
> Jean-Paul
> --
> http://mail.python.org/mailman/listinfo/python-list
>


-- 
http://mail.python.org/mailman/listinfo/python-list

Re: How do you debug when a unittest.TestCase fails?

2007-07-18 Thread Emin.shopper Martinian.shopper

Thanks for the reply, but neither of those works for me. I don't seem to
have the "trial" program installed. Where do you get it?

Also, when I use the try/catch block, I get the following error:

Traceback (most recent call last):
 File "_test.py", line 10, in 
   pdb.pm()
 File "c:\python25\lib\pdb.py", line 1148, in pm
   post_mortem(sys.last_traceback)
AttributeError: 'module' object has no attribute 'last_traceback'


On 7/18/07, Jean-Paul Calderone <[EMAIL PROTECTED]> wrote:


On Wed, 18 Jul 2007 16:40:46 -0400, "Emin.shopper Martinian.shopper"
<[EMAIL PROTECTED]> wrote:
>Dear Experts,
>
>How do you use pdb to debug when a TestCase object from the unittest
>module fails? Basically, I'd like to run my unit tests and invoke
>pdb.pm when something fails.
>
>I tried the following with no success:
>
>Imagine that I have a module _test.py that looks like the following:
>
>---
>import unittest
>class MyTest(unittest.TestCase):
>    def testIt(self):
>        raise Exception('boom')
>if __name__ == '__main__':
>    unittest.main()
>---
>
>If I do
>>>>import _test; _test.unittest()
>
>no tests get run.
>
>If I try
>>>>import _test; t = _test.MyTest()
>
>I get
>
>Traceback (most recent call last):
>  File "", line 1, in 
>  File "c:\python25\lib\unittest.py", line 209, in __init__
>(self.__class__, methodName)
>ValueError: no such test method in <class '_test.MyTest'>: runTest
>
>If I try
>>>>import _test; t = _test.MyTest(methodName='testIt'); t.run()
>
>nothing happens.

I use `trial -b <test name>', which automatically enables a bunch of nice
debugging functionality. ;)  However, you can try this, if you're not
interested in using a highly featureful test runner:

   try:
       unittest.main()
   except:
       import pdb
       pdb.pm()

This will "post-mortem" the exception, a commonly useful debugging
technique.

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list

How do you debug when a unittest.TestCase fails?

2007-07-18 Thread Emin.shopper Martinian.shopper

Dear Experts,

How do you use pdb to debug when a TestCase object from the unittest module
fails? Basically, I'd like to run my unit tests and invoke pdb.pm when
something fails.

I tried the following with no success:

Imagine that I have a module _test.py that looks like the following:

---
import unittest
class MyTest(unittest.TestCase):
    def testIt(self):
        raise Exception('boom')
if __name__ == '__main__':
    unittest.main()
---

If I do

>>> import _test; _test.unittest()

no tests get run.

If I try

>>> import _test; t = _test.MyTest()

I get

Traceback (most recent call last):
 File "", line 1, in 
 File "c:\python25\lib\unittest.py", line 209, in __init__
   (self.__class__, methodName)
ValueError: no such test method in <class '_test.MyTest'>: runTest

If I try

>>> import _test; t = _test.MyTest(methodName='testIt'); t.run()

nothing happens.

Thanks,
-Emin
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: profiling a C++ python extension

2007-07-11 Thread Emin.shopper Martinian.shopper

Googling for "profiling python extensions" leads to the following link which
worked for me a while ago:
http://plexity.blogspot.com/2006/02/profiling-python-extensions.html
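
If recompiling the interpreter with -pg is too painful, an instrumenting
profiler that needs no special compilation may also be worth a look (an
untested suggestion; your_script.py is a placeholder):

valgrind --tool=callgrind python your_script.py
callgrind_annotate callgrind.out.<pid>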

On 7/10/07, rasmus <[EMAIL PROTECTED]> wrote:


I have used gprof to profile stand alone C++ programs.  I am also
aware of pure python profilers.  However, is there a way to get
profile information on my C++ functions when they are compiled in a
shared library (python extension module) and called from python.  From
what I can tell, gmon.out will not be generated unless the entire
executable (python interpreter) was compiled with -pg.  Is my only
solution to recompile the python interpreter with -pg so that my
extension module (also compiled with -pg) produces a gmon.out?

Any suggestions or tips would be helpful.

Matt

--
http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list

What is the preferred doc extraction tool?

2007-07-09 Thread Emin.shopper Martinian.shopper

Dear Experts,

What is the preferred doc extraction tool for python? It seems that there
are many very nice options (e.g., pydoc, epydoc, HappyDoc, and lots of
others), but what is the "standard" tool or at least what is the tool used
to generate the documentation for the python standard library?

Thanks,
-Emin
-- 
http://mail.python.org/mailman/listinfo/python-list

how does one use pdb and doctest.testmod(raise_on_error=True)?

2007-06-19 Thread Emin.shopper Martinian.shopper

Dear Experts,

How does one use pdb and doctest.testmod(raise_on_error=True)? What I would
like to happen is that when a doctest fails (e.g., by raising an exception),
I can do import pdb; pdb.pm() to figure out what went wrong. But when I do
pdb.pm() I end up in doctest.DebugRunner.report_unexpected_exception and
can't see the actual code in my module which failed.
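
One approach that might get closer (an untested thought; mymodule and
mymodule.myfunc are placeholders): doctest.debug runs a single named
doctest under pdb, and with pm=True it post-mortems the failing example
itself:

import doctest
import mymodule                 # placeholder for the module under test
doctest.debug(mymodule, 'mymodule.myfunc', pm=True)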

Thanks,
-Emin
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: What is equivalent of *this = that in python?

2007-06-01 Thread Emin.shopper Martinian.shopper

I have a distributed application using xmlrpc and I'd like for a local
object to sync itself to match a remote version of the object. I realize
that I can copy all the attributes from the remote object to the local
object, but that seems like an ugly solution. There must be a better way...
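
For the record, the attribute-copying version is only a couple of lines
(a sketch assuming plain instance attributes, i.e. no __slots__ or
properties):

class Foo(object):
    def Update(self, other):
        # Rebind every instance attribute of other onto self.
        self.__dict__.update(other.__dict__)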

Thanks,
-Emin

On 6/1/07, Carsten Haese <[EMAIL PROTECTED]> wrote:


On Fri, 2007-06-01 at 11:30 -0400, Emin.shopper Martinian.shopper wrote:
> Dear Experts,
>
> How do I reassign self to another object? For example, I want
> something like
>
> class foo:
>     def Update(self, other):
>         # make this object the same as other or make this object
>         # a copy of other
>         self = other  # This won't work. What I really want is
>                       # *this = other in C++ terminology.

There is no such thing in Python. What's the actual problem you're
trying to solve?

--
Carsten Haese
http://informixdb.sourceforge.net


--
http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list

What is equivalent of *this = that in python?

2007-06-01 Thread Emin.shopper Martinian.shopper

Dear Experts,

How do I reassign self to another object? For example, I want something like

class foo:
    def Update(self, other):
        # make this object the same as other or make this object a copy
        # of other
        self = other  # This won't work. What I really want is
                      # *this = other in C++ terminology.


Thanks
-- 
http://mail.python.org/mailman/listinfo/python-list