Python packet capture utility

2010-02-01 Thread VYAS ASHISH M-NTB837
 
Dear All

I want to capture TCP packets in Python. I need to do this on both
Windows and Linux, with Python 3.1.

I came across the following:
http://sourceforge.net/projects/pycap/
http://sourceforge.net/projects/pylibpcap/
http://code.google.com/p/pypcap/
http://oss.coresecurity.com/projects/pcapy.html


I am not able to evaluate them on my own. Which one should I pick?
The priority is Python 3.1 support on both Windows and Linux. I don't have to
do many or complex operations, so it is OK if the library is not especially
programmer-friendly.

Also, let me know whether 2to3 would help here if a library lacks Python 3
support.
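
For context, the kind of capture I have in mind looks roughly like this with
pcapy under Python 2.x (the interface name and filter are assumptions on my
part, and I have not verified which of the above libraries support Python 3.1):

    import pcapy

    def handle(header, data):
        # header carries capture metadata; data is the raw packet bytes
        print len(data)

    reader = pcapy.open_live("eth0", 65536, True, 100)  # device, snaplen, promisc, timeout (ms)
    reader.setfilter("tcp")                             # BPF filter: TCP only
    reader.loop(10, handle)                             # capture ten packets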

Regards
Ashish
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: converting XML to hash/dict/CustomTreeCtrl

2010-02-01 Thread Stefan Behnel
Astan Chee, 01.02.2010 23:34:
> I have xml files that I want to convert to a hash/dict and then further
> place in a wx CustomTreeCtrl based on the structure. The problem I am
> having now is that the XML file is very unusual and there aren't any
> unique identifiers to be put in a dict; because there are no unique
> variables, finding a value in the CustomTreeCtrl is a bit tricky.
> I would appreciate any help or links to projects similar to this.

What part of the structure and what information are you interested in? That
will determine the key that you should use. Maybe storing more than one
field as the key is an option.

Stefan


> The XML file looks something like this:
> 
> [The XML sample was garbled by the list archive, which stripped all of the
> tags. What remains recognizable: an xsi namespace declaration
> (http://www.w3.org/2001/XMLSchema-instance), elements carrying
> kind="position" and kind="timers" attributes, a note reading "This is the
> note on calculation times", and numeric payloads such as timing values
> (609.081574, 2531.972081, 607.432373, ...) and memory/counter values
> (4182777856, 1943498, ...).]
-- 
http://mail.python.org/mailman/listinfo/python-list


Logging oddity: handlers mandatory in every single logger?

2010-02-01 Thread Masklinn
When trying to load the following config file, I get an error 
``ConfigParser.NoOptionError: No option 'handlers' in section: 'logger_0'`` (in 
both Python 2.6.4 and  Python 3.1.1 on OSX, obviously ConfigParser is spelled 
configparser in 3.1):

[loggers]
keys=root,0
[handlers]
keys=console
[formatters]
keys=simple
[logger_root]
handlers=console
[logger_0]
level=DEBUG
qualname=0
[handler_console]
class=StreamHandler
formatter=simple
args=()
[formatter_simple]
format=%(asctime)s:%(levelname)-8s:%(name)s::%(message)s

The goal is simply to have a different logging level on ``0`` than on
``root``, but to get it I have to include a handler on ``0`` and stop
propagation (or messages are displayed on both root and 0).

Do note that this behavior (of mandating handlers) does *not* happen when 
loggers are configured programmatically.
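
For comparison, here is a minimal programmatic sketch of the same setup
(handler, format and logger names taken from the config above): ``0`` gets its
own level, no handler of its own, and its records still reach root's console
handler through propagation:

    import logging

    console = logging.StreamHandler()
    console.setFormatter(logging.Formatter(
        "%(asctime)s:%(levelname)-8s:%(name)s::%(message)s"))

    root = logging.getLogger()        # root logger
    root.addHandler(console)
    root.setLevel(logging.INFO)

    logging.getLogger("0").setLevel(logging.DEBUG)   # different level, no handler

    logging.getLogger("0").debug("shown: '0' is at DEBUG and propagates to root")
    root.debug("suppressed: root itself stays at INFO")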

Should this be considered a bug? Worthy of opening a request on the tracker?

And while I'm at it, a few other oddities/annoyances I noticed in logging:

* I have to specify the `root` logger in loggers/keys, even though configuring
the root logger is mandatory anyway, or I get the following error::

Traceback (most recent call last):
  File "test.py", line 6, in 
logging.config.fileConfig('logging.conf')
  File 
"/opt/local/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/logging/config.py",
 line 82, in fileConfig
_install_loggers(cp, handlers, disable_existing_loggers)
  File 
"/opt/local/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/logging/config.py",
 line 183, in _install_loggers
llist.remove("root")
ValueError: list.remove(x): x not in list

* the ``args`` option is required when defining a handler, even in the example 
above where the handler doesn't take any argument (mandatory ones, anyway)

* Logger.log() doesn't take level names, only numerical levels, even after 
having called ``addLevelName``. This makes logging with custom levels much less 
clear as one has to write something along the lines of ``logging.log(100, 
'Houston, we have a problem')`` instead of the clearer 
``logging.log('PANTS_ON_FIRE', 'Houston, we have a problem')``. Note that since 
the introduction of _checkLevel fixing that is trivial:

diff -r dafc54104884 Lib/logging/__init__.py
--- a/Lib/logging/__init__.py   Sun Jan 31 14:17:25 2010 +0100
+++ b/Lib/logging/__init__.py   Mon Feb 01 22:21:03 2010 +0100
@@ -1146,13 +1146,14 @@
 
         logger.log(level, "We have a %s", "mysterious problem", exc_info=1)
         """
-        if not isinstance(level, int):
+        try:
+            rv = _checkLevel(level)
+        except (TypeError, ValueError):
             if raiseExceptions:
-                raise TypeError("level must be an integer")
-            else:
-                return
-        if self.isEnabledFor(level):
-            self._log(level, msg, args, **kwargs)
+                raise
+
+        if self.isEnabledFor(rv):
+            self._log(rv, msg, args, **kwargs)
 
     def findCaller(self):
         """

-- 
http://mail.python.org/mailman/listinfo/python-list


search engine for python source code

2010-02-01 Thread purui
I developed a source code search engine for Python
(http://nullege.com). It helps you find samples from open source projects.
Unlike other code search engines, it really understands Python syntax
and generates more relevant results.
I'm working on expanding the source code collection now. Give it a try if
Google can't find a sample for you.

- Purui
-- 
http://mail.python.org/mailman/listinfo/python-list


How to use python-mms to send mms

2010-02-01 Thread guptha
Hi group,
After hours of googling I found out that we can use python-mms to send
MMS from a PC connected to a GSM cell phone/modem.
I would appreciate it if someone could guide me with an example of sending an
MMS using python-mms.
I am using Ubuntu 9.10. I am trying to send a ringtone via MMS.

Thanks
Ganesh Guptha
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding methods from one class to another, dynamically

2010-02-01 Thread Steve Holden
Oltmans wrote:
> On Feb 2, 2:14 am, Steve Holden  wrote:
>> Should not tee be subclassing test, not unittest.TestCase?
>>
> 
> Thank you, Steve. This worked, but I've no clue why. Can you please
> enlighten me as to why sub-classing 'test' made it happen? Please. Thanks
> again.

unittest.TestCase doesn't have any test* methods (methods whose names
begin with "test"). So neither do your subclasses.

When you subclass test, however, test has everything that
unittest.TestCase does (because it subclasses unittest.TestCase) and it
also has three test* methods. So your subclass of test has those
three methods as well.
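
For illustration, a minimal sketch of the arrangement being described (only
the class names come from the thread; the test method bodies are invented):

    import unittest

    class test(unittest.TestCase):
        # the test* methods live here, so every subclass inherits them
        def test_addition(self):
            self.assertEqual(1 + 1, 2)

        def test_truth(self):
            self.assertTrue(bool("x"))

    class tee(test):
        # inherits test_addition and test_truth; unittest discovers and
        # runs them for this class as well
        pass

    if __name__ == "__main__":
        unittest.main()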

regards
 Steve
-- 
Steve Holden   +1 571 484 6266   +1 800 494 3119
PyCon is coming! Atlanta, Feb 2010  http://us.pycon.org/
Holden Web LLC http://www.holdenweb.com/
UPCOMING EVENTS:http://holdenweb.eventbrite.com/

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding methods from one class to another, dynamically

2010-02-01 Thread Oltmans
On Feb 2, 2:14 am, Steve Holden  wrote:
>
> Should not tee be subclassing test, not unittest.TestCase?
>

Thank you, Steve. This worked, but I've no clue why. Can you please
enlighten me as to why sub-classing 'test' made it happen? Please. Thanks
again.
> regards
>  Steve
> --
> Steve Holden           +1 571 484 6266   +1 800 494 3119
> PyCon is coming! Atlanta, Feb 2010  http://us.pycon.org/
> Holden Web LLC                http://www.holdenweb.com/
> UPCOMING EVENTS:        http://holdenweb.eventbrite.com/

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding methods from one class to another, dynamically

2010-02-01 Thread Michele Simionato
Wanting the same methods to be attached to different classes is often
a code smell (perhaps it is not your case, and then use setattr as
others said). Perhaps you can just leave such methods outside any
class. I would suggest using a testing framework not based on
inheritance (e.g. nose or py.test). Perhaps your problem can be
solved with generative tests.
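
For illustration, a minimal sketch of a generative test under nose (the
checked property and the values are made up): the same check runs against
several inputs without any class hierarchy or dynamically attached methods:

    def check_roundtrip(value):
        # shared assertion, reused for every generated case
        assert int(str(value)) == value

    def test_roundtrip():
        # nose treats each yielded (callable, args...) tuple as a separate test
        for value in (0, 1, 42, -7):
            yield check_roundtrip, value
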
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problems embedding python 2.6 in C++

2010-02-01 Thread Paul
Thanks Gabriel,

I've managed to get it working and so far stable...

What wasn't working reliably:

mycppclass
   mycppclass::mycppclass()
   m_mypymodule = PyImport_Import(pModuleName)

  mycppclass::~ mycppclass()
  Py_XDECREF(m_mypymodule)

  mycppclass::callpy(funcname, args...)
  PyTuple_SetItem * args
  PyCallable_Check(func)
  PyObject_CallObject(func)

Current working version:

mycppclass
   mycppclass::mycppclass()
   {}

  mycppclass::~ mycppclass()
  {}

  mycppclass::callpy(funcname, args...)
  m_mypymodule = PyImport_Import(pModuleName)

  pyargs = PyTuple_SetItem * args
  PyCallable_Check(func)
  PyObject_CallObject(func,pyargs)

  Py_XDECREF(m_mypymodule)

So now the module is being imported on each function call (luckily I don't have
to worry about performance).

I assume this means that the internal representation of the imported module
is being corrupted by something. I found another person with a similar issue
here:
http://mail.python.org/pipermail/python-dev/2004-March/043306.html - that was
a long time ago, but it was another multi-threaded app.

I'm happy to use the working method but I'd like to understand what is going
on a bit more. Can anyone shed any further light?

Regards,
Paul.

On Tue, Feb 2, 2010 at 11:59 AM, Gabriel Genellina 
 wrote:

> En Mon, 01 Feb 2010 18:21:56 -0300, Paul  escribió:
>
>
> I'm extending some old Visual Studio 6 code to add embedded python
>> scripting. It works fine most of the time but some python function calls
>> do
>> not work as expected.
>>
>> The C++ code is a multithreaded MFC application. I was assuming that it
>> was
>> GIL issues but I have tried using the manual locking (PyEval_SaveThread &
>> PyEval_RestoreThread) and what seems to be the current method
>> (PyGILState_Ensure & PyGILState_Release)
>>
>> Here's the error I'm getting:
>>
>>  Traceback (most recent call last):
>>  File "...scripts\receipt_parser.py", line 296, in
>> get_giftcard_purchase_value
>>details = extract_transaction_details_section(test)
>>  File "...scripts\receipt_parser.py", line 204, in
>> extract_transaction_details_section
>>for line in base_details:
>> TypeError: expected string or Unicode object, NoneType found
>>
>> base_details is a list of strings (I can just define it like
>> 'base_details=["1","2","3"...]' on the line previous) and the code runs
>> fine
>> when run from standard interpreter. Many other function calls work fine
>> from
>> the embedded app.
>> I create and then Py_DECREF the function parameters and the return value
>> after each function call. The module import is created in the C++ object
>> constructor and then Py_DECREF'd in the destructor
>>
>
> Usually, errors in reference count handling prevent objects from being
> destroyed (a memory leak) or generate a GPF when accessing a now-nonexistent
> object. In principle I'd look elsewhere.
>
>
> Anyone else had issues of this kind?
>>
>
> Hard to tell without more info. base_details is built in C++ code? Is it
> actually a list, or a subclass defined by you?
> A common error is to forget to check *every* API function call for errors,
> so errors get unnoticed for a while but are reported on the next check,
> which may happen in an entirely unrelated function.
>
>
> My next try will be to use
>> sub-interpreters per thread.
>>
>
> I would not do that - I'd try to *simplify* the code to test, not make it
> more complicated.
> Does it work in a single-threaded application?
>
> --
> Gabriel Genellina
>
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: HTML Parser which allows low-keyed local changes (upon serialization)

2010-02-01 Thread Tim Arnold

"Robert"  wrote in message 
news:hk729b$na...@news.albasani.net...
> Stefan Behnel wrote:
>> Robert, 01.02.2010 14:36:
>>> Stefan Behnel wrote:
 Robert, 31.01.2010 20:57:
> I tried lxml, but after walking and making changes in the element 
> tree,
> I'm forced to do a full serialization of the whole document
> (etree.tostring(tree)) - which destroys the "human edited" format of 
> the
> original HTML code. makes it rather unreadable.
 What do you mean? Could you give an example? lxml certainly does not
 destroy anything it parsed, unless you tell it to do so.
>>> of course it does not destroy during parsing.(?)
>>

I think I understand what you want, but I don't understand why yet. Do you 
want to view the differences in an IDE or something like that? If so, why 
not pretty-print both and compare that?
--Tim


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Modules failing to add runtime library path info at link time

2010-02-01 Thread cjblaine
On Feb 1, 11:04 pm, cjblaine  wrote:
> On Feb 1, 8:00 pm, Christian Heimes  wrote:
>
>
>
> > cjblaine wrote:
> > > Where/how can I configure the appropriate portion of our Python
> > > install to do 100% the right thing instead of just 50% (-L)?
>
> > Python's distutils doesn't alter the library search path unless you tell
> > it explicitly.
>
> > > A specific example -- note the -L and lack of -R (Solaris build):
>
> > > % python setup.py build
> > > 
> > > gcc -shared build/temp.solaris-2.10-sun4u-2.6/pgmodule.o -L/afs/rcf/
> > > apps/catchall/lib -lpq -o build/lib.solaris-2.10-sun4u-2.6/_pg.so
> > > %
>
> > The Extension() class supports the rpath argument. Simply add
> > rpath="/path/to/library/dir" and you are good.
>
> > Christian
>
> Thanks for the reply.
>
> So, python setup.py build rpath="/whatever/lib" ?

Replying to myself:

No.  Actually, Christian, it looks like it's runtime_library_dirs?

class Extension:
...
  runtime_library_dirs : [string]
list of directories to search for C/C++ libraries at run time
(for shared extensions, this is when the extension is loaded)

So, how does one inject that into, I assume, setup.cfg?  Ideally it
could be done via the build command line, but I don't see a way to do
that when browsing python setup.py build --help.

( there is no setup.cfg provided in the PyGreSQL source tree, to )
( continue that example case )
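
For reference, a minimal sketch of a setup.py that passes the runtime search
path through distutils (the module name, source file and library directory
below are just borrowed from the PyGreSQL example and may not match its real
setup script):

    from distutils.core import setup, Extension

    setup(
        name="pg",
        ext_modules=[
            Extension(
                "_pg",
                sources=["pgmodule.c"],
                libraries=["pq"],
                library_dirs=["/afs/rcf/apps/catchall/lib"],          # -L at link time
                runtime_library_dirs=["/afs/rcf/apps/catchall/lib"],  # -R/-rpath at run time
            )
        ],
    )
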
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to guard against bugs like this one?

2010-02-01 Thread Carl Banks
On Feb 1, 7:33 pm, Tim Chase  wrote:
> Stephen Hansen wrote:
> > First, I don't shadow built in modules. Its really not very hard to avoid.
>
> Given the comprehensive nature of the batteries-included in
> Python, it's not as hard to accidentally shadow a built-in,
> unknown to you, but yet that is imported by a module you are
> using.  The classic that's stung me enough times (and many others
> on c.l.p and other forums, as a quick google evidences) such that
> I *finally* remember:
>
>    bash$ touch email.py
>    bash$ python
>    ...
>    >>> import smtplib
>    Traceback (most recent call last):
>      File "<stdin>", line 1, in <module>
>      File "/usr/lib/python2.5/smtplib.py", line 46, in <module>
>        import email.Utils
>    ImportError: No module named Utils
>
> Using "email.py" is an innocuous name for a script/module you
> might want to do emailish things, and it's likely you'll use
> smtplib in the same code...and kablooie, things blow up even if
> your code doesn't reference or directly use the built-in email.py.


email.py is not an innocuous name; it's a generic name in a global
namespace, which is a Bad Thing.  Plus, what does a script or module
called "email.py" actually do?  Send email?  Parse email?  "email" is a
terrible name for a module, and you deserve what you got for using it.

Name your modules "send_email.py" or "sort_email.py" or if it's a
library module of related functions, "email_handling.py".  Modules and
scripts do things (usually), they should be given action words as
names.


(**) Questionable though it may be, if the Standard Library wants to use
an "innocuous" name, it can.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Modules failing to add runtime library path info at link time

2010-02-01 Thread cjblaine
On Feb 1, 8:00 pm, Christian Heimes  wrote:
> cjblaine wrote:
> > Where/how can I configure the appropriate portion of our Python
> > install to do 100% the right thing instead of just 50% (-L)?
>
> Python's distutils doesn't alter the library search path unless you tell
> it explicitly.
>
> > A specific example -- note the -L and lack of -R (Solaris build):
>
> > % python setup.py build
> > 
> > gcc -shared build/temp.solaris-2.10-sun4u-2.6/pgmodule.o -L/afs/rcf/
> > apps/catchall/lib -lpq -o build/lib.solaris-2.10-sun4u-2.6/_pg.so
> > %
>
> The Extension() class supports the rpath argument. Simply add
> rpath="/path/to/library/dir" and you are good.
>
> Christian

Thanks for the reply.

So, python setup.py build rpath="/whatever/lib" ?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to guard against bugs like this one?

2010-02-01 Thread Carl Banks
On Feb 1, 6:34 pm, kj  wrote:
> Both scripts live in a directory filled with *hundreds* little
> one-off scripts like the two of them.  I'll call this directory
> myscripts in what follows.

[snip]

> How can the average Python programmer guard against this sort of
> time-devouring bug in the future (while remaining a Python programmer)?


Don't put hundreds of little one-off scripts in a single directory.
Python can't save you from polluting your own namespace.

Don't choose such generic names for modules.  Keep in mind module
names are potentially globally visible and any sane advice you ever
heard about globals is to use descriptive names.  I instinctively use
adjectives, compound words, and abstract nouns for the names of all my
modules so as to be more descriptive, and to avoid name conflicts with
classes and variables.

Also learn to debug better.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to guard against bugs like this one?

2010-02-01 Thread Stephen Hansen
On Mon, Feb 1, 2010 at 7:33 PM, Tim Chase wrote:

> Stephen Hansen wrote:
>
>> First, I don't shadow built in modules. Its really not very hard to avoid.
>>
>
> Given the comprehensive nature of the batteries-included in Python, it's
> not as hard to accidentally shadow a built-in, unknown to you, but yet that
> is imported by a module you are using.


I get that people run into this problem on occasion, but honestly-- it's
*really* not very hard to avoid. If you're creating a module which feels..
generic, as if you may use that same name for a certain kinda topic in a
couple different programs-- chances are it /might/ be used as a generic
provider of support for that kinda topic in the standard library.

"email", "http", "url", anything with a generic .. smell. Assume it /is/ in
the stdlib until you demonstrate otherwise, if you aren't deeply familiar
with the stdlib.

And two seconds later, you can know: 'import numbers' will work. Can't use
that. Yeah, when a new version comes out, you may have to read What's New,
and see a new module, then rename something.

If you're going to use relative imports (and that's #2, I never do-- ever--
even before PEP328 existed), you just have to deal with the flatness of the
top-level namespace and how Python broadly claims the right to put darn near
anything in there.

They do google around a bit to try to gauge how likely it is to
clash, but still.

--S
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Optimized bytecode in exec statement

2010-02-01 Thread Steven D'Aprano
On Mon, 01 Feb 2010 22:19:43 -0300, Gabriel Genellina wrote:

> En Mon, 01 Feb 2010 20:05:16 -0300, Hermann Lauer
>  escribió:
> 
>> while trying to optimize some unpack operations with self compiling
>> code I wondered howto invoke the optimization in the exec statement.
>> Even starting with -O or -OO yields in Python 2.5.2 the same dis.dis
>> code, which
>> is appended below.
> 
> There is not much difference with -O or -OO; assert statements are
> removed, __debug__ is False, docstrings are not stored (in -OO). Code
> generation is basically the same.

Also code of the form:

if __debug__:
whatever

is compiled away if __debug__ is false.



> In general, x+0 may execute arbitrary code, depending on the type of x.
> Only if you could determine that x will always be an integer, you could
> optimize +0 away. Python (the current version of CPython) does not
> perform such analysis; you may investigate psyco, Unladen Swallow,
> ShedSkin for such things.

Keep in mind that the peephole optimizer in CPython does constant 
folding: 1+1 compiles to 2. Older versions of Python, and other 
implementations, may not.
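
A quick way to see that folding for yourself (CPython 2.5 or later; the
throwaway function is just for illustration):

    import dis

    def f():
        return 1 + 1

    dis.dis(f)   # the constant 2 is loaded directly; no BINARY_ADD at run time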

But as a general rule, Python does very little optimization at compile 
time. To a first approximation, "remove asserts" is all that it does.



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to guard against bugs like this one?

2010-02-01 Thread Tim Chase

Stephen Hansen wrote:

First, I don't shadow built in modules. Its really not very hard to avoid.


Given the comprehensive nature of the batteries included in 
Python, it's not that hard to accidentally shadow a built-in 
module that is unknown to you, yet is imported by a module you are 
using.  The classic that's stung me enough times (and many others 
on c.l.p and other forums, as a quick google evidences) such that 
I *finally* remember:


  bash$ touch email.py
  bash$ python
  ...
  >>> import smtplib
  Traceback (most recent call last):
File "", line 1, in 
File "/usr/lib/python2.5/smtplib.py", line 46, in 
  import email.Utils
  ImportError: No module named Utils

Using "email.py" is an innocuous name for a script/module you 
might want to do emailish things, and it's likely you'll use 
smtplib in the same code...and kablooie, things blow up even if 
your code doesn't reference or directly use the built-in email.py.


Yes, as Chris mentions, PEP-328 absolute vs. relative imports 
should help ameliorate the problem, but it's not yet commonly 
used (unless you're using Py3, it's only at the request of a 
__future__ import in 2.5+).


-tkc




--
http://mail.python.org/mailman/listinfo/python-list


Re: How to guard against bugs like this one?

2010-02-01 Thread Steven D'Aprano
On Tue, 02 Feb 2010 02:34:07 +, kj wrote:

> I just spent about 1-1/2 hours tracking down a bug.
> 
> An innocuous little script, let's call it buggy.py, only 10 lines long,
> and whose output should have been, at most two lines, was quickly
> dumping tens of megabytes of non-printable characters to my screen (aka
> gobbledygook), and in the process was messing up my terminal *royally*. 
> Here's buggy.py:
[...]
> It turns out that buggy.py imports psycopg2, as you can see, and
> apparently psycopg2 (or something imported by psycopg2) tries to import
> some standard Python module called numbers; instead it ends up importing
> the innocent myscript/numbers.py, resulting in *absolute mayhem*.


There is no module numbers in the standard library, at least not in 2.5. 

>>> import numbers
Traceback (most recent call last):
  File "", line 1, in 
ImportError: No module named numbers

It must be specific to psycopg2.

I would think this is a problem with psycopg2 -- it sounds like it should 
be written as a package, but instead is written as a bunch of loose 
modules. I could be wrong of course, but if it is just a collection of 
modules, I'd definitely call that a poor design decision, if not a bug.


> (This is no mere Python "wart"; this is a suppurating chancre, and the
> fact that it remains unfixed is a neverending source of puzzlement for
> me.)

No, it's a wart. There's no doubt it bites people occasionally, but I've 
been programming in Python for about ten years and I've never been bitten 
by this yet. I'm sure it will happen some day, but not yet.

In this case, the severity of the bug (megabytes of binary crud to the 
screen) is not related to the cause of the bug (shadowing a module).

As for fixing it, unfortunately it's not quite so simple to fix without 
breaking backwards-compatibility. The opportunity to do so for Python 3.0 
was missed. Oh well, life goes on.


> How can the average Python programmer guard against this sort of
> time-devouring bug in the future (while remaining a Python programmer)?
> The only solution I can think of is to avoid like the plague the
> basenames of all the 200 or so /usr/lib/pythonX.XX/xyz.py{,c} files, and
> *pray* that whatever name one chooses for one's script does not suddenly
> pop up in the appropriate /usr/lib/pythonX.XX directory of a future
> release.

Unfortunately, Python makes no guarantee that there won't be some clash 
between modules. You can minimize the risks by using packages, e.g. given 
a package spam containing modules a, b, c, and d, if you refer to spam.a 
etc. then you can't clash with modules a, b, c, d, but only spam. So 
you've cut your risk profile from five potential clashes to only one.

Also, generally most module clashes are far more obvious. If you do this:

import module
x = module.y

and module is shadowed by something else, you're *much* more likely to 
get an AttributeError than megabytes of crud to the screen.

I'm sorry that you got bitten so hard by this, but in practice it's 
uncommon, and relatively mild when it happens.


> What else can one do?  Let's see, one should put every script in its own
> directory, thereby containing the damage.

That's probably a bit extreme, but your situation:

"Both scripts live in a directory filled with *hundreds* little
one-off scripts like the two of them."

is far too chaotic for my liking. You don't need to go to the extreme of 
a separate directory for each file, but you can certainly tidy things up 
a bit. For example, anything that's obsolete should be moved out of the 
way where it can't be accidentally executed or imported.




-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to guard against bugs like this one?

2010-02-01 Thread Roy Smith
In article , kj  
wrote:

> Through a *lot* of trial an error I finally discovered that the
> root cause of the problem was the fact that, in the same directory
> as buggy.py, there is *another* innocuous little script, totally
> unrelated, whose name happens to be numbers.py.
> [...]
> It turns out that buggy.py imports psycopg2, as you can see, and
> apparently psycopg2 (or something imported by psycopg2) tries to
> import some standard Python module called numbers; instead it ends
> up importing the innocent myscript/numbers.py, resulting in *absolute
> mayhem*.

I feel your pain, but this is not a Python problem, per-se.  The general 
pattern is:

1) You have something which refers to a resource by name.

2) There is a sequence of places which are searched for this name.

3) The search finds the wrong one because another resource by the same name 
appears earlier in the search path.

I've gotten bitten like this by shells finding the wrong executable (in 
$PATH).  By dynamic loaders finding the wrong library (in 
$LD_LIBRARY_PATH).  By C compilers finding the wrong #include file.  And so 
on.  This is just Python's import finding the wrong module in your 
$PYTHONPATH.
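
A quick sanity check along those lines (using the module name from this
thread) is to ask the module itself where it came from:

    import numbers
    print numbers.__file__   # reveals whether the stdlib module or a local numbers.py won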

The solution is the same in all cases.  You either have to refer to 
resources by some absolute name, or you need to make sure you set up your 
search paths correctly and know what's in them.  In your case, one possible 
solution be to make sure "." (or "") isn't in sys.path (although that might 
cause other issues).
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: HTML Parser which allows low-keyed local changes?

2010-02-01 Thread Nobody
On Sun, 31 Jan 2010 20:57:31 +0100, Robert wrote:

> I tried lxml, but after walking and making changes in the element 
> tree, I'm forced to do a full serialization of the whole document 
> (etree.tostring(tree)) - which destroys the "human edited" format 
> of the original HTML code.
> makes it rather unreadable.
> 
> is there an existing HTML parser which supports tracking/writing 
> back particular changes in a cautious way by just making local 
> changes? or a least tracks the tag start/end positions in the file?

HTMLParser, sgmllib.SGMLParser and htmllib.HTMLParser all allow you to
retrieve the literal text of a start tag (but not an end tag).
Unfortunately, they're only tokenisers, not parsers, so you'll need to
handle minimisation yourself.
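
For instance, a minimal sketch with the stdlib HTMLParser (Python 2.x
spelling; the sample markup is made up) that recovers the literal start-tag
text:

    from HTMLParser import HTMLParser

    class StartTagLogger(HTMLParser):
        def handle_starttag(self, tag, attrs):
            # get_starttag_text() returns the tag exactly as written in the source
            print repr(self.get_starttag_text())

    StartTagLogger().feed('<div   class="x"><p>hello</p></div>')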

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python and Ruby

2010-02-01 Thread John Bokma
Nobody  writes:

> On Mon, 01 Feb 2010 14:35:57 -0800, Jonathan Gardner wrote:
>
>>> If it was common-place to use Curried functions and partial application in
>>> Python, you'd probably prefer "f a b c" to "f(a)(b)(c)" as well.
>> 
>> That's just the point. It isn't common to play with curried functions
>> or monads or anything like that in computer science today. Yes,
>> Haskell exists, and is a great experiment in how such a language could
>> actually work. But at the same time, you have to have a brain the size
>> of the titanic to contain all the little details about the language
>> before you could write any large-scale application.
>
> No, not really. Haskell (and previously ML) are often used as introductory
> languages in Comp.Sci. courses (at least in the UK).

At least in the early 90's this was also the case in the Netherlands, at
the University of Utrecht. We got Miranda/Gofer, and in a different,
more advanced course Linda (Miranda for parallel machines). Also the
inner workings of functional programming languages was a course. (Can't
recall the name of the book that was used, but it was quite good IMO).

I want to start (re)learning Haskell later this year, because I liked
Miranda/Gofer a lot back then.

-- 
John Bokma   j3b

Hacking & Hiking in Mexico -  http://johnbokma.com/
http://castleamber.com/ - Perl & Python Development
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to guard against bugs like this one?

2010-02-01 Thread Chris Rebert
On Mon, Feb 1, 2010 at 6:34 PM, kj  wrote:
> I just spent about 1-1/2 hours tracking down a bug.

> Through a *lot* of trial and error I finally discovered that the
> root cause of the problem was the fact that, in the same directory
> as buggy.py, there is *another* innocuous little script, totally
> unrelated, whose name happens to be numbers.py.  (This second script
> is one I wrote as part of a little Python tutorial I put together
> months ago, and is not much more of a script than hello_world.py;
> it's baby-steps for the absolute beginner.  But apparently, it has
> a killer name!  I had completely forgotten about it.)
>
> Both scripts live in a directory filled with *hundreds* little
> one-off scripts like the two of them.  I'll call this directory
> myscripts in what follows.
>
> It turns out that buggy.py imports psycopg2, as you can see, and
> apparently psycopg2 (or something imported by psycopg2) tries to
> import some standard Python module called numbers; instead it ends
> up importing the innocent myscript/numbers.py, resulting in *absolute
> mayhem*.
>
> (This is no mere Python "wart"; this is a suppurating chancre, and
> the fact that it remains unfixed is a neverending source of puzzlement
> for me.)
>
> How can the average Python programmer guard against this sort of
> time-devouring bug in the future (while remaining a Python programmer)?
> The only solution I can think of is to avoid like the plague the
> basenames of all the 200 or so /usr/lib/pythonX.XX/xyz.py{,c} files,
> and *pray* that whatever name one chooses for one's script does
> not suddenly pop up in the appropriate /usr/lib/pythonX.XX directory
> of a future release.
>
> What else can one do?  Let's see, one should put every script in its
> own directory, thereby containing the damage.
>
> Anything else?
>
> Any suggestion would be appreciated.

I think absolute imports avoid this problem:

from __future__ import absolute_import

For details, see PEP 328:
http://www.python.org/dev/peps/pep-0328/

Cheers,
Chris
--
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python and Ruby

2010-02-01 Thread Nobody
On Mon, 01 Feb 2010 14:13:38 -0800, Jonathan Gardner wrote:

> I judge a language's simplicity by how long it takes to explain the
> complete language. That is, what minimal set of documentation do you
> need to describe all of the language?

That's not a particularly good metric, IMHO.

A simple "core" language doesn't necessarily make a language simple to
use. You can explain the entirety of pure lambda calculus or combinators
in five minutes, but you wouldn't want to write real code in either (and
you certainly wouldn't want to read such code which was written by someone
else).

For a start, languages with a particularly simple "core" tend to delegate
too much to the library. One thing which puts a lot of people off of
lisp is the lack of infix operators; after all, (* 2 (+ 3 4)) works fine
and doesn't require any additional language syntax. For an alternative,
Tcl provides the "expr" function which essentially provides a sub-language
for arithmetic expressions.

A better metric is whether using N features has O(N) complexity, or O(N^2)
(where you have to understand how each feature relates to each other
feature) or even O(2^N) (where you have to understand every possible
combination of interactions).

> With a handful of statements,
> and a very short list of operators, Python beats out every language in
> the Algol family that I know of.

Not once you move beyond the 10-minute introduction, and have to start
thinking in terms of x + y is x.__add__(y) or maybe y.__radd__(x) and also
that x.__add__(y) is x.__getattribute__('__add__')(y) (but x + y *isn't*
equivalent to the latter due to __slots__), and maybe .__coerce__() gets
involved somewhere, and don't even get me started on __metaclass__ or
__init__ versus __new__ or ...
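
A tiny sketch of the __add__/__radd__ dispatch being alluded to (the class
names are made up):

    class Left(object):
        def __add__(self, other):
            return "Left.__add__"

    class Right(object):
        def __radd__(self, other):
            return "Right.__radd__"

    print Left() + Right()     # Left.__add__   -- x.__add__(y) is tried first
    print object() + Right()   # Right.__radd__ -- falls back to y.__radd__(x)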

Yes, the original concept was very nice and clean, but everything since
then has been wedged in there by sheer force with a bloody great hammer.


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to guard against bugs like this one?

2010-02-01 Thread Chris Colbert
This is kinda akin to creating your own libc.so in a folder where you're
compiling and executing C programs, and then wondering why your program bugs
out.

On Mon, Feb 1, 2010 at 9:34 PM, kj  wrote:

>
>
> I just spent about 1-1/2 hours tracking down a bug.
>
> An innocuous little script, let's call it buggy.py, only 10 lines
> long, and whose output should have been, at most two lines, was
> quickly dumping tens of megabytes of non-printable characters to
> my screen (aka gobbledygook), and in the process was messing up my
> terminal *royally*.  Here's buggy.py:
>
>
>
> import sys
> import psycopg2
> connection_params = "dbname='%s' user='%s' password='%s'" %
> tuple(sys.argv[1:])
> conn = psycopg2.connect(connection_params)
> cur = conn.cursor()
> cur.execute('SELECT * FROM version;')
> print '\n'.join(x[-1] for x in cur.fetchall())
>
>
> (Of course, buggy.py is pretty useless; I reduced the original,
> more useful, script to this to help me debug it.)
>
> Through a *lot* of trial and error I finally discovered that the
> root cause of the problem was the fact that, in the same directory
> as buggy.py, there is *another* innocuous little script, totally
> unrelated, whose name happens to be numbers.py.  (This second script
> is one I wrote as part of a little Python tutorial I put together
> months ago, and is not much more of a script than hello_world.py;
> it's baby-steps for the absolute beginner.  But apparently, it has
> a killer name!  I had completely forgotten about it.)
>
> Both scripts live in a directory filled with *hundreds* little
> one-off scripts like the two of them.  I'll call this directory
> myscripts in what follows.
>
> It turns out that buggy.py imports psycopg2, as you can see, and
> apparently psycopg2 (or something imported by psycopg2) tries to
> import some standard Python module called numbers; instead it ends
> up importing the innocent myscript/numbers.py, resulting in *absolute
> mayhem*.
>
> (This is no mere Python "wart"; this is a suppurating chancre, and
> the fact that it remains unfixed is a neverending source of puzzlement
> for me.)
>
> How can the average Python programmer guard against this sort of
> time-devouring bug in the future (while remaining a Python programmer)?
> The only solution I can think of is to avoid like the plague the
> basenames of all the 200 or so /usr/lib/pythonX.XX/xyz.py{,c} files,
> and *pray* that whatever name one chooses for one's script does
> not suddenly pop up in the appropriate /usr/lib/pythonX.XX directory
> of a future release.
>
> What else can one do?  Let's see, one should put every script in its
> own directory, thereby containing the damage.
>
> Anything else?
>
> Any suggestion would be appreciated.
>
> TIA!
>
> ~k
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to guard against bugs like this one?

2010-02-01 Thread Stephen Hansen
>
> (This is no mere Python "wart"; this is a suppurating chancre, and
> the fact that it remains unfixed is a neverending source of puzzlement
> for me.)
>
> How can the average Python programmer guard against this sort of
> time-devouring bug in the future (while remaining a Python programmer)?
> The only solution I can think of is to avoid like the plague the
> basenames of all the 200 or so /usr/lib/pythonX.XX/xyz.py{,c} files,
> and *pray* that whatever name one chooses for one's script does
> not suddenly pop up in the appropriate /usr/lib/pythonX.XX directory
> of a future release.
>


First, I don't shadow built-in modules. It's really not very hard to avoid.

Secondly, I use packages to structure my libraries, and avoid junk
directories of a hundred-some-odd 'scripts'.

Third, I don't execute scripts in that directory structure directly, but
instead do python -c 'from package.blah import main; main.main()' or some
such. Usually via some short-cut, or a runner batch file.

Note that the third avoids the above problem entirely, but that's not why I
do it. Its just a side-effect.

--S
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python and Ruby

2010-02-01 Thread John Bokma
Jonathan Gardner  writes:

> One of the bad things with languages like perl

FYI: the language is called Perl, the program that executes a Perl
program is called perl.

> without parentheses is that getting a function ref is not obvious. You
> need even more syntax to do so. In perl:
>
>  foo();   # Call 'foo' with no args.
>  $bar = foo;  # Call 'foo' with no args, assign to '$bar'
>  $bar = &foo; # Don't call 'foo', but assign a pointer to it to '$bar'
>   # By the way, this '&' is not the bitwise-and '&'

It should be $bar = \&foo 
Your example actually calls foo...

[..]

> One is simple, consistent, and easy to explain. The other one requires
> the introduction of advanced syntax and an entirely new syntax to make
> function calls with references.

The syntax follows that of referencing and dereferencing:

$bar = \@array;   # bar now contains a reference to the array
$bar->[ 0 ];      # first element of the array referenced by bar
$bar = \%hash;    # bar now contains a reference to a hash
$bar->{ key };    # value associated with key of the hash ref. by bar
$bar = \&foo;     # bar now contains a reference to a sub
$bar->( 45 );     # call the sub ref. by bar with 45 as an argument

Consistent: yes. New syntax? No.

Also, it helps to think of

$ as a thing
@ as thingies indexed by numbers
% as thingies indexed by keys

-- 
John Bokma   j3b

Hacking & Hiking in Mexico -  http://johnbokma.com/
http://castleamber.com/ - Perl & Python Development
-- 
http://mail.python.org/mailman/listinfo/python-list


How to guard against bugs like this one?

2010-02-01 Thread kj


I just spent about 1-1/2 hours tracking down a bug.

An innocuous little script, let's call it buggy.py, only 10 lines
long, and whose output should have been, at most two lines, was
quickly dumping tens of megabytes of non-printable characters to
my screen (aka gobbledygook), and in the process was messing up my
terminal *royally*.  Here's buggy.py:



import sys
import psycopg2
connection_params = "dbname='%s' user='%s' password='%s'" % tuple(sys.argv[1:])
conn = psycopg2.connect(connection_params)
cur = conn.cursor()
cur.execute('SELECT * FROM version;')
print '\n'.join(x[-1] for x in cur.fetchall())


(Of course, buggy.py is pretty useless; I reduced the original,
more useful, script to this to help me debug it.)

Through a *lot* of trial and error I finally discovered that the
root cause of the problem was the fact that, in the same directory
as buggy.py, there is *another* innocuous little script, totally
unrelated, whose name happens to be numbers.py.  (This second script
is one I wrote as part of a little Python tutorial I put together
months ago, and is not much more of a script than hello_world.py;
it's baby-steps for the absolute beginner.  But apparently, it has
a killer name!  I had completely forgotten about it.)

Both scripts live in a directory filled with *hundreds* of little
one-off scripts like the two of them.  I'll call this directory
myscripts in what follows. 

It turns out that buggy.py imports psycopg2, as you can see, and
apparently psycopg2 (or something imported by psycopg2) tries to
import some standard Python module called numbers; instead it ends
up importing the innocent myscript/numbers.py, resulting in *absolute
mayhem*.

(This is no mere Python "wart"; this is a suppurating chancre, and
the fact that it remains unfixed is a neverending source of puzzlement
for me.)

How can the average Python programmer guard against this sort of
time-devouring bug in the future (while remaining a Python programmer)?
The only solution I can think of is to avoid like the plague the
basenames of all the 200 or so /usr/lib/pythonX.XX/xyz.py{,c} files,
and *pray* that whatever name one chooses for one's script does
not suddenly pop up in the appropriate /usr/lib/pythonX.XX directory
of a future release.

What else can one do?  Let's see, one should put every script in its
own directory, thereby containing the damage.

Anything else?

Any suggestion would be appreciated.

TIA!

~k
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python and Ruby

2010-02-01 Thread Nobody
On Mon, 01 Feb 2010 14:35:57 -0800, Jonathan Gardner wrote:

>> If it was common-place to use Curried functions and partial application in
>> Python, you'd probably prefer "f a b c" to "f(a)(b)(c)" as well.
> 
> That's just the point. It isn't common to play with curried functions
> or monads or anything like that in computer science today. Yes,
> Haskell exists, and is a great experiment in how such a language could
> actually work. But at the same time, you have to have a brain the size
> of the titanic to contain all the little details about the language
> before you could write any large-scale application.

No, not really. Haskell (and previously ML) are often used as introductory
languages in Comp.Sci. courses (at least in the UK).

You don't need to know the entire language before you can use any of it
(if you did, Python would be deader than a certain parrot; Python's dark
corners are *really* dark).

The lack of mutable state (or at least, the isolation of it within monads)
eliminates a lot of potential problems. How many Python novices get
tripped up by "x = y = [] ; x.append(...); # now y has changed"?

And in spite of the category theory behind monads, Haskell's I/O system
really isn't any more complex than that of any other language, beyond the
constraint that you can only use it in "procedures" (i.e. something
returning an IO instance), not in functions. Which for the most part, is a
net win, as it forces you to maintain a reasonable degree of structure.

Now, if you actually want to use everything the language has to offer, you
can run into some fairly hairy error messages. But then I've found that to
be a common problem with generic programming in general. E.g. error
messages relating to the C++ STL can be quite insanely complex
(particularly when the actual error is "there are so many nested templates
that the mangled function name is longer than the linker can handle" and
it's trying to explain *where* the error is).

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python and Ruby

2010-02-01 Thread Chris Rebert
On Mon, Feb 1, 2010 at 6:14 PM, MRAB  wrote:
> Nobody wrote:
>> On Sun, 31 Jan 2010 22:36:32 +, Steven D'Aprano wrote:
 for example, in if you have a function 'f' which takes two parameters to
 call the function and get the result you use:

  f 2 3

 If you want the function itself you use:

   f
>>>
>>> How do you call a function of no arguments?
>>
>> There's no such thing. All functions take one argument and return a value.
>>
>> As functions don't have side-effects, there is seldom much point in having
>> a function with no arguments or which doesn't return a value. In cases
>> where it is useful (i.e. a value must have function type), you can use the
>> unit type "()" (essentially a zero-element tuple), e.g.:
>>
>>        f () = 1
>> or:
>>        f x = ()
>>
> A function with no arguments could be used as a lazy constant, generated
> only on demand.

The usefulness of that depends on a language's evaluation strategy.
Haskell, for instance, uses lazy evaluation by default, so your use
case doesn't apply in that instance.

Cheers,
Chris
--
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python and Ruby

2010-02-01 Thread MRAB

Nobody wrote:

On Sun, 31 Jan 2010 22:36:32 +, Steven D'Aprano wrote:


for example, in if you have a function 'f' which takes two parameters to
call the function and get the result you use:

 f 2 3

If you want the function itself you use:

   f

How do you call a function of no arguments?


There's no such thing. All functions take one argument and return a value.

As functions don't have side-effects, there is seldom much point in having
a function with no arguments or which doesn't return a value. In cases
where it is useful (i.e. a value must have function type), you can use the
unit type "()" (essentially a zero-element tuple), e.g.:

f () = 1
or:
f x = ()


A function with no arguments could be used as a lazy constant, generated
only on demand.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python and Ruby

2010-02-01 Thread Nobody
On Sun, 31 Jan 2010 22:36:32 +, Steven D'Aprano wrote:

>> for example, in if you have a function 'f' which takes two parameters to
>> call the function and get the result you use:
>> 
>>  f 2 3
>> 
>> If you want the function itself you use:
>> 
>>f
> 
> How do you call a function of no arguments?

There's no such thing. All functions take one argument and return a value.

As functions don't have side-effects, there is seldom much point in having
a function with no arguments or which doesn't return a value. In cases
where it is useful (i.e. a value must have function type), you can use the
unit type "()" (essentially a zero-element tuple), e.g.:

f () = 1
or:
f x = ()

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Optimized bytecode in exec statement

2010-02-01 Thread Gabriel Genellina
En Mon, 01 Feb 2010 20:05:16 -0300, Hermann Lauer  
 escribió:



while trying to optimize some unpack operations with self compiling
code I wondered howto invoke the optimization in the exec statement.
Even starting with -O or -OO yields in Python 2.5.2 the same dis.dis  
code, which

is appended below.


There is not much difference with -O or -OO; assert statements are  
removed, __debug__ is False, docstrings are not stored (in -OO). Code  
generation is basically the same.


My interest would be to reduce the LOAD_FAST ops of "s" with DUP_TOP on  
the stack

(are there hash lookups behind those ?)


No, no lookup is performed at run time. LOAD_FAST takes an integer  
argument, an index into the locals array. The compiler knows (by static  
analysis, at compile time) which names are local; those objects are stored  
in a C array. I've not done such a microbenchmark, but using DUP_TOP instead  
might even be slower.



and of course to optimize the integer
arithmetics following below (removing +-0 etc).


Can't you remove it from source beforehand? (or, from  
`r.rra.LAST3.getter(ds=0,row=0)` since it looks like a code generator)


In general, x+0 may execute arbitrary code, depending on the type of x.  
Only if you could determine that x will always be an integer, you could  
optimize +0 away. Python (the current version of CPython) does not perform  
such analysis; you may investigate psyco, Unladen Swallow, ShedSkin for  
such things.


--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: Optimized bytecode in exec statement

2010-02-01 Thread Christian Heimes
Terry Reedy wrote:
> Currently, as far as I know, -0 just removes asserts.

-O (not -0) also sets __debug__ to False.

Christian
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: For loop searching takes too long!

2010-02-01 Thread Steven D'Aprano
On Mon, 01 Feb 2010 14:04:01 -0800, Jonathan Gardner wrote:

> Having worked with datasets that are truly enormous, whenever I see a
> list that is unbounded (as this appears to be), I freak out because
> unbounded can be a really, really big number. Might as well write
> software that works for the truly enormous case and the really large
> case and be done with it once and for all.


Then write it to work on iterators, if at all possible, and you won't 
care how large the data stream is. Even databases have limits!
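
A minimal sketch of that style (the file name and predicate are made up): the
search touches one item at a time, so the size of the stream never matters:

    def find_first(predicate, iterable):
        # lazily scan any iterable; only one item is held in memory at a time
        for item in iterable:
            if predicate(item):
                return item
        return None

    with open("huge.log") as f:            # a file object is itself an iterator
        hit = find_first(lambda line: "ERROR" in line, f)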

Of course you should write your code to deal with your expected data, but 
it is possible to commit overkill as well as underkill. There are trade-
offs between development effort, runtime speed, memory usage, and size of 
data you can deal with, and maximizing the final one is not always the 
best strategy.

Look at it this way... for most applications, you probably don't use 
arbitrary precision floats even though floating point numbers can have an 
unbounded number of decimal places. But in practice, you almost never 
need ten decimal places to pi, let alone an infinite number. If the 
creators of floating point libraries wrote software "that works for the 
truly enormous case", the overhead would be far too high for nearly all 
applications.

Choose your algorithm according to your expected need. If you under-
estimate your need, you will write slow code, and if you over-estimate 
it, you'll write slow code too. Getting that balance right is why they 
pay software architects the big bucks *wink*



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Optimized bytecode in exec statement

2010-02-01 Thread Terry Reedy

On 2/1/2010 6:05 PM, Hermann Lauer wrote:

Dear All,

while trying to optimize some unpack operations with self compiling
code I wondered howto invoke the optimization in the exec statement.
Even starting with -O or -OO yields in Python 2.5.2 the same dis.dis code, which
is appended below.


Currently, as far as I know, -0 just removes asserts.



My interest would be to reduce the LOAD_FAST ops of "s" with DUP_TOP on the 
stack
(are there hash lookups behind those ?)


Load_fast is a locals array lookup. load_fast  0 (s) means "put 
locals[0] on the top of the stack". The '(s)' is added for the human reader.



and of course to optimize the integer
arithmetics following below (removing +-0 etc).


Something + 0 is not necessarily something. It depends on the type of 
something. So the interpreter will not optimize it away. What concretely 
happens with int+0 is a different matter. You can remove it though, if 
you want to get into byte-code hacking, but why write '+0' if you do not 
want it?




Thanks for any ideas,
greetings
   Hermann



import dis
g=r.rra.LAST3.getter(ds=0,row=0)


class fa(struct.Struct):
  def __init__(s):
    super(fa,s).__init__('d')

  def __call__(s,):
    return s.unpack_from(s.buf,5392+(((s.lastrow()-0)%2880)*3+0)*8)[0]


dis.dis(g.__call__)

   7   0 LOAD_FAST0 (s)
   3 LOAD_ATTR0 (unpack_from)
   6 LOAD_FAST0 (s)
   9 LOAD_ATTR1 (buf)
  12 LOAD_CONST   1 (5392)
  15 LOAD_FAST0 (s)
  18 LOAD_ATTR2 (lastrow)
  21 CALL_FUNCTION0
  24 LOAD_CONST   2 (0)
  27 BINARY_SUBTRACT
  28 LOAD_CONST   3 (2880)
  31 BINARY_MODULO
  32 LOAD_CONST   4 (3)
  35 BINARY_MULTIPLY
  36 LOAD_CONST   2 (0)
  39 BINARY_ADD
  40 LOAD_CONST   5 (8)
  43 BINARY_MULTIPLY
  44 BINARY_ADD
  45 CALL_FUNCTION2
  48 LOAD_CONST   2 (0)
  51 BINARY_SUBSCR
  52 RETURN_VALUE


Terry Jan Reedy



--
http://mail.python.org/mailman/listinfo/python-list


Re: Modules failing to add runtime library path info at link time

2010-02-01 Thread Christian Heimes
cjblaine wrote:
> Where/how can I configure the appropriate portion of our Python
> install to do 100% the right thing instead of just 50% (-L)?

Python's distutils doesn't alter the library search path unless you tell
it explicitly.

> A specific example -- note the -L and lack of -R (Solaris build):
> 
> % python setup.py build
> 
> gcc -shared build/temp.solaris-2.10-sun4u-2.6/pgmodule.o -L/afs/rcf/
> apps/catchall/lib -lpq -o build/lib.solaris-2.10-sun4u-2.6/_pg.so
> %

The Extension() class supports the rpath argument. Simply add
rpath="/path/to/library/dir" and you are good.

Christian
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3147 - new .pyc format

2010-02-01 Thread Steven D'Aprano
On Mon, 01 Feb 2010 21:19:52 +0100, Daniel Fetchinson wrote:

>> Personally, I think it is a terribly idea to keep the source file and
>> byte code file in such radically different places. They should be kept
>> together. What you call "clutter" I call having the files that belong
>> together kept together.
> 
> I see why you think so, it's reasonable, however there is compelling
> argument, I think, for the opposite view: namely to keep things
> separate. An average developer definitely wants easy access to .py
> files. However I see no good reason for having access to .pyc files. I
> for one have never inspected a .pyc file. Why would you want to have a
> .pyc file at hand?

If you don't care about access to .pyc files, why do you care where they 
are? If they are in a subdirectory module.pyr, then shrug and ignore the 
subdirectory. 

If you (generic you) are one of those developers who don't care 
about .pyc files, then when you are browsing your source directory and 
see this:


module.py
module.pyc 

you just ignore the .pyc file. Or delete it, and Python will re-create it 
as needed. So if you see

module.pyr/

just ignore that as well.



> If we don't really want to have .pyc files in convenient locations
> because we (almost) never want to access them really, then I'd say it's
> a good idea to keep them totally separate so that they don't get in the
> way.

I like seeing them in the same place as the source file, because when I 
start developing a module, I often end up renaming it multiple times 
before it settles on a final name. When I rename or move it, I delete 
the .pyc file, and that ensures that if I miss changing an import, and 
try to import the old name, it will fail.

By hiding the .pyc file elsewhere, it is easy to miss deleting one, and 
then the import won't fail, it will succeed, but use the old, obsolete 
byte code.



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: recv_into(bytearray) complains about a "pinned buffer"

2010-02-01 Thread Andrew Dalke
On Feb 2, 12:12 am, Martin v. Loewis wrote:
> My recommendation would be to not use recv_into in 2.x, but only in 3.x.

> I don't think that's the full solution. The array module should also
> implement the new buffer API, so that it would also fail with the old
> recv_into.

Okay. But recv_into was added in 2.5 and the test case in
2.6's test_socket.py clearly allows an array there:


def testRecvInto(self):
buf = array.array('c', ' '*1024)
nbytes = self.cli_conn.recv_into(buf)
self.assertEqual(nbytes, len(MSG))
msg = buf.tostring()[:len(MSG)]
self.assertEqual(msg, MSG)

Checking the Koders and Google Code search engines, I found one project
which used recv_into, with the filename bmpreceiver.py. It
uses an array.array("B", [0] * length).

Clearly it was added to work with an array, and it's
being used with an array. Why shouldn't people use it
with Python 2.x?

Andrew
da...@dalkescientific.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Optimized bytecode in exec statement

2010-02-01 Thread Hermann Lauer
Dear All,

while trying to optimize some unpack operations with self-compiling 
code I wondered how to invoke the optimization in the exec statement.
Even starting with -O or -OO yields, in Python 2.5.2, the same dis.dis code, which
is appended below.

My interest would be to reduce the LOAD_FAST ops of "s" with DUP_TOP on the
stack (are there hash lookups behind those?) and of course to optimize the
integer arithmetic following below (removing +-0 etc).

Thanks for any ideas,
greetings
  Hermann


>>> import dis
>>> g=r.rra.LAST3.getter(ds=0,row=0)

class fa(struct.Struct):
  def __init__(s):
super(fa,s).__init__('d')

  def __call__(s,):
    return s.unpack_from(s.buf,5392+(((s.lastrow()-0)%2880)*3+0)*8)[0]

>>> dis.dis(g.__call__)
  7   0 LOAD_FAST0 (s)
  3 LOAD_ATTR0 (unpack_from)
  6 LOAD_FAST0 (s)
  9 LOAD_ATTR1 (buf)
 12 LOAD_CONST   1 (5392)
 15 LOAD_FAST0 (s)
 18 LOAD_ATTR2 (lastrow)
 21 CALL_FUNCTION0
 24 LOAD_CONST   2 (0)
 27 BINARY_SUBTRACT 
 28 LOAD_CONST   3 (2880)
 31 BINARY_MODULO   
 32 LOAD_CONST   4 (3)
 35 BINARY_MULTIPLY 
 36 LOAD_CONST   2 (0)
 39 BINARY_ADD  
 40 LOAD_CONST   5 (8)
 43 BINARY_MULTIPLY 
 44 BINARY_ADD  
 45 CALL_FUNCTION2
 48 LOAD_CONST   2 (0)
 51 BINARY_SUBSCR   
 52 RETURN_VALUE


-- 
Netzwerkadministration/Zentrale Dienste, Interdiziplinaeres 
Zentrum fuer wissenschaftliches Rechnen der Universitaet Heidelberg
IWR; INF 368; 69120 Heidelberg; Tel: (06221)54-8236 Fax: -5224
Email: hermann.la...@iwr.uni-heidelberg.de
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: interaction of mode 'r+', file.write(), and file.tell(): a bug or undefined behavior?

2010-02-01 Thread Aahz
In article <4b617f4...@dnews.tpgi.com.au>,
Lie Ryan   wrote:
>
>f = open('input.txt', 'r+')
>for line in f:
>s = line.replace('python', 'PYTHON')
># f.tell()
>f.write(s)
>
>When f.tell() is commented, 'input.txt' does not change; but when
>uncommented, the f.write() succeeded writing into the 'input.txt'
>(surprisingly, but not entirely unexpected, at the end of the file).

Another possible issue is that using a file iterator is generally not
compatible with direct file operations.
-- 
Aahz (a...@pythoncraft.com)   <*> http://www.pythoncraft.com/

import antigravity
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python and Ruby

2010-02-01 Thread Paul Rubin
Jonathan Gardner  writes:
> I judge a language's simplicity by how long it takes to explain the
> complete language. That is, what minimal set of documentation do you
> need to describe all of the language? With a handful of statements,
> and a very short list of operators, Python beats out every language in
> the Algol family that I know of.

Python may have been like that in the 1.5 era.  By now it's more
complex, and not all that well documented.  Consider the different
subclassing rules for new and old style classes, the interaction of
metaclasses and multiple inheritance, the vagaries of what operations
are thread-safe without locks, the inter-thread signalling mechanism
that can only be invoked through the C API, the mysteries of
generator-based coroutines, etc.  I've never used Ruby and I think its
syntax is ugly, but everyone tells me it's more uniform.

Simplicity is not necessarily such a good thing anyway.  Consider FORTH.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: recv_into(bytearray) complains about a "pinned buffer"

2010-02-01 Thread Martin v. Loewis
Antoine Pitrou wrote:
> Le Mon, 01 Feb 2010 03:30:56 +0100, Martin v. Loewis a écrit :
>>> Is this a bug in Python 2.6 or a deliberate choice regarding
>>> implementation concerns I don't know about?
>> It's actually a bug also that you pass an array; doing so *should* give
>> the very same error.
> 
> Well, if you can give neither an array nor a bytearray to recv_into(), 
> what *could* you give it?

My recommendation would be to not use recv_into in 2.x, but only in 3.x.

> recv_into() should simply be fixed to use the new buffer API, as it does 
> in 3.x.

I don't think that's the full solution. The array module should also
implement the new buffer API, so that it would also fail with the old
recv_into.

Regards,
Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: myths about python 3

2010-02-01 Thread Anssi Saari
Blog  writes:

> Where did you come up with that information? Almost all of the major
> distros ship with 2.6.x - CentOS, OpenSuSe, Ubuntu, Fedora. (Debian
> does ship with 2.5, but the next major release "sid" is due out in Q2)

I don't see Python 2.6 in my CentOS 5.4 installation. All I see is
2.4. Same as RHEL and I'd say that's a fairly major distribution too.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python and Ruby

2010-02-01 Thread Chris Rebert
On Mon, Feb 1, 2010 at 2:28 PM, Jonathan Gardner
 wrote:
> On Jan 31, 3:01 am, rantingrick  wrote:
>> On Jan 30, 10:43 am, Nobody  wrote:
>> > That's also true for most functional languages, e.g. Haskell and ML, as
>> > well as e.g. Tcl and most shells. Why require "f(x)" or "(f x)" if "f x"
>> > will suffice?
>>
>> yuck! wrapping the arg list with parentheses (python way) makes the
>> most sense. It's too easy to misread something like this
>>
>> onetwothree four five six
>>
>> onetwothree(four, five, six) #ahhh... plain english.
>
> In Lisp-ish languages, you have a list of stuff that represents a
> function call:
>
>  (a b c d)
>
> means: Call "a" with values (b, c, d)
>
> While this certainly doesn't agree with what you learned in Algebra,
> it is a reasonable syntax that exposes the code-data duality of
> programs. There is, however, one fatal flaw. Why is the first element
> so different than the rest? This is inconsistent with what people who
> are unfamiliar with the language would expect. Indeed, in teaching
> Lisp, learners have to be reminded about how the evaluator looks at
> lists and processes them.
>
> I would expect a clear, simple language to have exactly one way to
> call a function. This calling notation would clearly distinguish
> between the function and its parameters. There are quite a few
> options, and it turns out that "function(arg, arg, arg)" is a really
> good compromise.
>
> One of the bad things with languages like perl and Ruby that call
> without parentheses is that getting a function ref is not obvious. You
> need even more syntax to do so. In perl:
>
>  foo();       # Call 'foo' with no args.
>  $bar = foo;  # Call 'foo' with no args, assign to '$bar'
>  $bar = &foo; # Don't call 'foo', but assign a pointer to it to '$bar'
>              # By the way, this '&' is not the bitwise-and '&'
>  $bar->()     # Call whatever '$bar' is pointing at with no args
>
> Compare with python:
>
>  foo()       # Call 'foo' with no args.
>  bar = foo() # 'bar' is now pointing to whatever 'foo()' returned
>  bar = foo   # 'bar' is now pointing to the same thing 'foo' points to
>  bar()       # Call 'bar' with no args
>
> One is simple, consistent, and easy to explain. The other one requires
> the introduction of advanced syntax and an entirely new syntax to make
> function calls with references.

Ruby isn't nearly as bad as Perl in this regard; at least it doesn't
introduce extra syntax (though there are extra method calls):

foo# Call 'foo' with no args.
bar = foo  # Call 'foo' with no args, assign to bar
bar = method(:foo)  # 'bar' is now referencing the 'foo' "function"
bar.call# Call 'bar' (i.e. 'foo') with no args

Cheers,
Chris
--
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Iterating over a function call

2010-02-01 Thread Terry Reedy

On 2/1/2010 3:05 PM, Arnaud Delobelle wrote:


You have itertools.consume which is close to what you want:

 consume(imap(func, iterable)) # 2.x

 consume(map(func, iterable)) # 3.x


Consume is not in itertools. It is one of many recipes in the doc
(9.7.2). For exhausting an iterator, use collections.deque(iterator, maxlen=0).
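
For reference, a small self-contained sketch of that idiom (consume()
follows the docs recipe mentioned above; the show() helper and the sample
range are made up for illustration):

import collections
from itertools import imap   # 2.x; in 3.x plain map() already returns an iterator

def consume(iterator):
    # exhaust an iterator, discarding every value (the maxlen=0 deque trick)
    collections.deque(iterator, maxlen=0)

def show(item):
    print item               # side effect only; the return value is ignored

consume(imap(show, range(3)))    # prints 0, 1, 2 and keeps nothing around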


--
http://mail.python.org/mailman/listinfo/python-list


Re: Problems embedding python 2.6 in C++

2010-02-01 Thread Gabriel Genellina

En Mon, 01 Feb 2010 18:21:56 -0300, Paul  escribió:


I'm extending some old Visual Studio 6 code to add embedded python
scripting. It works fine most of the time but some python function calls  
do

not work as expected.

The C++ code is a multithreaded MFC application. I was assuming that it  
was

GIL issues but I have tried using the manual locking (PyEval_SaveThread &
PyEval_RestoreThread) and what seems to be the current method
(PyGILState_Ensure & PyGILState_Release)

Here's the error I'm getting:

 Traceback (most recent call last):
  File "...scripts\receipt_parser.py", line 296, in
get_giftcard_purchase_value
details = extract_transaction_details_section(test)
  File "...scripts\receipt_parser.py", line 204, in
extract_transaction_details_section
for line in base_details:
TypeError: expected string or Unicode object, NoneType found

base_details is a list of strings (I can just define it like
'base_details=["1","2","3"...]' on the line previous) and the code runs  
fine
when run from standard interpreter. Many other function calls work fine  
from

the embedded app.
I create and then Py_DECREF the function parameters and the return value
after each function call. The module import is created in the C++ object's
constructor and then Py_DECREF'd in the destructor


Usually, errors in reference count handling prevent objects from being
destroyed (a memory leak) or generate a GPF when accessing a
now-nonexistent object. In principle I'd look elsewhere.



Anyone else had issues of this kind?


Hard to tell without more info. base_details is built in C++ code? Is it  
actually a list, or a subclass defined by you?
A common error is to forget to check *every* API function call for errors,  
so errors go unnoticed for a while but are reported on the next check,  
which may happen in an entirely unrelated function.



My next try will be to use
sub-interpreters per thread.


I would not do that - I'd try to *simplify* the code to test, not make it  
more complicated.

Does it work in a single-threaded application?

--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: Iterating over a function call

2010-02-01 Thread Terry Reedy

On 2/1/2010 11:50 AM, Gerald Britton wrote:

Hi -- I have many sections of code like this:

 for value in value_iterator:
  value_function(value)

I noticed that this does two things I don't like:

1. looks up "value_function" and "value" for each iteration, but
"value_function" doesn't change.


What do you mean by 'looks up'? And why are you bothered by normal Python 
expression evaluation? Within a function, the 'look_up' is a fast 
LOAD_FAST instruction.



2. side effect of (maybe) leaking the iterator variable "value" into
the code following the loop (if the iterator is not empty).


So? It is sometimes useful.


I can take care of 2 by explicitly deleting the variable at the end:

del value

but I'd probably forget to do that sometimes.


So? If having 'value' bound breaks your subsequent code, I consider it 
buggy.


  I then realized that,

in the 2.x series, I can accomplish the same thing with:

 map(value_function, value_iterator)

and avoid both problems BUT map() returns a list which is never used.
Not a big deal for small iterables, I guess, but it seems messy.  Upon
conversion to 3.x I have to explicitly list-ify it:

 list(map(value_function, value_iterator))

which works but again the list returned is never used (extra work) and
has to be gc'd I suppose (extra memory).

It's easy to make a little function to take care of this (2.x):

 from itertools import imap
 def apply(function, iterable):
 for item in imap(function, iterable):
 pass


collections.deque(imap(function, iterable), maxlen=0)
will do nearly the same and may be faster.


then later:

apply(value_function, value_iterator)

or something similar thing in 3.x, but that just adds an additional
function def that I have to include whenever I want to do something
like this.

So... I'm wondering if there is any interest in an apply() built-in
function that would work like map() does in 2.x (calls the function
with each value returned by the iterator) but return nothing.  Maybe
"apply" isn't the best name; it's just the first one that occurred to
me.

Or is this just silly and should I forget about it?


In my opinion, forget about it.

Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Modules failing to add runtime library path info at link time

2010-02-01 Thread cjblaine
Hey everyone, this has been driving me crazy for long enough now that
I'm motivated to post and find an answer.

Before I pose my question, let me state that using LD_LIBRARY_PATH is
not the answer :)

We have things installed in odd places, such as very specific versions
of libraries to link against, etc.

When I build a module (let's say PyGreSQL for instance) via python
setup.py build, the link part includes a proper -L argument
(-L/some/weird/lib, because /some/weird/bin was in PATH), but omits
the additional runtime linker arguments that are needed.

For Solaris that's -R/some/weird/lib

For Linux that's -Xlinker -rpath -Xlinker /some/weird/lib

Where/how can I configure the appropriate portion of our Python
install to do 100% the right thing instead of just 50% (-L)?

A specific example -- note the -L and lack of -R (Solaris build):

% python setup.py build
...
gcc -shared build/temp.solaris-2.10-sun4u-2.6/pgmodule.o -L/afs/rcf/
apps/catchall/lib -lpq -o build/lib.solaris-2.10-sun4u-2.6/_pg.so
%
-- 
http://mail.python.org/mailman/listinfo/python-list


converting XML to hash/dict/CustomTreeCtrl

2010-02-01 Thread Astan Chee

Hi,
I have xml files that I want to convert to a hash/dict and then further
place in a wx CustomTreeCtrl based on the structure. The problem I am
having now is that the XML file is very unusual and there aren't any
unique identifiers to be put in a dict, and because there are no unique
variables, finding the value of it from a CustomTreeCtrl is a bit tricky.

I would appreciate any help or links to projects similar to this.
Thanks

The XML file looks something like this:

http://www.w3.org/2001/XMLSchema-instance";>
   kind="position">

   
   This is the note 
on calculation times

   

   
   609.081574
   2531.972081
   65.119100
   
   
   1772.011230
   
   kind="timers">

   
   72.418861
   

   
   28.285192
   
   
   0.000
   
   
   
   607.432373
   
   
   
   
   
   
   483328

   483328
   
   
   4182777856
   4182777856
   
   1

   1943498
   0
   
   
   
   1640100156
   411307840
   
   
   709596712
   1406752
   
   
   737720720
   0
   
   
   607.432373
   
   
   

   5164184694
   2054715622
   

   
   



--
http://mail.python.org/mailman/listinfo/python-list


Re: Python and Ruby

2010-02-01 Thread Jonathan Gardner
On Jan 31, 12:43 pm, Nobody  wrote:
>
> If it was common-place to use Curried functions and partial application in
> Python, you'd probably prefer "f a b c" to "f(a)(b)(c)" as well.
>

That's just the point. It isn't common to play with curried functions
or monads or anything like that in computer science today. Yes,
Haskell exists, and is a great experiment in how such a language could
actually work. But at the same time, you have to have a brain the size
of the titanic to contain all the little details about the language
before you could write any large-scale application.

Meanwhile, Python's syntax and language is simple and clean, and
provides tremendous expressiveness without getting in the way of the
programmer.

Comparing Python's syntax to Haskell's syntax, Python is simpler.
Comparing what Python can do to what Haskell can do, Haskell is much
faster at certain tasks and allows the expression of certain things
that are difficult to express in Python. But at the end of the day,
the difference really doesn't matter that much.

Now, compare Python versus Language X along the same lines, and the
end result is that (a) Python is extraordinarily more simple than
Langauge X, and (b) Python is comparable in expressiveness to Language
X. That's the #1 reason why I like Python, and why saying Ruby and
Python are similar isn't correct.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Iterating over a function call

2010-02-01 Thread Vlastimil Brom
2010/2/1 Gerald Britton :
> Hi -- I have many sections of code like this:
>
>    for value in value_iterator:
>         value_function(value)
>
> I noticed that this does two things I don't like:
>...
> --
> Gerald Britton
> --
> http://mail.python.org/mailman/listinfo/python-list
>

Hi,
just to add to the collection of possible approaches (personally, I
find the straightforward versions fine, given the task ... )
If you know that the called function (used for its side effect in any
case) doesn't return any "true" value, you could use
any(...) over a generator expression or imap
(or, inversely, all(...) if there are "true" returns all the time).

Not that it improves clarity, but something like a dummy
reduce(lambda x, y: None, (func(x) for x in iterable), None) might (kind of)
work for arbitrary returns. Both variants only leave behind a redundant
boolean or None value.
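
A quick self-contained sketch of the any() variant (the log() name and the
sample list are just for illustration; it relies on the mapped function
returning a false value such as None):

from itertools import imap

def log(item):
    print item                     # side effect; returns None, which is false

items = [1, 2, 3]
any(imap(log, items))              # consumes the whole iterable
any(log(item) for item in items)   # same thing with a generator expression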
vbr
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ftp.storlines error

2010-02-01 Thread Gabriel Genellina

En Sun, 31 Jan 2010 19:07:44 -0300, Mik0b0  escribió:


Good day/night/etc.
I am rather a newb in Python (learning Python 3). I am trying to
create a small script for FTP file uploads  on my home network. The
script looks like this:

from ftplib import FTP
ftp=FTP('10.0.0.1')
ftp.login('mike','*')
directory='/var/www/blabla/'
ftp.cwd(directory)
ftp.retrlines('LIST')
print('<- - - - - - - - - >')
file_to_change='test'
file=1
file=open(file_to_change,'w')
text='test'
file.write(text)
ftp.storlines('STOR ' + file_to_change,file)
ftp.retrlines('LIST')
file.close()

The output is like this:
Traceback (most recent call last):
  File "ftp.py", line 13, in 
ftp.storlines('STOR ' + file_to_change,i)
  File "/usr/lib/python3.1/ftplib.py", line 474, in storlines
buf = fp.readline()
IOError: not readable


For the ftp client to be able to read and upload the file, it must have  
been opened for reading. But you opened it with mode 'w' a few lines above.
A quick and dirty way would be to close the file right after  
file.write(...) and re-open it with mode 'r':


  ...
  file.write(text)
  file.close()
  file = open(file_to_change,'r')
  ftp.storlines(...)

But I'd separate file-creation from file-uploading. When it's time to  
upload the file, assume it already exists and has the desired contents  
(because it has already been created earlier by the same script, or  
perhaps by some other process), so you just have to open it with mode 'r'.
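
A rough sketch along those lines, reusing the names from the script above
(note that on Python 3, ftplib's storlines() wants a file opened in binary
mode, so 'rb' is the safer choice):

from ftplib import FTP

file_to_change = 'test'

# step 1: create or update the local file
with open(file_to_change, 'w') as f:
    f.write('test')

# step 2: upload it, opening it again just for reading
ftp = FTP('10.0.0.1')
ftp.login('mike', '*****')
ftp.cwd('/var/www/blabla/')
with open(file_to_change, 'rb') as f:
    ftp.storlines('STOR ' + file_to_change, f)
ftp.quit()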


--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: Python and Ruby

2010-02-01 Thread Jonathan Gardner
On Jan 31, 3:01 am, rantingrick  wrote:
> On Jan 30, 10:43 am, Nobody  wrote:
>
> > That's also true for most functional languages, e.g. Haskell and ML, as
> > well as e.g. Tcl and most shells. Why require "f(x)" or "(f x)" if "f x"
> > will suffice?
>
> yuck! wrapping the arg list with parentheses (python way) makes the
> most sense. It's too easy to misread something like this
>
> onetwothree four five six
>
> onetwothree(four, five, six) #ahhh... plain english.

In Lisp-ish languages, you have a list of stuff that represents a
function call:

 (a b c d)

means: Call "a" with values (b, c, d)

While this certainly doesn't agree with what you learned in Algebra,
it is a reasonable syntax that exposes the code-data duality of
programs. There is, however, one fatal flaw. Why is the first element
so different than the rest? This is inconsistent with what people who
are unfamiliar with the language would expect. Indeed, in teaching
Lisp, learners have to be reminded about how the evaluator looks at
lists and processes them.

I would expect a clear, simple language to have exactly one way to
call a function. This calling notation would clearly distinguish
between the function and its parameters. There are quite a few
options, and it turns out that "function(arg, arg, arg)" is a really
good compromise.

One of the bad things with languages like perl and Ruby that call
without parentheses is that getting a function ref is not obvious. You
need even more syntax to do so. In perl:

 foo();   # Call 'foo' with no args.
 $bar = foo;  # Call 'foo' with no args, assign to '$bar'
 $bar = &foo; # Don't call 'foo', but assign a pointer to it to '$bar'
  # By the way, this '&' is not the bitwise-and '&'
 $bar->() # Call whatever '$bar' is pointing at with no args

Compare with python:

 foo()   # Call 'foo' with no args.
 bar = foo() # 'bar' is now pointing to whatever 'foo()' returned
 bar = foo   # 'bar' is now pointing to the same thing 'foo' points to
 bar()   # Call 'bar' with no args

One is simple, consistent, and easy to explain. The other one requires
the introduction of advanced syntax and an entirely new syntax to make
function calls with references.

Note that the Algebra notation of functions allows for an obvious,
simple way to refer to functions without calling them, leading to
syntax such as "f o g (x)" and more.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: whassup? builtins? python3000? Naah can't be right?

2010-02-01 Thread Gabriel Genellina
En Sun, 31 Jan 2010 18:17:04 -0300, _wolf   
escribió:



dear pythoneers,

i would very gladly accept any commentaries about what this
sentence, gleaned from  
http://celabs.com/python-3.1/reference/executionmodel.html,

is meant to mean, or why gods have decided this is the way to go. i
anticipate this guy named Kay Schluehr will have a say on that, or
maybe even the BDFL will care to pronounce ``__builtins__`` the
correct way to his fallovers, followers, and fellownerds::

  The built-in namespace associated with the execution of
  a code block is actually found by looking up the name
  __builtins__ in its global namespace; this should be a
  dictionary or a module (in the latter case the module’s
  dictionary is used). By default, when in the __main__
  module, __builtins__ is the built-in module builtins;
  when in any other module, __builtins__ is an alias for
  the dictionary of the builtins module itself.
  __builtins__ can be set to a user-created dictionary to
  create a weak form of restricted execution.


Short answer: use `import builtins` (spelled __builtin__, no 's' and  
double underscores, in Python 2.x) to access the module containing the  
predefined (built-in) objects.


Everything else is an implementation detail.
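
A short interactive illustration (Python 3.x shown; on 2.x the module is
spelled __builtin__):

>>> import builtins
>>> builtins.len is len
True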

--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: Python and Ruby

2010-02-01 Thread Jonathan Gardner
On Jan 30, 8:43 am, Nobody  wrote:
> On Wed, 27 Jan 2010 15:29:05 -0800, Jonathan Gardner wrote:
> > Python is much, much cleaner. I don't know how anyone can honestly say
> > Ruby is cleaner than Python.
>
> I'm not familiar with Ruby, but most languages are cleaner than Python
> once you get beyond the "10-minute introduction" stage.
>

Probably too little, too late (haven't read all of the replies yet...)

I judge a language's simplicity by how long it takes to explain the
complete language. That is, what minimal set of documentation do you
need to describe all of the language? With a handful of statements,
and a very short list of operators, Python beats out every language in
the Algol family that I know of.

I can think of only one language (or rather, a class of languages)
that can ever hope to be shorter than Python. I doubt you've heard of
it based on your comments, but I suggest you look into it.
Unfortunately, to fully appreciate that language, you're going to have
to study a textbook called "SICP". At the end of that textbook, you
are blessed to not only see but understand the complete compiler for
the language, in the language itself.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: OT: Instant Messenger Clients

2010-02-01 Thread Gabriel Genellina
En Sun, 31 Jan 2010 18:15:34 -0300, Victor Subervi  
 escribió:


I need to record my IM conversations. I'm using Gmail's IM client and I  
can't

figure out how to do it, nor do I find any help googling it.


Hard to believe... 

--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: Processing XML File

2010-02-01 Thread jakecjacobson
On Jan 29, 2:41 pm, Stefan Behnel  wrote:
> Sells, Fred, 29.01.2010 20:31:
>
> > Google is your friend.  Elementtree is one of the better documented
> > IMHO, but there are many modules to do this.
>
> Unless the OP provides some more information, "do this" is rather
> underdefined. And sending someone off to Google who is just learning the
> basics of Python and XML and trying to solve a very specific problem with
> them is not exactly the spirit I'm used to in this newsgroup.
>
> Stefan

Just want to thank everyone for their posts.  I got it working after I
discovered a namespace issue with this code.

xmlDoc = libxml2.parseDoc(guts)
# Ignore namespace and just get the Resource
resourceNodes = xmlDoc.xpathEval('//*[local-name()="Resource"]')
for rNode in resourceNodes:
    print rNode
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Function name unchanged in error message

2010-02-01 Thread Gabriel Genellina
En Sat, 30 Jan 2010 06:28:19 -0300, Peter Otten <__pete...@web.de>  
escribió:

Gabriel Genellina wrote:

En Fri, 29 Jan 2010 13:09:40 -0300, Michele Simionato
 escribió:

On Jan 29, 2:30 pm, andrew cooke  wrote:



Is there any way to change the name of the function in an error
message?  In the example below I'd like the error to refer to bar(),
for example (the motivation is related function decorators - I'd like
the wrapper function to give the same name)


Use the decorator module which does the right thing:
http://pypi.python.org/pypi/decorator


The decorator module is a very fine addition to anyone's tool set -- but
in this case it is enough to use the wraps() function from the functools
standard module.


I don't know about the decorator module, but functools.wraps() doesn't
affect the error message:


It seems I misunderstood the original request and got it backwards - but  
then I have to question the usefulness of doing so. If an error happens in  
the "decorating" function (as opposed to inside the function being  
decorated) I'd like the error to be reported as such, in the "decorating"  
function (else tracebacks and line numbers would be lying).


--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Problems embedding python 2.6 in C++

2010-02-01 Thread Paul
Hi,

I'm extending some old Visual Studio 6 code to add embedded python
scripting. It works fine most of the time but some python function calls do
not work as expected.

The C++ code is a multithreaded MFC application. I was assuming that it was
GIL issues but I have tried using the manual locking (PyEval_SaveThread &
PyEval_RestoreThread) and what seems to be the current method
(PyGILState_Ensure & PyGILState_Release)

Here's the error I'm getting:

 Traceback (most recent call last):
  File "...scripts\receipt_parser.py", line 296, in
get_giftcard_purchase_value
details = extract_transaction_details_section(test)
  File "...scripts\receipt_parser.py", line 204, in
extract_transaction_details_section
for line in base_details:
TypeError: expected string or Unicode object, NoneType found

base_details is a list of strings (I can just define it like
'base_details=["1","2","3"...]' on the line previous) and the code runs fine
when run from standard interpreter. Many other function calls work fine from
the embedded app.

I create and then Py_DECREF the function parameters and the return value
after each function call. The module import is created in the C++ object's
constructor and then Py_DECREF'd in the destructor

Anyone else had issues of this kind? My next try will be to use
sub-interpreters per thread.

Thanks,
Paul.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: starting a thread in a nother thread

2010-02-01 Thread Aahz
In article <4b60a661$0$1598$742ec...@news.sonic.net>,
John Nagle   wrote:
>
>If a C package called from Python crashes, the package is defective.
>Nothing you can do from Python should be able to cause a segmentation
>fault.

...unless you use ctypes.
-- 
Aahz (a...@pythoncraft.com)   <*> http://www.pythoncraft.com/

import antigravity
-- 
http://mail.python.org/mailman/listinfo/python-list


CheddarGetter module for Python - easy recurring billing

2010-02-01 Thread Jason
We just released pycheddar, an open source module for integrating
CheddarGetter with Python (and Django):

http://www.feedmagnet.com/blog/cheddargetter-for-python-and-django/

Anyone who's built a commercial web app knows that payment processing
can be one of the toughest pieces to put in place - and it can
distract you from working on the core functionality of your app.
CheddarGetter is a web service that abstracts the entire process of
managing credit cards, processing transactions on a recurring basis,
and even more complex setups like free trials, setup fees, and
overage charges.

We're using CheddarGetter for FeedMagnet.com and we thought the Python
community in general could benefit from the module we wrote to
interact with it. More just just a Python wrapper for CheddarGetter,
pycheddar gives you class objects that work a lot like Django models,
making the whole experience of integrating with CheddarGetter just a
little more awesome and Pythonic.

We're a week or two away from putting pycheddar into production on
FeedMagnet, but we felt like it was close enough that others could
start using it. We also published it to PyPi so you can easy_install
pycheddar and get the latest version (currently 0.9). Hope others can
benefit from it!

- Jason
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding methods from one class to another, dynamically

2010-02-01 Thread Steve Holden
Oltmans wrote:
> Thank you for your help, Chris. Looks like I can now attach methods to
> class 'tee'. However, after attaching methods to 'tee' when I try to
> run them using suite.run(), I don't see any of the methods running. I'm
> sorry, but I have no clue what is causing this to fail. Any insights will be
> highly appreciated. Here is the sample code
> filename: check.py
> ---
> import inspect
> import unittest
> 
> 
> class result(unittest.TestResult):
>
>     def addSuccess(self,test):
>         print str(test) + ' succeeded'
>     def addError(self,test,err):
>         print 'An error occured while running the test ' + str(test) + ' and error is = ' + str(err)
>     def addFailure(self,test,err):
>         print str(test) + " failed with an error =" + str(err)
>
>
>
> class test(unittest.TestCase):
>     def test_first(self):
>         print 'first test'
>     def test_second(self):
>         print 'second test'
>     def test_third(self):
>         print 'third test'
>
> import new
> class tee(unittest.TestCase):
>     pass
>
> if __name__=="__main__":
>     r = result()
>     for name,func in inspect.getmembers(test,inspect.ismethod):
>         if name.find('test_')!= -1:
>             setattr(tee, name, new.instancemethod(func,None,tee))
>
>     suite = unittest.defaultTestLoader.loadTestsFromName('check.tee')
>     suite.run(r)
> ---
> 
> The line suite.run(r) should have run the methods that we just
> attached, but it doesn't. I must be missing something here. Please
> enlighten me.
> 
Should not tee be subclassing test, not unittest.TestCase?

regards
 Steve
-- 
Steve Holden   +1 571 484 6266   +1 800 494 3119
PyCon is coming! Atlanta, Feb 2010  http://us.pycon.org/
Holden Web LLC http://www.holdenweb.com/
UPCOMING EVENTS:http://holdenweb.eventbrite.com/

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding methods from one class to another, dynamically

2010-02-01 Thread Oltmans
Thank you for your help, Chris. Looks like I can now attach methods to
class 'tee'. However, after attaching methods to 'tee' when I try to
run them using suite.run(), I don't see any of the methods running. I'm
sorry, but I have no clue what is causing this to fail. Any insights will be
highly appreciated. Here is the sample code
filename: check.py
---
import inspect
import unittest


class result(unittest.TestResult):

    def addSuccess(self,test):
        print str(test) + ' succeeded'
    def addError(self,test,err):
        print 'An error occured while running the test ' + str(test) + ' and error is = ' + str(err)
    def addFailure(self,test,err):
        print str(test) + " failed with an error =" + str(err)



class test(unittest.TestCase):
    def test_first(self):
        print 'first test'
    def test_second(self):
        print 'second test'
    def test_third(self):
        print 'third test'

import new
class tee(unittest.TestCase):
    pass

if __name__=="__main__":
    r = result()
    for name,func in inspect.getmembers(test,inspect.ismethod):
        if name.find('test_')!= -1:
            setattr(tee, name, new.instancemethod(func,None,tee))

    suite = unittest.defaultTestLoader.loadTestsFromName('check.tee')
    suite.run(r)
---

The line suite.run(r) should have run the methods that we just
attached, but it doesn't. I must be missing something here. Please
enlighten me.

Thanks.
On Feb 2, 1:25 am, Chris Rebert  wrote:
> On Mon, Feb 1, 2010 at 12:06 PM, Oltmans  wrote:
> > Hello Python gurus,
>
> > I'm quite new when it comes to Python so I will appreciate any help.
> > Here is what I'm trying to do. I've two classes like below
>
> > import new
> > import unittest
>
> > class test(unittest.TestCase):
> >    def test_first(self):
> >        print 'first test'
> >    def test_second(self):
> >        print 'second test'
> >    def test_third(self):
> >        print 'third test'
>
> > class tee(unittest.TestCase):
> >    pass
>
> > and I want to attach all test methods of 'test'(i.e. test_first(),
> > test_second() and test_third()) class to 'tee' class. So I'm trying to
> > do something like
>
> > if __name__=="__main__":
> >    for name,func in inspect.getmembers(test,inspect.ismethod):
> >        if name.find('test_')!= -1:
> >            tee.name = new.instancemethod(func,None,tee)
>
> This ends up repeatedly assigning to the attribute "name" of tee; if
> you check dir(tee), you'll see the string "name" as an entry. It does
> *not* assign to the attribute named by the string in the variable
> `name`.
> You want setattr():http://docs.python.org/library/functions.html#setattr
> Assuming the rest of your code chunk is correct:
>
> setattr(tee, name, new.instancemethod(func,None,tee))
>
> Cheers,
> Chris
> --http://blog.rebertia.com

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: libpcap and python

2010-02-01 Thread Terry Reedy

On 2/1/2010 7:47 AM, Mag Gam wrote:

Hello All,

I used tcpdump to capture data on my network. I would like to analyze
the data using python -- currently using ethereal and wireshark.

I would like to get certain types of packets (I can get the hex code
for them); what is the best way to do this? Let's say I want to capture
all events of `ping localhost`


The following is pretty straightforward.

def process(dump, wanted, func):
    for packet in dump:
        if packet_type(packet) == wanted:
            func(packet)

Perhaps you can ask a more specific question.

Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: get error install MySQLdb on Mac OS X

2010-02-01 Thread Ned Deily
In article 
<2be17362-8a54-4a04-9671-0a0ff7266...@k2g2000pro.googlegroups.com>,
 "PS.OHM"  wrote:
> On Jan 29, 5:02 am, Sean DiZazzo  wrote:
> > On Jan 28, 12:53 pm, "PS.OHM"  wrote:
> > > I have got some error when I install MySQLdb on Mac OS X
> >
> > > after i key command $python setup.py build

The build of MySQLdb is not finding the MySQL client database libraries.  
You need to install them first and, for the python you are using, you 
need a version that includes 32-bit.  The easiest options are to 
download them from mysql.com or, if you are comfortable with MacPorts, I 
would recommend installing them from there.

sudo port install mysql5

In fact, with MacPorts you can install a complete Python 2.6.4, MySQLdb, 
and MySQL libraries with just one command.  With the MacPorts base 
installed, this should do it:

sudo port install py26-mysql

You might need to tweak the variants of some of the packages if you want 
32-bit only, etc, depending on what version of OS X you're running on.

-- 
 Ned Deily,
 n...@acm.org

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding methods from one class to another, dynamically

2010-02-01 Thread Gerald Britton
Or you could just do a mixin:

tee.__bases__ = (test,) + tee.__bases__


On Mon, Feb 1, 2010 at 3:25 PM, Chris Rebert  wrote:
> On Mon, Feb 1, 2010 at 12:06 PM, Oltmans  wrote:
>> Hello Python gurus,
>>
>> I'm quite new when it comes to Python so I will appreciate any help.
>> Here is what I'm trying to do. I've two classes like below
>>
>> import new
>> import unittest
>>
>> class test(unittest.TestCase):
>>    def test_first(self):
>>        print 'first test'
>>    def test_second(self):
>>        print 'second test'
>>    def test_third(self):
>>        print 'third test'
>>
>> class tee(unittest.TestCase):
>>    pass
>>
>> and I want to attach all test methods of 'test'(i.e. test_first(),
>> test_second() and test_third()) class to 'tee' class. So I'm trying to
>> do something like
>>
>> if __name__=="__main__":
>>    for name,func in inspect.getmembers(test,inspect.ismethod):
>>        if name.find('test_')!= -1:
>>            tee.name = new.instancemethod(func,None,tee)
>
> This ends up repeatedly assigning to the attribute "name" of tee; if
> you check dir(tee), you'll see the string "name" as an entry. It does
> *not* assign to the attribute named by the string in the variable
> `name`.
> You want setattr(): http://docs.python.org/library/functions.html#setattr
> Assuming the rest of your code chunk is correct:
>
> setattr(tee, name, new.instancemethod(func,None,tee))
>
> Cheers,
> Chris
> --
> http://blog.rebertia.com
> --
> http://mail.python.org/mailman/listinfo/python-list
>



-- 
Gerald Britton
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding methods from one class to another, dynamically

2010-02-01 Thread Carl Banks
On Feb 1, 12:06 pm, Oltmans  wrote:
> Hello Python gurus,
>
> I'm quite new when it comes to Python so I will appreciate any help.
> Here is what I'm trying to do. I've two classes like below
>
> import new
> import unittest
>
> class test(unittest.TestCase):
>     def test_first(self):
>         print 'first test'
>     def test_second(self):
>         print 'second test'
>     def test_third(self):
>         print 'third test'
>
> class tee(unittest.TestCase):
>     pass
>
> and I want to attach all test methods of 'test'(i.e. test_first(),
> test_second() and test_third()) class to 'tee' class.


Simplest way:

class tee(test):
pass


To do it dynamically the following might work:

class tee(unittest.TestCase):
pass

tee.__bases__ = (test,)
tee.__bases__ = (test2,) # dynamically reassign base


> So I'm trying to
> do something like
>
> if __name__=="__main__":
>     for name,func in inspect.getmembers(test,inspect.ismethod):
>         if name.find('test_')!= -1:
>             tee.name = new.instancemethod(func,None,tee)
>
> after doing above when I run this statement
> print dirs(tee)
> I don't see test_first(), test_second() and test_third() attached to
> class 'tee'. Any ideas, on how can I attach methods of class 'test' to
> class 'tee' dynamically? Any help is highly appreciated.

If you want to do it this way--and I recommend regular inheritance if
you can--this is how:

for x in dir(test):  # or inspect.getmembers
if x.startswith('test_'):
method = getattr(test,x)
function = method.im_func
setattr(tee,x,function)

The business with method.im_func is because in Python 2.x the getattr
on a class actually returns an unbound method, so you have to get
at the actual function object with im_func.  In Python 3 this is not
necessary.

Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Iterating over a function call

2010-02-01 Thread Gerald Britton
[snip]

> You have itertools.consume which is close to what you want:
>
>    consume(imap(func, iterable)) # 2.x
>
>    consume(map(func, iterable)) # 3.x
>
> HTH

It does! Though in my case this is simpler:

deque(imap(func, iterable), 0)

since the recipe for consume just calls deque anyway when you want to
eat up the rest of the iterable.  It also solves the iterator-variable
leakage problem and is only a wee bit slower than a conventional
for-loop.

>
> --
> Arnaud
> --
> http://mail.python.org/mailman/listinfo/python-list
>



-- 
Gerald Britton
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding methods from one class to another, dynamically

2010-02-01 Thread Chris Rebert
On Mon, Feb 1, 2010 at 12:06 PM, Oltmans  wrote:
> Hello Python gurus,
>
> I'm quite new when it comes to Python so I will appreciate any help.
> Here is what I'm trying to do. I've two classes like below
>
> import new
> import unittest
>
> class test(unittest.TestCase):
>    def test_first(self):
>        print 'first test'
>    def test_second(self):
>        print 'second test'
>    def test_third(self):
>        print 'third test'
>
> class tee(unittest.TestCase):
>    pass
>
> and I want to attach all test methods of 'test'(i.e. test_first(),
> test_second() and test_third()) class to 'tee' class. So I'm trying to
> do something like
>
> if __name__=="__main__":
>    for name,func in inspect.getmembers(test,inspect.ismethod):
>        if name.find('test_')!= -1:
>            tee.name = new.instancemethod(func,None,tee)

This ends up repeatedly assigning to the attribute "name" of tee; if
you check dir(tee), you'll see the string "name" as an entry. It does
*not* assign to the attribute named by the string in the variable
`name`.
You want setattr(): http://docs.python.org/library/functions.html#setattr
Assuming the rest of your code chunk is correct:

setattr(tee, name, new.instancemethod(func,None,tee))

Cheers,
Chris
--
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3147 - new .pyc format

2010-02-01 Thread Daniel Fetchinson
>> I also think the PEP is a great idea and proposes a solution to a real
>> problem. But I also hear the 'directory clutter' argument and I'm really
>> concerned too, having all these extra directories around (and quite a
>> large number of them indeed!).
>
> Keep in mind that if you don't explicitly ask for the proposed feature,
> you won't see any change at all. You need to run Python with the -R
> switch, or set an environment variable. The average developer won't see
> any clutter at all unless she is explicitly supporting multiple versions.
>
>
>
>> How about this scheme:
>>
>> 1. install python source files to a shared (among python installations)
>> location /this/is/shared
>> 2. when python X.Y imports a source file from /this/is/shared it will
>> create pyc files in its private area /usr/lib/pythonX.Y/site-packages/
>
> $ touch /usr/lib/python2.5/site-packages/STEVEN
> touch: cannot touch `/usr/lib/python2.5/site-packages/STEVEN': Permission
> denied
>
> There's your first problem: most users don't have write-access to the
> private area.

True, I haven't thought about that (I should have though).

> When you install a package, you normally do so as root, and
> it all works. When you import a module and it gets compiled as a .pyc
> file, you're generally running as a regular user.
>
>
>> Time comparison would be between /this/is/shared/x.py and
>> /usr/lib/pythonX.Y/site-packages/x.pyc, for instance.
>
> I don't quite understand what you mean by "time comparison".

I meant the comparison of timestamps on .py and .pyc files in order to
determine which is newer and if a recompilation should take place or
not.

> [...]
>> In /usr/lib/pythonX.Y/site-packages there would be only pyc files with
>> magic number matching python X.Y.
>
> Personally, I think it is a terribly idea to keep the source file and
> byte code file in such radically different places. They should be kept
> together. What you call "clutter" I call having the files that belong
> together kept together.

I see why you think so, it's reasonable, however there is a compelling
argument, I think, for the opposite view: namely to keep things
separate. An average developer definitely wants easy access to .py
files. However I see no good reason for having access to .pyc files. I
for one have never inspected a .pyc file. Why would you want to have a
.pyc file at hand?

If we don't really want to have .pyc files in convenient locations
because we (almost) never want to access them really, then I'd say
it's a good idea to keep them totally separate so they don't get
in the way.

>> So, basically nothing would change only the location of py and pyc files
>> would be different from current behavior, but the same algorithm would
>> be run to determine which one to load, when to create a pyc file, when
>> to ignore the old one, etc.
>
> What happens when there is a .pyc file in the same location as the .py
> file? Because it *will* happen. Does it get ignored, or does it take
> precedence over the site specific file?
>
> Given:
>
> ./module.pyc
> /usr/lib/pythonX.Y/site-packages/module.pyc
>
> and you execute "import module", which gets used? Note that in this
> situation, there may or may not be a module.py file.
>
>
>> What would be wrong with this setup?
>
> Consider:
>
> ./module.py
> ./package/module.py
>
> Under your suggestion, both of these will compile to
>
> /usr/lib/pythonX.Y/site-packages/module.pyc

I see the problems with my suggestion. However it would be great if in
some other way the .pyc files could be kept out of the way. Granted, I
don't have a good proposal for this.

Cheers,
Daniel


-- 
Psss, psss, put it down! - http://www.cafepress.com/putitdown
-- 
http://mail.python.org/mailman/listinfo/python-list


Adding methods from one class to another, dynamically

2010-02-01 Thread Oltmans
Hello Python gurus,

I'm quite new when it comes to Python so I will appreciate any help.
Here is what I'm trying to do. I've two classes like below

import new
import unittest

class test(unittest.TestCase):
    def test_first(self):
        print 'first test'
    def test_second(self):
        print 'second test'
    def test_third(self):
        print 'third test'

class tee(unittest.TestCase):
    pass

and I want to attach all test methods of 'test'(i.e. test_first(),
test_second() and test_third()) class to 'tee' class. So I'm trying to
do something like

if __name__=="__main__":
for name,func in inspect.getmembers(test,inspect.ismethod):
if name.find('test_')!= -1:
tee.name = new.instancemethod(func,None,tee)

after doing above when I run this statement
print dirs(tee)
I don't see test_first(), test_second() and test_third() attached to
class 'tee'. Any ideas, on how can I attach methods of class 'test' to
class 'tee' dynamically? Any help is highly appreciated.

Many thanks and I look forward to any help.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Iterating over a function call

2010-02-01 Thread Arnaud Delobelle
Gerald Britton  writes:

> Hi -- I have many sections of code like this:
>
> for value in value_iterator:
>  value_function(value)
>
> I noticed that this does two things I don't like:
>
> 1. looks up "value_function" and "value" for each iteration, but
> "value_function" doesn't change.
> 2. side effect of (maybe) leaking the iterator variable "value" into
> the code following the loop (if the iterator is not empty).
>
> I can take care of 2 by explicitly deleting the variable at the end:
>
>del value
>
> but I'd probably forget to do that sometimes.  I then realized that,
> in the 2.x series, I can accomplish the same thing with:
>
> map(value_function, value_iterator)
>
> and avoid both problems BUT map() returns a list which is never used.
> Not a big deal for small iterables, I guess, but it seems messy.  Upon
> conversion to 3.x I have to explicitly list-ify it:
>
> list(map(value_function, value_iterator))
>
> which works but again the list returned is never used (extra work) and
> has to be gc'd I suppose (extra memory).
>
> It's easy to make a little function to take care of this (2.x):
>
> from itertools import imap
> def apply(function, iterable):
> for item in imap(function, iterable):
> pass
>
> then later:
>
>apply(value_function, value_iterator)
>
> or something similar thing in 3.x, but that just adds an additional
> function def that I have to include whenever I want to do something
> like this.
>

You have itertools.consume which is close to what you want:

consume(imap(func, iterable)) # 2.x

consume(map(func, iterable)) # 3.x

HTH

-- 
Arnaud
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Iterating over a function call

2010-02-01 Thread Diez B. Roggisch

So... I'm wondering if there is any interest in an apply() built-in
function that would work like map() does in 2.x (calls the function
with each value returned by the iterator) but return nothing.  Maybe
"apply" isn't the best name; it's just the first one that occurred to
me.

Or is this just silly and should I forget about it?


IMHO - yes. Looking up names is what python does all the time, trying to 
microoptimize that away is silly. It is only justified if you have 
extremely timing-critical code.


And then, all you need to do is to say


def whatever():
  _function = function
  for value in values:
  _function(value)

which will reduce lookup-time, as _function is found in locals() rather 
than globals().


And any function that does something worthy will dwarf the second 
namespace-lookup I'd say.


Diez

--
http://mail.python.org/mailman/listinfo/python-list


Re: Python-list Digest, Vol 77, Issue 7 *

2010-02-01 Thread Gabriel Genellina
En Mon, 01 Feb 2010 07:26:13 -0300, Rohit Roger$   
escribió:



Help for datetime module

Source code is :
from datetime import datetime
d = datetime(datetime.now().year, datetime.now().month,  
datetime.now().day, 11, 59, 0 )

print d


And your problem is...?
This is what I get:
py> print d
2010-02-01 11:59:00

--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: create a string of variable lenght

2010-02-01 Thread Benjamin Kaplan
On Mon, Feb 1, 2010 at 12:00 PM, Tracubik  wrote:
> Il Sun, 31 Jan 2010 19:54:17 -0500, Benjamin Kaplan ha scritto:
>
>> First of all, if you haven't read this before, please do. It will make
>> this much clearer.
>> http://www.joelonsoftware.com/articles/Unicode.html
>
> i'm reading it right now, thanks :-)
>
> [cut]
>
>> Solution to your problem: in addition to keeping the #-*- coding ...
>> line, go with Günther's advice and use Unicode strings.
>
> that is: always use the "u" operator (i.e. my_name = u"Nico"), right?
>
> Ciao,
> Nico
>


Short answer: yes.

Slightly longer explanation for future reference: This is true for
Python 2 but not Python 3. One of the big changes in Python 3 is that
strings are Unicode by default because you're not the only one who
runs into this problem. So in Python 3, just writing 'Nico' will make
a Unicode string and you have to explicitly declare b'Nico' if you
want to look at it as a series of bytes.
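
A tiny illustration of the two versions (just a sketch, reusing the name
from your example):

# Python 2.x
name = "Nico"      # byte string (str)
uname = u"Nico"    # unicode string

# Python 3.x
name = "Nico"      # already unicode (str)
data = b"Nico"     # explicit byte string (bytes)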
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to decode rtf characterset ?

2010-02-01 Thread M.-A. Lemburg
Stef Mientki wrote:
> hello,
> 
> I want to translate rtf files to unicode strings.
> I succeeded in removing all the tags,
> but now I'm stuck on the special accent characters,
> like :
> 
> "Vóór"
> 
> the character "ó" is represented by the string r"\'f3",
> or in bytes: 92, 39,102, 51

> so I think I need a way to translate that into the string r"\xf3"
> but I can't find a way to accomplish that.
> 
> a
> Any suggestions are very welcome.

You could try something along these lines:

>>> s = r"\'f3"
>>> s = s.replace("\\'", "\\x")
>>> u = s.decode('unicode-escape')
>>> u
u'\xf3'

However, this assumes Latin-1 codes being using by the RTF
text.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Feb 01 2010)
>>> Python/Zope Consulting and Support ...http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...http://python.egenix.com/


::: Try our new mxODBC.Connect Python Database Interface for free ! 


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
   Registered at Amtsgericht Duesseldorf: HRB 46611
   http://www.egenix.com/company/contact/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: HTML Parser which allows low-keyed local changes (upon serialization)

2010-02-01 Thread M.-A. Lemburg
Robert wrote:
> I think you confused the logical level of what I meant with "file
> position":
> Of course it's not about (necessarily) writing back to the same open file
> (OS-level), but regarding the whole serialization string (wherever it
> is finally written to - I typically write the auto-converted HTML files
> to a 2nd test folder first, and want to use "diff -u ..." to see in
> human-readable form what changes happened - which again is only reasonable
> if the original layout is preserved as well as possible )
> 
> lxml and BeautifulSoup e.g. : load&parse a HTML file to a tree,
> immediately serialize the tree without changes => you see big
> differences between the original and serialized files with just about any file.
> 
> The main issue: those libs seem to not track any info about the original
> string/file positions of the objects they parse. They just forget the
> past. Thus they cannot, in principle, do what I want, it seems ...
> 
> Or does anybody see attributes of the tree objects - which I overlooked?
> Or a lib which can do, or at least better enable, this
> source-back-connected editing?

You'd have to write your own parser (or extend the example HTML
one we include), but mxTextTools allows you to work on original
code quite easily: it tags parts of the input string with objects.

You can then have those objects manipulate the underlying text as
necessary and write back the text using the original formatting
plus your local changes.

http://www.egenix.com/products/python/mxBase/mxTextTools/

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Feb 01 2010)
>>> Python/Zope Consulting and Support ...http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...http://python.egenix.com/


::: Try our new mxODBC.Connect Python Database Interface for free ! 


   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
   Registered at Amtsgericht Duesseldorf: HRB 46611
   http://www.egenix.com/company/contact/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Edu-sig] odd drawing problem with turtle.py

2010-02-01 Thread kirby urner
On Mon, Feb 1, 2010 at 3:27 AM, Brian Blais  wrote:

>
> I don't see where you've defined a Turtle class to instantiate sir.
>
>
> Turtle is given in turtle.py.  I should have subclassed it, but I was being
> lazy.  :)
>
> thanks for the fast replies!
>
>
>

> bb
>
>
>


No obvious need to subclass.

You weren't being lazy, I was being sloppy.

I'm glad Vern caught my error right away.

Kirby
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: HTML Parser which allows low-keyed local changes (upon serialization)

2010-02-01 Thread Robert

Stefan Behnel wrote:

Robert, 01.02.2010 14:36:

Stefan Behnel wrote:

Robert, 31.01.2010 20:57:

I tried lxml, but after walking and making changes in the element tree,
I'm forced to do a full serialization of the whole document
(etree.tostring(tree)) - which destroys the "human edited" format of the
original HTML code. makes it rather unreadable.

What do you mean? Could you give an example? lxml certainly does not
destroy anything it parsed, unless you tell it to do so.

of course it does not destroy during parsing.(?)


I meant "parsed" in the sense of "has parsed and is now working on".



I mean: I want to walk with a Python script through the parsed HTML tree
and modify things here and there (auto alt tags from DB/similar, link
corrections, text sections/translated sentences... due to HTML code and
content checks.)


Sure, perfectly valid use case.



Then I want to output the changed tree - but as close to the original
format as possible. No changes to my white space indentation,
etc..  Only local changes, where tags really were changed.


That's up to you. If you only apply local changes that do not change any
surrounding whitespace, you'll be fine.



That's similar to what a good HTML editor does: After you make
little changes, it doesn't reformat/re-spit-out your whole code layout
from tree/attribute logic only. You have local changes only.


HTML editors don't work that way. They always "re-spit-out" the whole code
when you click on "save". They certainly don't track the original file
position of tags. What they preserve is the content, including whitespace
(or not, if they reformat the code, but that's usually an *option*).



Such a "good HTML editor" must somehow track the original positions of
the tags in the file. And during each logical change in the tree it must
tracks the file position changes/offsets.


Sorry, but that's nonsense. The file position of a tag is determined by
whitespace, i.e. line endings and indentation. lxml does not alter that,
unless you tell it to do so.

Since you keep claiming that it *does* alter it, please come up with a
reproducible example that shows a) what you do in your code, b) what your
input is and c) what unexpected output it creates. Do not forget to include
the version number of lxml and libxml2 that you are using, as well as a
comment on /how/ the output differs from what you expected.

My stab in the dark is that you forgot to copy the tail text of elements
that you replace by new content, and that you didn't properly indent new
content that you added. But that's just that, a stab in the dark. You
didn't provide enough information for even an educated guess.



I think you confused the logical level of what I meant with "file 
position":
Of course it's not (necessarily) about writing back to the same 
open file (OS-level), but about the whole serialization 
string (wherever it is finally written to - I typically write the 
auto-converted HTML files to a 2nd test folder first, and want to use 
"diff -u ..." to see in human-readable form what changes happened - which 
again is only reasonable if the original layout is preserved as 
well as possible )


lxml and BeautifulSoup e.g.: load and parse an HTML file into a tree, then 
immediately serialize the tree without changes => you see big 
differences between the original and serialized files with almost any file.


The main issue: those libs seem not to track any info about the 
original string/file positions of the objects they parse. They just 
forget the past. Thus they cannot, in principle, do what I want, it 
seems ...


Or does anybody see attributes of the tree objects which I 
overlooked? Or a lib which can do, or at least better enable, this 
source-back-connected editing?



Robert
--
http://mail.python.org/mailman/listinfo/python-list


Re: how to decode rtf characterset ?

2010-02-01 Thread MRAB

Stef Mientki wrote:

hello,

I want to translate rtf files to unicode strings.
I succeeded in remove all the tags,
but now I'm stucked to the special accent characters,
like :

"Vóór"

the character "ó" is represented by the string r"\'f3",
or in bytes: 92, 39,102, 51

so I think I need a way to translate that into the string r"\xf3"
but I can't find a way to accomplish that.

Any suggestions are very welcome.


Change r"\'f3" to r"\xf3" and then decode to Unicode:

>>> s = r"\'f3"
>>> s = s.replace(r"\'", r"\x").decode("unicode_escape")
>>> print s
ó
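
(If the file mixes escapes and plain text, a small helper that targets only the
\'hh escapes may be more robust - just a sketch; cp1252 is an assumption, check
the \ansicpg header of your RTF for the real code page:)

import re

def decode_rtf_escapes(rtf_text, encoding="cp1252"):
    # each \'hh escape stands for one byte in the document's code page
    def repl(match):
        return chr(int(match.group(1), 16)).decode(encoding)
    # work on a unicode string so the substituted characters stay unicode
    return re.sub(r"\\'([0-9a-fA-F]{2})", repl, rtf_text.decode("ascii", "ignore"))

print decode_rtf_escapes(r"V\'f3\'f3r")   # -> Vóór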
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python distutils build problems with MinGW

2010-02-01 Thread Andrej Mitrovic
Well, in any case this seems to be working ok for me now.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python distutils build problems with MinGW

2010-02-01 Thread Andrej Mitrovic
On Feb 1, 5:44 pm, casevh  wrote:
> On Feb 1, 8:31 am, Andrej Mitrovic  wrote:
>
>
>
> > On Feb 1, 4:03 am, Andrej Mitrovic  wrote:
>
> > > On Feb 1, 2:59 am, Andrej Mitrovic  wrote:
>
> > > > Hi,
>
> > > > I've made a similar post on the Cython mailing list, however I think
> > > > this is more python-specific. I'm having trouble setting up distutils
> > > > to use MinGW instead of Visual Studio when building a module. Even tho
> > > > I've just uninstalled VS, and cleared out any leftover VS environment
> > > > variables, distutils keeps wanting to use it.
>
> > > > The steps I took:
>
> > > > Fresh installation of Python 3.1.1
> > > > Successfully installed MinGW, added to the path variable (gcc in
> > > > command prompt works)
> > > > Successfully installed Cython, imports from Cython in Python work.
> > > > Added a distutils.cfg file in \Python31\Lib\distutils\ directory with:
>
> > > > [build]
> > > > compiler=mingw32
>
> > > > (also tried adding [build_ext] compiler=mingw32)
>
> > > > There's a demo setup.py module that came with Cython, I tried the
> > > > following commands:
>
> > > > 
>
> > > > > python setup.py build_ext --inplace
>
> > > > error: Unable to find vcvarsall.bat
>
> > > > > python setup.py build
>
> > > > error: Unable to find vcvarsall.bat
> > > > 
>
> > > > I'm having the exact same issue with trying to build the Polygon
> > > > library via MinGW. In fact, the reason I had installed Visual Studio
> > > > in the first place was to be able to build the Polygon library, since
> > > > I was having these errors.
>
> > > > What do I need to do to make distutils/python use MinGW?
>
> > > Update:
>
> > > I installed and tried building with Python 2.6, it calls MinGW when I
> > > have the distutils.cfg file configured properly (same configuration as
> > > the Python 3.1.1 one)
>
> > > But why doesn't it work on a fresh Python 3.1.1 installation as well?
> > > Is this a bug?
>
> > Also tried calling (Python 3.1.1):
>
> > 
> > python setup.py build --compiler=mingw32
>
> > error: Unable to find vcvarsall.bat
> > 
>
> > I've tried using pexports and the dlltool to build new python31.def
> > and libpython31.a files, and put them in the libs folder. That didn't
> > work either.
>
> > I've also tried adding some print statements in the \distutils\dist.py
> > file, in the parse_config_files() function, just to see if Python
> > properly parses the config file. And it does, both Python 2.6 and 3.1
> > parse the distutils.cfg file properly. Yet something is making python
> > 3 look for the VS/VC compiler instead of MinGW. I'll keep updating on
> > any progress..
>
> I think this is http://bugs.python.org/issue6377.
>
> I applied the patch to my local copy of Python 3.1 and it seems to
> work.
>
> casevh

Thanks for the link, it seems there's more to it than what I
just posted. But in any case, it works for me now.

I think I'll have to start a blog and post some guides for
problems like these, so people can avoid spending whole nights on
a problem like this. :)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: create a string of variable lenght

2010-02-01 Thread Tracubik
Il Sun, 31 Jan 2010 19:54:17 -0500, Benjamin Kaplan ha scritto:

> First of all, if you haven't read this before, please do. It will make
> this much clearer.
> http://www.joelonsoftware.com/articles/Unicode.html

i'm reading it right now, thanks :-)

[cut]

> Solution to your problem: in addition to keeping the #-*- coding ...
> line, go with Günther's advice and use Unicode strings.

that is: always use the "u" operator (i.e. my_name = u"Nico"), right?

Ciao,
Nico

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python distutils build problems with MinGW

2010-02-01 Thread Andrej Mitrovic
I've found the problem:

For the windows Python 3.1.1 x86 installation, the file \Python31\Lib
\Distutils\command\build_ext.py, has this:

Line 313:

self.compiler = new_compiler(compiler=None,

But Python 2.6 has this line:

Line 306:

self.compiler = new_compiler(compiler=self.compiler,



I've changed the Python 3.1.1 \Python31\Lib\Distutils\command
\build_ext.py, Line 313 to this:

self.compiler = new_compiler(compiler=self.compiler,

And now MinGW gets properly called in Python 3.1.1. I think this must
have been a typo.


Is there anyone else that can confirm this?

The installation that distributes the file with that line is from this
Python ftp link: http://python.org/ftp/python/3.1.1/python-3.1.1.msi
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how long a Str can be used in this python code segment?

2010-02-01 Thread Antoine Pitrou
On Mon, 01 Feb 2010 01:33:09 -0800, Stephen.Wu wrote:
> 
> actually, I just use the file.read(length) way; I just want to know what
> exact length parameter I should set. I'm afraid length doesn't equal
> the amount of physical memory, after some trials...

There's no exact length you "should" set, just set something big enough 
that looping doesn't add any noticeable overhead, but small enough that 
it doesn't take too much memory. Something between 64kB and 1MB sounds 
reasonable.


-- 
http://mail.python.org/mailman/listinfo/python-list


Iterating over a function call

2010-02-01 Thread Gerald Britton
Hi -- I have many sections of code like this:

for value in value_iterator:
 value_function(value)

I noticed that this does two things I don't like:

1. looks up "value_function" and "value" for each iteration, but
"value_function" doesn't change.
2. side effect of (maybe) leaking the iterator variable "value" into
the code following the loop (if the iterator is not empty).

I can take care of 2 by explicitly deleting the variable at the end:

   del value

but I'd probably forget to do that sometimes.  I then realized that,
in the 2.x series, I can accomplish the same thing with:

map(value_function, value_iterator)

and avoid both problems BUT map() returns a list which is never used.
Not a big deal for small iterables, I guess, but it seems messy.  Upon
conversion to 3.x I have to explicitly list-ify it:

list(map(value_function, value_iterator))

which works but again the list returned is never used (extra work) and
has to be gc'd I suppose (extra memory).

It's easy to make a little function to take care of this (2.x):

from itertools import imap
def apply(function, iterable):
    for item in imap(function, iterable):
        pass

then later:

   apply(value_function, value_iterator)

or something similar in 3.x, but that just adds an additional
function def that I have to include whenever I want to do something
like this.
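
(For completeness: a variant of the "consume" recipe from the itertools
documentation exhausts the iterator without building a throwaway list - just
a sketch reusing the names above:)

from collections import deque
from itertools import imap

def consume(iterator):
    # feeding a zero-length deque runs the iterator at C speed and keeps nothing
    deque(iterator, maxlen=0)

consume(imap(value_function, value_iterator))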

So... I'm wondering if there is any interest in an apply() built-in
function that would work like map() does in 2.x (calls the function
with each value returned by the iterator) but return nothing.  Maybe
"apply" isn't the best name; it's just the first one that occurred to
me.

Or is this just silly and should I forget about it?

-- 
Gerald Britton
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python distutils build problems with MinGW

2010-02-01 Thread casevh
On Feb 1, 8:31 am, Andrej Mitrovic  wrote:
> On Feb 1, 4:03 am, Andrej Mitrovic  wrote:
>
>
>
>
>
> > On Feb 1, 2:59 am, Andrej Mitrovic  wrote:
>
> > > Hi,
>
> > > I've made a similar post on the Cython mailing list, however I think
> > > this is more python-specific. I'm having trouble setting up distutils
> > > to use MinGW instead of Visual Studio when building a module. Even tho
> > > I've just uninstalled VS, and cleared out any leftover VS environment
> > > variables, distutils keeps wanting to use it.
>
> > > The steps I took:
>
> > > Fresh installation of Python 3.1.1
> > > Successfully installed MinGW, added to the path variable (gcc in
> > > command prompt works)
> > > Successfully installed Cython, imports from Cython in Python work.
> > > Added a distutils.cfg file in \Python31\Lib\distutils\ directory with:
>
> > > [build]
> > > compiler=mingw32
>
> > > (also tried adding [build_ext] compiler=mingw32)
>
> > > There's a demo setup.py module that came with Cython, I tried the
> > > following commands:
>
> > > 
>
> > > > python setup.py build_ext --inplace
>
> > > error: Unable to find vcvarsall.bat
>
> > > > python setup.py build
>
> > > error: Unable to find vcvarsall.bat
> > > 
>
> > > I'm having the exact same issue with trying to build the Polygon
> > > library via MinGW. In fact, the reason I had installed Visual Studio
> > > in the first place was to be able to build the Polygon library, since
> > > I was having these errors.
>
> > > What do I need to do to make distutils/python use MinGW?
>
> > Update:
>
> > I installed and tried building with Python 2.6, it calls MinGW when I
> > have the distutils.cfg file configured properly (same configuration as
> > the Python 3.1.1 one)
>
> > But why doesn't it work on a fresh Python 3.1.1 installation as well?
> > Is this a bug?
>
> Also tried calling (Python 3.1.1):
>
> 
> python setup.py build --compiler=mingw32
>
> error: Unable to find vcvarsall.bat
> 
>
> I've tried using pexports and the dlltool to build new python31.def
> and libpython31.a files, and put them in the libs folder. That didn't
> work either.
>
> I've also tried adding some print statements in the \distutils\dist.py
> file, in the parse_config_files() function, just to see if Python
> properly parses the config file. And it does, both Python 2.6 and 3.1
> parse the distutils.cfg file properly. Yet something is making python
> 3 look for the VS/VC compiler instead of MinGW. I'll keep updating on
> any progress..

I think this is http://bugs.python.org/issue6377.

I applied the patch to my local copy of Python 3.1 and it seems to
work.

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python distutils build problems with MinGW

2010-02-01 Thread Andrej Mitrovic
On Feb 1, 4:03 am, Andrej Mitrovic  wrote:
> On Feb 1, 2:59 am, Andrej Mitrovic  wrote:
>
>
>
> > Hi,
>
> > I've made a similar post on the Cython mailing list, however I think
> > this is more python-specific. I'm having trouble setting up distutils
> > to use MinGW instead of Visual Studio when building a module. Even tho
> > I've just uninstalled VS, and cleared out any leftover VS environment
> > variables, distutils keeps wanting to use it.
>
> > The steps I took:
>
> > Fresh installation of Python 3.1.1
> > Successfully installed MinGW, added to the path variable (gcc in
> > command prompt works)
> > Successfully installed Cython, imports from Cython in Python work.
> > Added a distutils.cfg file in \Python31\Lib\distutils\ directory with:
>
> > [build]
> > compiler=mingw32
>
> > (also tried adding [build_ext] compiler=mingw32)
>
> > There's a demo setup.py module that came with Cython, I tried the
> > following commands:
>
> > 
>
> > > python setup.py build_ext --inplace
>
> > error: Unable to find vcvarsall.bat
>
> > > python setup.py build
>
> > error: Unable to find vcvarsall.bat
> > 
>
> > I'm having the exact same issue with trying to build the Polygon
> > library via MinGW. In fact, the reason I had installed Visual Studio
> > in the first place was to be able to build the Polygon library, since
> > I was having these errors.
>
> > What do I need to do to make distutils/python use MinGW?
>
> Update:
>
> I installed and tried building with Python 2.6, it calls MinGW when I
> have the distutils.cfg file configured properly (same configuration as
> the Python 3.1.1 one)
>
> But why doesn't it work on a fresh Python 3.1.1 installation as well?
> Is this a bug?

Also tried calling (Python 3.1.1):


python setup.py build --compiler=mingw32

error: Unable to find vcvarsall.bat


I've tried using pexports and the dlltool to build new python31.def
and libpython31.a files, and put them in the libs folder. That didn't
work either.

I've also tried adding some print statements in the \distutils\dist.py
file, in the parse_config_files() function, just to see if Python
properly parses the config file. And it does, both Python 2.6 and 3.1
parse the distutils.cfg file properly. Yet something is making python
3 look for the VS/VC compiler instead of MinGW. I'll keep updating on
any progress..
-- 
http://mail.python.org/mailman/listinfo/python-list


ANN: Wing IDE 3.2.4 released

2010-02-01 Thread Stephan Deibel

Hi,

Wingware has released version 3.2.4 of Wing IDE, our integrated development
environment for the Python programming language.  Wing IDE can be used on
Windows, Linux, and OS X to develop Python code for web, GUI, and embedded
scripting applications.  Wing IDE provides auto-completion, call tips, a
powerful debugger, unit testing, version control, search, and many other
features.

This release includes the following minor features and improvements:

* Corrected support for non-ascii I/O when debugging under Python 3.x
* Support debugging of wide unicode builds of Python 3.x
* Improve GUI responsiveness in very large projects (optimized external
 file change checking)
* Auto-enter last failed or canceled version control commit message
* Added context menu for copy/paste to commit message area in version
 control tools
* Version control annotate commands like 'svn blame' show results in
 scratch buffer
* Many other minor features and bug fixes; See the change log
 at http://wingware.com/pub/wingide/3.2.4/CHANGELOG.txt for details

*Wing 3.2 Highlights*

Version 3.2 of Wing IDE includes the following new features not present
in Wing IDE 3.1:

* Support for Python 3.0 and 3.1
* Rewritten version control integration with support for Subversion, CVS,
 Bazaar, git, Mercurial, and Perforce (*)
* Added 64-bit Debian, RPM, and tar file installers for Linux
* File management in Project view (**)
* Auto-completion in the editor obtains completion data from live runtime
 when the debugger is active (**)
* Perspectives: Create and save named GUI layouts and optionally automatically
  transition when debugging is started (*)
* Improved support for Cython and Pyrex (*.pyx files)
* Added key binding documentation to the manual
* Added Restart Debugging item in Debug menu and tool bar (**)
* Improved OS Commands and Bookmarks tools (*)

(*)'d items are available in Wing IDE Professional only.
(**)'d items are available in Wing IDE Personal and Professional only.

The release also contains many other minor features and bug fixes; see the
change log for details:  http://wingware.com/pub/wingide/3.2.4/CHANGELOG.txt

*Downloads*

Wing IDE Professional and Wing IDE Personal are commercial software and
require a license to run. A free trial license can be obtained directly from
the product when launched.  Wing IDE 101 can be used free of charge.

Wing IDE Pro 3.2.4        http://wingware.com/downloads/wingide/3.2

Wing IDE Personal 3.2.4   http://wingware.com/downloads/wingide-personal/3.2

Wing IDE 101 3.2.4        http://wingware.com/downloads/wingide-101/3.2

*About Wing IDE*

Wing IDE is an integrated development environment for the Python programming
language.  It provides powerful debugging, editing, code intelligence,
testing, version control, and search capabilities that reduce development and
debugging time, cut down on coding errors, and make it easier to understand
and navigate Python code.

Wing IDE is available in three product levels:  Wing IDE Professional is
the full-featured Python IDE, Wing IDE Personal offers a reduced feature
set at a low price, and Wing IDE 101 is a free simplified version designed
for teaching entry level programming courses with Python.

System requirements are Windows 2000 or later, OS X 10.3.9 or later for PPC or
Intel (requires X11 Server), or a recent Linux system (either 32 or 64 bit).
Wing IDE 3.2 supports Python versions 2.0.x through 3.1.x.

*Purchasing and Upgrading*

Wing 3.2 is a free upgrade for all Wing IDE 3.0 and 3.1 users. Any 2.x license
sold after May 2nd 2006 is free to upgrade; others cost 1/2 the normal price
to upgrade.

Upgrade a 2.x license: https://wingware.com/store/upgrade

Purchase a 3.x license:  https://wingware.com/store/purchase

--

The Wingware Team
Wingware | Python IDE
Advancing Software Development

www.wingware.com

--
http://mail.python.org/mailman/listinfo/python-list


Re: how long a Str can be used in this python code segment?

2010-02-01 Thread MRAB

Chris Rebert wrote:

On Mon, Feb 1, 2010 at 1:17 AM, Stephen.Wu <54wut...@gmail.com> wrote:

tmp=file.read() (very huge file)
if targetStr in tmp:
   print "find it"
else:
   print "not find"
file.close()

I checked: if file.read() is huge to some extent, it doesn't work, but
could anyone give me some more definite information on this problem?


If the file's contents are larger than available memory, you'll get a
MemoryError. To avoid this, you can read the file in by chunks (or, if
applicable, by lines) and see if each chunk/line matches.


If you're processing in chunks then you also need to consider the
possibility that what you're looking for crosses a chunk boundary, of
course. It's an easy case to miss! :-)
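
(A minimal sketch of that overlap handling - the helper name, chunk size, and
the file name in the usage line are made up; open the file in binary mode and
pass the target as a byte string so the lengths stay consistent:)

def file_contains(path, target, chunk_size=1 << 20):
    overlap = len(target) - 1
    prev = ''
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return False
            # search the tail of the previous chunk joined with the new one,
            # so a match straddling the chunk boundary is still found
            if target in prev + chunk:
                return True
            prev = chunk[-overlap:] if overlap else ''

print "find it" if file_contains('big.log', 'ERROR') else "not find"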
--
http://mail.python.org/mailman/listinfo/python-list


Re: Unable to install numpy

2010-02-01 Thread Robert Kern

On 2010-01-31 08:03 AM, vsoler wrote:

On Jan 18, 9:08 pm, Robert Kern  wrote:

On 2010-01-18 14:02 PM, vsoler wrote:


Hi all,



I just download Numpy, and tried to install it using "numpy-1.4.0-
win32-superpack-python2.6.exe"



I get an error:  "Python version 2.6 required, which was not found in
the Registry"



However, I am using Python 2.6 every day. I'm running Windows 7.



What can I do?


Please ask numpy installation questions on the numpy mailing list.

http://www.scipy.org/Mailing_Lists


Thank you Robert.

I'm going to direct my questions towards scipy.org. However, since I
do not find my questions already answered, could you tell me where to
post them?

I beg your pardon for still asking this question outside of scipy.org,
but I've been unable to move on.


Subscribe to the numpy-discussion mailing list following the directions given on 
the above-linked page. Then send email to numpy-discuss...@scipy.org with your 
question.


--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

--
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3147 - new .pyc format

2010-02-01 Thread Steven D'Aprano
On Mon, 01 Feb 2010 11:14:42 +0100, Daniel Fetchinson wrote:

> I also think the PEP is a great idea and proposes a solution to a real
> problem. But I also hear the 'directory clutter' argument and I'm really
> concerned too, having all these extra directories around (and quite a
> large number of them indeed!). 

Keep in mind that if you don't explicitly ask for the proposed feature, 
you won't see any change at all. You need to run Python with the -R 
switch, or set an environment variable. The average developer won't see 
any clutter at all unless she is explicitly supporting multiple versions.



> How about this scheme:
> 
> 1. install python source files to a shared (among python installations)
> location /this/is/shared
> 2. when python X.Y imports a source file from /this/is/shared it will
> create pyc files in its private area /usr/lib/pythonX.Y/site-packages/

$ touch /usr/lib/python2.5/site-packages/STEVEN
touch: cannot touch `/usr/lib/python2.5/site-packages/STEVEN': Permission 
denied

There's your first problem: most users don't have write-access to the 
private area. When you install a package, you normally do so as root, and 
it all works. When you import a module and it gets compiled as a .pyc 
file, you're generally running as a regular user.


> Time comparison would be between /this/is/shared/x.py and 
> /usr/lib/pythonX.Y/site-packages/x.pyc, for instance.

I don't quite understand what you mean by "time comparison".


[...]
> In /usr/lib/pythonX.Y/site-packages there would be only pyc files with
> magic number matching python X.Y.

Personally, I think it is a terrible idea to keep the source file and 
byte code file in such radically different places. They should be kept 
together. What you call "clutter" I call having the files that belong 
together kept together.



> So, basically nothing would change only the location of py and pyc files
> would be different from current behavior, but the same algorithm would
> be run to determine which one to load, when to create a pyc file, when
> to ignore the old one, etc.

What happens when there is a .pyc file in the same location as the .py 
file? Because it *will* happen. Does it get ignored, or does it take 
precedence over the site specific file?

Given:

./module.pyc
/usr/lib/pythonX.Y/site-packages/module.pyc

and you execute "import module", which gets used? Note that in this 
situation, there may or may not be a module.py file.


> What would be wrong with this setup?

Consider:

./module.py
./package/module.py

Under your suggestion, both of these will compile to

/usr/lib/pythonX.Y/site-packages/module.pyc



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: libpcap and python

2010-02-01 Thread Grant Edwards
On 2010-02-01, Mag Gam  wrote:
> Hello All,
>
> I used tcpdump to capture data on my network. I would like to analyze
> the data using python -- currently using ethereal and wireshark.
>
> I would like to get certain type of packets (I can get the hex code
> for them), what is the best way to do this? Lets say I want to capture
> all events of `ping localhost`

http://www.google.com/search?q=python+pcap

-- 
Grant Edwards                   grante             Yow! My face is new, my
                                  at               license is expired, and I'm
                               visi.com            under a doctor's care
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: HTML Parser which allows low-keyed local changes (upon serialization)

2010-02-01 Thread Stefan Behnel
Robert, 01.02.2010 14:36:
> Stefan Behnel wrote:
>> Robert, 31.01.2010 20:57:
>>> I tried lxml, but after walking and making changes in the element tree,
>>> I'm forced to do a full serialization of the whole document
>>> (etree.tostring(tree)) - which destroys the "human edited" format of the
>>> original HTML code. makes it rather unreadable.
>>
>> What do you mean? Could you give an example? lxml certainly does not
>> destroy anything it parsed, unless you tell it to do so.
> 
> of course it does not destroy during parsing.(?)

I meant "parsed" in the sense of "has parsed and is now working on".


> I mean: I want to walk through the parsed HTML tree with a Python script
> and modify things here and there (auto alt tags from DB/similar, link
> corrections, text sections/translated sentences... due to HTML code and
> content checks.)

Sure, perfectly valid use case.


> Then I want to output the changed tree - but as close to the original
> format as possible. No changes to my whitespace indentation, etc.
> Only local changes, where tags were really changed.

That's up to you. If you only apply local changes that do not change any
surrounding whitespace, you'll be fine.


> That's similar to what a good HTML editor does: after you make
> little changes, it doesn't reformat/re-spit-out your whole code layout
> from the tree/attribute logic alone; you only have local changes.

HTML editors don't work that way. They always "re-spit-out" the whole code
when you click on "save". They certainly don't track the original file
position of tags. What they preserve is the content, including whitespace
(or not, if they reformat the code, but that's usually an *option*).


> Such a "good HTML editor" must somehow track the original positions of
> the tags in the file. And during each logical change in the tree it must
> tracks the file position changes/offsets.

Sorry, but that's nonsense. The file position of a tag is determined by
whitespace, i.e. line endings and indentation. lxml does not alter that,
unless you tell it to do so.

Since you keep claiming that it *does* alter it, please come up with a
reproducible example that shows a) what you do in your code, b) what your
input is and c) what unexpected output it creates. Do not forget to include
the version number of lxml and libxml2 that you are using, as well as a
comment on /how/ the output differs from what you expected.

My stab in the dark is that you forgot to copy the tail text of elements
that you replace by new content, and that you didn't properly indent new
content that you added. But that's just that, a stab in the dark. You
didn't provide enough information for even an educated guess.
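
In case it is the tail text: copying it over when replacing an element is a
one-liner. A rough sketch - the file name, the id looked up, and the fragment
below are made-up examples, not taken from your code:

import lxml.html

root = lxml.html.parse('page.html').getroot()             # hypothetical input
old = root.get_element_by_id('menu')                      # element to replace
new = lxml.html.fragment_fromstring('<div id="menu">new content</div>')
new.tail = old.tail          # carry over the whitespace/newline after the old tag
old.getparent().replace(old, new)
print lxml.html.tostring(root)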

Stefan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: HTML Parser which allows low-keyed local changes (upon serialization)

2010-02-01 Thread Robert

Robert wrote:

Stefan Behnel wrote:

Robert, 31.01.2010 20:57:

I tried lxml, but after walking and making changes in the element tree,
I'm forced to do a full serialization of the whole document
(etree.tostring(tree)) - which destroys the "human edited" format of the
original HTML code. makes it rather unreadable.


What do you mean? Could you give an example? lxml certainly does not
destroy anything it parsed, unless you tell it to do so.



of course it does not destroy during parsing.(?)

I mean: I want to walk through the parsed HTML tree with a Python script 
and modify things here and there (auto alt tags from DB/similar, link 
corrections, text sections/translated sentences... due to HTML code and 
content checks.)


Then I want to output the changed tree - but as close to the original 
format as possible. No changes to my whitespace indentation, etc. 
Only local changes, where tags were really changed.


That's similar to what a good HTML editor does: after you make 
little changes, it doesn't reformat/re-spit-out your whole code layout 
from the tree/attribute logic alone; you only have local changes.
But a simple HTML editor like the one in Mozilla-Seamonkey outputs a whole 
new HTML file, produces the HTML from the logical tree only (in its own (ugly) 
style), destroys my whitespace layout and much more - forgetting 
everything about the original layout.


Such a "good HTML editor" must somehow track the original positions of 
the tags in the file. And during each logical change in the tree it must 
tracks the file position changes/offsets. That thing seems to miss in 
lxml and BeautifulSoup which I tried so far.


This is a frequent need I have. Nobody else's?

Seems I need to write my own or patch BS to do that extra tracking?



basic feature(s) of such a parser, perhaps:

* can it tell, for each tag object in the parsed tree, at what 
original file position start:end it resided? Even the basic need: 
tell me the line number, e.g. for warning/analysis reports.


(* do the tree objects auto-track/know whether they were changed? (for 
convenience; a tree copy may serve this otherwise ..)


the creation of an output with local changes would be rather 
simple from that ...
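
(One partial answer to the line-number point: lxml elements do carry a
sourceline attribute after parsing, so at least "which line did this tag
start on" is available - a tiny check, file name hypothetical:)

import lxml.html

tree = lxml.html.parse('page.html')
for el in tree.getroot().iter():
    # sourceline is the line in the original file where the start tag began;
    # there is no end offset, so exact start:end ranges still are not available
    print el.tag, el.sourceline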



Robert
--
http://mail.python.org/mailman/listinfo/python-list


Re: [Edu-sig] odd drawing problem with turtle.py

2010-02-01 Thread Vern Ceder

Brian Blais wrote:

On Jan 31, 2010, at 23:05 , John Posner wrote:


Try commenting out this statement:

   self.turtle.tracer(False)

That helps on Python 2.6.4.



interesting.  It seems as if the tracer property is a global one:


Actually, the tracer method that does the work is part of the 
TurtleScreen class, and individual Turtle instances just access the 
tracer method of the TurtleScreen they inhabit... if that makes sense. 
So since your turtles are on the same screen, yes, in effect, tracer is 
sort of global.
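
A quick way to see that, using the shared screen object directly - just a
sketch:

from turtle import Screen, Turtle

screen = Screen()
screen.tracer(False)     # turns animation off for every turtle on this screen

t1, t2 = Turtle(), Turtle()
t1.forward(100)
t2.circle(50)

screen.update()          # with the tracer off you must ask for a redraw explicitly
screen.tracer(True)      # back to normal animated drawing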


Cheers,
Vern



In [1]:t1=Turtle()

In [2]:t1.tracer()
Out[2]:1

In [3]:t1.tracer(False)

In [4]:t1.tracer()
Out[4]:0

In [5]:t2=Turtle()

In [6]:t2.tracer()
Out[6]:0

In [7]:t2.tracer(True)

In [8]:t1.tracer()
Out[8]:1

looks like I need to set the tracer manually. 

however, even if the tracer is off, shouldn't it still draw the line? 
 when I put in the T.tracer(True) it works, but I shouldn't need to I think.



On Jan 31, 2010, at 21:11 , kirby urner wrote:


I don't see where you've defined a Turtle class to instantiate sir.


Turtle is given in turtle.py.  I should have subclassed it, but I was 
being lazy.  :)


thanks for the fast replies!


bb


On Sun, Jan 31, 2010 at 4:27 PM, Brian Blais > wrote:

I'm on Python 2.5, but using the updated turtle.py Version 1.0.1 - 24. 9.
2009.  The following script draws 5 circles, which it is supposed to, but
then doesn't draw the second turtle which is supposed to simply move
forward.  Any ideas?
from turtle import *
from numpy.random import randint
resetscreen()
class Circle(object):
    def __init__(self,x,y,r,color):
        self.x=x
        self.y=y
        self.r=r
        self.color=color

        self.turtle=Turtle(visible=False)
        self.turtle.tracer(False)
        self.draw()

    def draw(self):
        self.turtle.penup()
        self.turtle.setposition(self.x,self.y)
        self.turtle.setheading(0)
        self.turtle.backward(self.r)
        self.turtle.pendown()
        self.turtle.fill(True)
        self.turtle.pencolor("black")
        self.turtle.fillcolor(self.color)
        self.turtle.circle(self.r)
        self.turtle.fill(False)
        self.turtle.penup()

for i in range(5):
    c=Circle(randint(-350,350),randint(-250,250),10,"red")


T=Turtle()
T.forward(100)
T.forward(100)







thanks,

bb
--
Brian Blais
bbl...@bryant.edu 
http://web.bryant.edu/~bblais
http://bblais.blogspot.com/



___
Edu-sig mailing list
edu-...@python.org 
http://mail.python.org/mailman/listinfo/edu-sig




--
Brian Blais
bbl...@bryant.edu 
http://web.bryant.edu/~bblais
http://bblais.blogspot.com/






___
Edu-sig mailing list
edu-...@python.org
http://mail.python.org/mailman/listinfo/edu-sig


--
This time for sure!
   -Bullwinkle J. Moose
-
Vern Ceder, Director of Technology
Canterbury School, 3210 Smith Road, Ft Wayne, IN 46804
vce...@canterburyschool.org; 260-436-0746; FAX: 260-436-5137

The Quick Python Book, 2nd Ed - http://bit.ly/bRsWDW
--
http://mail.python.org/mailman/listinfo/python-list


Re: HTML Parser which allows low-keyed local changes (upon serialization)

2010-02-01 Thread Robert

Stefan Behnel wrote:

Robert, 31.01.2010 20:57:

I tried lxml, but after walking and making changes in the element tree,
I'm forced to do a full serialization of the whole document
(etree.tostring(tree)) - which destroys the "human edited" format of the
original HTML code. makes it rather unreadable.


What do you mean? Could you give an example? lxml certainly does not
destroy anything it parsed, unless you tell it to do so.



of course it does not destroy during parsing.(?)

I mean: I want to walk through the parsed HTML tree with a Python 
script and modify things here and there (auto alt tags from 
DB/similar, link corrections, text sections/translated 
sentences... due to HTML code and content checks.)


Then I want to output the changed tree - but as close to the 
original format as possible. No changes to my whitespace 
indentation, etc. Only local changes, where tags were really 
changed.


That's similar to what a good HTML editor does: after you 
make little changes, it doesn't reformat/re-spit-out your whole 
code layout from the tree/attribute logic alone; you only have 
local changes.
But a simple HTML editor like the one in Mozilla-Seamonkey outputs a 
whole new HTML file, produces the HTML from the logical tree only 
(in its own (ugly) style), destroys my whitespace layout and 
much more - forgetting everything about the original layout.


Such a "good HTML editor" must somehow track the original 
positions of the tags in the file. And during each logical change 
in the tree it must tracks the file position changes/offsets. That 
thing seems to miss in lxml and BeautifulSoup which I tried so far.


This is a frequent need I have. Nobody else's?

Seems I need to write my own or patch BS to do that extra tracking?


Robert
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python and Ruby

2010-02-01 Thread Steve Holden
Terry Reedy wrote:
> On 1/31/2010 7:25 PM, Steven D'Aprano wrote:
>> On Sun, 31 Jan 2010 15:40:36 -0800, Chris Rebert wrote:
>>
>>> On Sun, Jan 31, 2010 at 2:36 PM, Steven D'Aprano
>>>   wrote:
 On Sun, 31 Jan 2010 04:28:41 -0800, Ed Keith wrote:
> In most functional languages you just name a function to access it and
> you do it ALL the time.
>
> for example, in if you have a function 'f' which takes two parameters
> to call the function and get the result you use:
>
>   f 2 3
>
> If you want the function itself you use:
>
> f

 How do you call a function of no arguments?
>>>
>>> It's not really a function in that case, it's just a named constant.
>>> (Recall that functions don't/can't have side-effects.)
> 
> Three of you gave essentially identical answers, but I still do not see
> how given something like
> 
> def f(): return 1
> 
> I differentiate between 'function object at address xxx' and 'int 1'
> objects.
> 
But in a functional environment you don't need to. That's pretty much
the whole point.

regards
 Steve
-- 
Steve Holden   +1 571 484 6266   +1 800 494 3119
PyCon is coming! Atlanta, Feb 2010  http://us.pycon.org/
Holden Web LLC http://www.holdenweb.com/
UPCOMING EVENTS:http://holdenweb.eventbrite.com/

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: User-defined exceptions from 2.6

2010-02-01 Thread Joan Miller
On 1 feb, 12:45, Steven D'Aprano  wrote:
> On Mon, 01 Feb 2010 02:19:39 -0800, Joan Miller wrote:
> > Which is the best way to create user-defined exceptions since that
> > *BaseException.message* is deprecated in Python 2.6 ?
>
> Inherit from an existing exception.
>
> >>> class MyValueException(ValueError):
>
> ...     pass
> ...
>
> >>> raise MyValueException("value is impossible, try again")
>
> Traceback (most recent call last):
>   File "", line 1, in 
> __main__.MyValueException: value is impossible, try again
>
> >>> class GenericError(Exception):
>
> ...     pass
> ...
> >>> raise GenericError("oops")
>
> Traceback (most recent call last):
>   File "", line 1, in 
> __main__.GenericError: oops
>
> Do you need anything more complicated?
>
It's enough, thanks!
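
(And if extra data ever needs to travel with the exception, attributes replace
the deprecated .message cleanly - a small sketch, names made up:)

class ValidationError(ValueError):
    """Carries the offending value and a reason as attributes."""
    def __init__(self, value, reason):
        ValueError.__init__(self, "%r rejected: %s" % (value, reason))
        self.value = value
        self.reason = reason

try:
    raise ValidationError(-3, "must be positive")
except ValidationError as e:
    print e.value, "-", e.reason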
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python and Ruby

2010-02-01 Thread Steve Holden
Steven D'Aprano wrote:
> On Sun, 31 Jan 2010 22:43:56 -0800, alex23 wrote:
> 
>> Steven D'Aprano  wrote:
>>> You're using that term wrong. It looks to me that you don't actually
>>> know what a straw man argument is. A straw man argument is when
>>> somebody responds to a deliberately weakened or invalid argument as if
>>> it had been made by their opponent.
>> Jeez, Steve, you're beginning to sound like some kind of fallacy
>> zealot... ;)
> 
> Death to all those who confuse agumentum ad populum with argumentum ad 
> verecundiam!!!
> 
> 
Yeah, what did the zealots ever do for us?

regards
 Steve
-- 
Steve Holden   +1 571 484 6266   +1 800 494 3119
PyCon is coming! Atlanta, Feb 2010  http://us.pycon.org/
Holden Web LLC http://www.holdenweb.com/
UPCOMING EVENTS:http://holdenweb.eventbrite.com/

-- 
http://mail.python.org/mailman/listinfo/python-list

