Re: Python as a scripting language. Alternative to bash script?

2010-07-07 Thread Stefan Behnel

John Nagle, 28.06.2010 19:57:

Programs have "argv" and "argc", plus environment variables,
going in. So, going in, there are essentially subroutine parameters.
But all that comes back is an exit code. They should have had
something similar coming back, with arguments to "exit()" returning
the results. Then the "many small intercommunicating programs"
concept would have worked much better.


Except that you just broke the simplicity of the pipe mechanism.

Stefan

--
http://mail.python.org/mailman/listinfo/python-list


Python 3 - Is PIL/wxPython/PyWin32 supported?

2010-07-07 Thread durumdara
Hi!

I have an environment under Python 2.6 (WinXP). It is based on PIL,
wxPython and PyWin32.

On the projects' pages I see an official installer only for PyWin32.

I don't know whether PIL or wxPython supports Python 3 or not. Maybe
these packages work with some trick.

Does anybody know about it?
Can I replace my Py2.6 without losing PIL/wxPython?

Thanks for your help:
   dd

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Plot problem.. ?? No sign at all

2010-07-07 Thread Johan Grönqvist

2010-07-06 19:18, Ritchy lelis wrote:

On 6 jul, 17:29, Alan G Isaac  wrote:

Unfortunately I cannot make sense of the code you posted.
Provide a detailed description in words (or pseudocode)
of what you are trying to accomplish.  Be very careful
and detailed if you want a useful response.

Alan Isaac


hummm...

ok, i will try to make that detailed description.


I can tell you why I do not understand from your posted code what you 
are trying to do.


Firstly, I do not understand if you are trying to plot a surface, a set 
of curves, or a curve, or just a set of points? In your posted code, the 
plot command is part of the else clause, and my guess is that you never 
intend the else-clause to be executed at all.


In your code snippet you loop over two arrays (Vi and Vref), compute a 
scalar value V0, and all plot-commands you issue are of the form 
plot(V0). This will probably draw a line of one point (for each value in 
Vi and Vref), which may not be what you want, and if it draws anything 
at all, then all points will be drawn at the same x-value, which is also 
probably not what you want.


Secondly, how are the Vi and Vref related to your axes? I assume you 
want to plot all values you compute for V0, but as a function of what? 
When I use the plot command, I usually give it (at least) two arguments, 
where the first is the x-axis, and the second is the y-axis.
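
For example, a minimal sketch of the one-argument versus two-argument
forms (assuming matplotlib/pylab, with made-up data standing in for Vref
and V0):

import numpy as np
import matplotlib.pyplot as plt

Vref = np.linspace(-1.0, 1.0, 50)   # values for the x-axis
V0 = Vref ** 2                      # values computed from them, for the y-axis

plt.plot(V0)         # x defaults to the point index 0..49
plt.plot(Vref, V0)   # explicit x and y
plt.show()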


After I have understood those things, the next question would be about 
the maths relating the Vi and Vref values to the V0 values, but I do not 
think I will understand those until after the above points are explained 
clearer.


I definitely think your English is not a problem here.



Johan

--
http://mail.python.org/mailman/listinfo/python-list


Re: The real problem with Python 3 - no business case for conversion (was "I strongly dislike Python 3")

2010-07-07 Thread Paul McGuire
On Jul 6, 3:30 am, David Cournapeau  wrote:
> On Tue, Jul 6, 2010 at 4:30 AM, D'Arcy J.M. Cain  wrote:
>
> One thing that would be very useful is how to maintain something that
> works on 2.x and 3.x, but not limiting yourself to 2.6. Giving up
> versions below 2.6 is out of the question for most projects with a
> significant userbase IMHO. As such, the idea of running the python 3
> warnings is not so useful IMHO - unless it could be made to work
> better for python 2.x < 2.6, but I am not sure the idea even makes
> sense.
>
This is exactly how I felt about my support for pyparsing: I was
trying to continue providing support for 2.3 users, up through 3.x
users, with a single code base.  (This would actually have been
possible if I had been willing to introduce a performance penalty for
Python 2 users, but performance is such a critical issue for parsing I
couldn't justify it to myself.)  This meant that I had to constrain my
implementation, while trying to incorporate forward-looking support
features (such as __bool__ and __dir__), which have no effect on older
Python versions, but support additions in newer Pythons.  I just
couldn't get through on the python-dev list that I couldn't just
upgrade my code to 2.6 and then use 2to3 to keep in step across the
2-3 chasm, as this would leave behind my faithful pre-2.6 users.

Here are some of the methods I used:

- No use of sets.  Instead I defined a very simple set simulation
using dict keys, which could be interchanged with set for later
versions.

- No generator expressions, only list comprehensions.

- No use of decorators.  BUT, pyparsing includes a decorator method,
traceParseAction, which can be used by users with later Pythons as
@traceParseAction in their own code.

- No print statements.  As pyparsing is intended to be an internal
module, it does no I/O as part of its function - it only processes a
given string, and returns a data structure.

- Python 2-3 compatible exception syntax.  This may have been my
trickiest step.  The change of syntax for except from

except ExceptionType, ex:

to:

except ExceptionType as ex:

is completely forward and backward incompatible.  The workaround is to
rewrite as:

except ExceptionType:
ex = sys.exc_info()[0]

which works just fine in 2.x and 3.x.  However, there is a slight
performance penalty in doing this, and pyparsing uses exceptions as
part of its grammar success/failure signalling and backtracking; I've
used this technique everywhere I can get away with it, but there is
one critical spot where I can't use it, so I have to keep 2 code bases
with slight differences between them.

- Implement __bool__, followed by __nonzero__ = __bool__.  This will
give you boolean support for your classes in 2.3-3.1.

- Implement __dir__, which is unused by old Pythons, but supports
customization of dir() output for your own classes.

- Implement __len__, __contains__, __iter__ and __reversed__ for
container classes.

- No ternary expressions.  Not too difficult really, there are several
well-known workarounds for this, either by careful use of and's and
or's, or using the bool-as-int to return the value from
(falseValue,trueValue)[condition].

- Define a version-sensitive portion of your module, to define
synonyms for constants that changed name between versions.  Something
like:

_PY3K = sys.version_info[0] > 2
if _PY3K:
    _MAX_INT = sys.maxsize
    basestring = str
    _str2dict = set
    alphas = string.ascii_lowercase + string.ascii_uppercase
else:
    _MAX_INT = sys.maxint
    range = xrange
    _str2dict = lambda strg : dict( [(c,0) for c in strg] )
    alphas = string.lowercase + string.uppercase

The main body of my code uses range throughout (for example), and with
this definition I get the iterator behavior of xrange regardless of
Python version.
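
For illustration, a small sketch (not pyparsing's actual code) that
combines a few of the idioms above: __bool__ with __nonzero__ as an
alias, the tuple-index conditional, and the version-neutral except
clause.

import sys

class Match(object):
    """Toy result object usable from Python 2.3 through 3.x."""
    def __init__(self, tokens):
        self.tokens = list(tokens)

    def __len__(self):
        return len(self.tokens)

    def __bool__(self):                # truth test on Python 3.x
        return len(self.tokens) > 0
    __nonzero__ = __bool__             # same test on Python 2.3-2.x

def sign_char(value):
    # ternary-free conditional via bool-as-int indexing
    return ('-', '+')[value >= 0]

def parse_int(text):
    # except-clause spelling that parses on both 2.x and 3.x
    try:
        return int(text)
    except ValueError:
        ex = sys.exc_info()[1]         # the exception instance
        raise ValueError("bad integer literal %r (%s)" % (text, ex))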


In the end I still have 2 source files, one for Py2 and one for Py3,
but there is only a small and manageable number of differences between
them, and I expect at some point I will move forward to supporting Py3
as my primary target version.  But personally I think this overall
Python 2-3 migration process is moving along at a decent rate, and I
should be able to make my switchover in another 12-18 months.  But in
the meantime, I am still able to support all versions of Python NOW,
and I plan to continue doing so (albeit "support" for 2.x versions
will eventually mean "continue to offer a frozen feature set, with
minimal bug-fixing if any").

I realize that pyparsing is a simple-minded module in comparison to
others: it is pure Python, so it has no issues with C extensions; it
does no I/O, so print-as-statement vs. print-as-function is not an
issue; and it imports few other modules, so the ones it does import have
not been dropped in Py3; and overall it is only a few thousand lines of
code.  But I just offer this post as a concrete data point in this
discussion.

-- Paul
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: The real problem with Python 3 - no business case for conversion

2010-07-07 Thread Paul Rubin
Paul McGuire  writes:
> is completely forward and backward incompatible.  The workaround is to
> rewrite as:
>
> except ExceptionType:
> ex = sys.exc_info()[0]
>
> which works just fine in 2.x and 3.x.

Are you sure?  I wonder if there might be some race condition that could
make it fail.

I didn't even know about (or forgot) this change.  Yucch.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The real problem with Python 3 - no business case for conversion

2010-07-07 Thread Thomas Jollans
On 07/07/2010 10:58 AM, Paul Rubin wrote:
> Paul McGuire  writes:
>> is completely forward and backward incompatible.  The workaround is to
>> rewrite as:
>>
>> except ExceptionType:
>> ex = sys.exc_info()[0]
>>
>> which works just fine in 2.x and 3.x.
> 
> Are you sure?  I wonder if there might be some race condition that could
> make it fail.

Luckily, no: (lib. docs on exc_info())

This function returns a tuple of three values that give information
about the exception that is currently being handled. The information
returned is specific both to the current thread and to the current stack
frame.

> 
> I didn't even know about (or forgot) this change.  Yucch.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The real problem with Python 3 - no business case for conversion (was "I strongly dislike Python 3")

2010-07-07 Thread geremy condra
On Tue, Jul 6, 2010 at 1:37 AM, Terry Reedy  wrote:
> On 7/5/2010 9:00 PM, Philip Semanchuk wrote:
>>
>> On Jul 5, 2010, at 6:41 PM, Chris Rebert wrote:
>>
>>> On Mon, Jul 5, 2010 at 3:38 PM, Philip Semanchuk
>
 I ported two pure C extensions from 2 to 3 and was even able to keep a
 single C codebase. I'd be willing to contribute my experiences to a
 document
 somewhere. (Is there a Wiki?)
>>>
>>> Indeed there is: http://wiki.python.org/moin/
>>
>> Thanks. I don't want to appear ungrateful, but I was hoping for
>> something specific to the 2-to-3 conversion. I guess someone has to
>> start somewhere...
>
> There is an existing 2to3 and other pages for Python code conversion. I do
> not know of any for CAPI conversion. The need for such has been acknowledged
> among the devs but if there is nothing yet, we need someone with specialized
> experience and a bit of time to make a first draft. If you start one, give
> it an easy to remember name C2to3? 2to3Capi? You choose. And link to it from
> the 2to3 page
>
> In his post on this thread, Martin Loewis volunteered to list what he knows
> from psycopg2 if someone else will edit.

I'm not sure why I don't have this post, but I'm happy to help edit
etc if Martin
wants to put together a rough draft.

Geremy Condra
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread Jonathan Hartley
On Jul 6, 4:50 pm, sturlamolden  wrote:
> Just a little reminder:
>
> Microsoft has withdrawn VS2008 in favor of VS2010. The express version
> is also unavailable for download. >:((
>
> We can still get a VC++ 2008 compiler required to build extensions for
> the official Python 2.6 and 2.7 binary installers here (Windows 7 SDK
> for .NET 3.5 SP1):
>
> http://www.microsoft.com/downloads/details.aspx?familyid=71DEB800-C59...
>
> Download today, before it goes away!
>
> Microsoft has now published a download for Windows 7 SDK for .NET 4.
> It has the VC++ 2010 compiler. It can be a matter of days before the VC
> ++ 2008 compiler is totally unavailable.


I presume this problem would go away if future versions of Python
itself were compiled on Windows with something like MinGW gcc. Also,
this would solve the pain of Python developers attempting to
redistribute py2exe versions of their programs (i.e. they have to own
a Visual Studio license to legally be able to redistribute the
required C runtime). I don't understand enough to know why Visual
Studio was chosen instead of MinGW. Can anyone shed any light on that
decision?

Many thanks

  Jonathan Hartley
-- 
http://mail.python.org/mailman/listinfo/python-list


Problem With the PyRtf Footers

2010-07-07 Thread srinivas hn
Hi all,

I am using pyrtf to generate RTF documents from HTML. I am able to
generate the documents; the problem is with the footer. It shows up only
on the first page, and for the rest of the pages it comes out empty. I
am using section.FirstFooter for the first-page footer and
section.Footer for the subsequent pages. I am not able to figure out
what exactly the problem is. If anybody knows, please help me.

Thanks in Advance !


Srinivas HN
ph-9986229891
-- 
http://mail.python.org/mailman/listinfo/python-list


Python -- floating point arithmetic

2010-07-07 Thread david mainzer

Dear Python-User,


today i create some slides about floating point arithmetic. I used an
example from

http://docs.python.org/tutorial/floatingpoint.html

so i start the python shell on my linux machine:

d...@maxwell $ python
Python 2.6.5 (release26-maint, May 25 2010, 12:37:06)
[GCC 4.3.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> sum = 0.0
>>> for i in range(10):
...     sum += 0.1
...
>>> sum
0.99999999999999989
>>>
But thats looks a little bit wrong for me ... i must be a number greater
then 1.0 because 0.1 = 
0.155511151231257827021181583404541015625000
in python ... if i print it.

So i create an example program:

sum = 0.0
n = 10
d = 1.0 / n
print "%.60f" % ( d )
for i in range(n):
print "%.60f" % ( sum )
sum += d

print sum
print "%.60f" % ( sum )


 RESULTs --
0.1555111512312578270211815834045410156250
0.
0.1555111512312578270211815834045410156250
0.20001110223024625156540423631668090820312500
0.3000444089209850062616169452667236328125
0.40002220446049250313080847263336181640625000
0.5000
0.59997779553950749686919152736663818359375000
0.6999555910790149937383830547332763671875
0.79993338661852249060757458209991455078125000
0.8999111821580299874767661094665527343750
1.0
0.99988897769753748434595763683319091796875000

and the jump from 0.50*** to 0.5999* looks wrong
for me ... do i a mistake or is there something wrong in the
representation of the floating points in python?

my next question, why could i run

print "%.66f" % ( sum )

but not

print "%.67f" % ( sum )

can anybody tell me how python internal represent a float number??


Best and many thanks in advanced,
David

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Recommend a MySQLdb Forum

2010-07-07 Thread Philip Semanchuk


On Jul 6, 2010, at 3:16 PM, Tim Johnson wrote:


Greetings:
I would appreciate it if some could recommend a MySQLdb forum.


The one associated the sourceforge project seems like a good bet.

1) go here: http://sourceforge.net/projects/mysql-python/
2) click support

--
http://mail.python.org/mailman/listinfo/python-list


Re: Python -- floating point arithmetic

2010-07-07 Thread Mark Dickinson
On Jul 7, 1:05 pm, david mainzer  wrote:
> Dear Python-User,
>
> today i create some slides about floating point arithmetic. I used an
> example from
>
> http://docs.python.org/tutorial/floatingpoint.html
>
> so i start the python shell on my linux machine:
>
> d...@maxwell $ python
> Python 2.6.5 (release26-maint, May 25 2010, 12:37:06)
> [GCC 4.3.4] on linux2
> Type "help", "copyright", "credits" or "license" for more information.>>> >>> 
> sum = 0.0
> >>> >>> for i in range(10):
>
> ...     sum += 0.1
> ...>>> >>> sum
> 0.99989
>
> But thats looks a little bit wrong for me ... i must be a number greater
> then 1.0 because 0.1 = 
> 0.155511151231257827021181583404541015625000
> in python ... if i print it.

So you've identified one source of error here, namely that 0.1 isn't
exactly representable (and you're correct that the value stored
internally is actually a little greater than 0.1).  But you're
forgetting about the other source of error in your example: when you
do 'sum += 0.1', the result typically isn't exactly representable, so
there's another rounding step going on.  That rounding step might
produce a number that's smaller than the actual exact sum, and if
enough of your 'sum += 0.1' results are rounded down instead of up,
that would easily explain why the total is still less than 1.0.
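
One quick way to see the cumulative effect of those per-addition
roundings (assuming Python 2.6 or later, whose math.fsum rounds only
once, at the end):

import math

s = 0.0
for _ in range(10):
    s += 0.1                       # ten separately rounded additions
print repr(s)                      # slightly less than 1.0
print repr(math.fsum([0.1] * 10))  # 1.0: a single correctly rounded total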

>
> So i create an example program:
>
> sum = 0.0
> n = 10
> d = 1.0 / n
> print "%.60f" % ( d )
> for i in range(n):
>     print "%.60f" % ( sum )
>     sum += d
>
> print sum
> print "%.60f" % ( sum )
>
>  RESULTs --
> 0.1555111512312578270211815834045410156250
> 0.
> 0.1555111512312578270211815834045410156250
> 0.20001110223024625156540423631668090820312500
> 0.3000444089209850062616169452667236328125
> 0.40002220446049250313080847263336181640625000
> 0.5000
> 0.59997779553950749686919152736663818359375000
> 0.6999555910790149937383830547332763671875
> 0.79993338661852249060757458209991455078125000
> 0.8999111821580299874767661094665527343750
> 1.0
> 0.99988897769753748434595763683319091796875000
>
> and the jump from 0.50*** to 0.5999* looks wrong
> for me ... do i a mistake or is there something wrong in the
> representation of the floating points in python?

Look at this more closely:  you're adding

0.5000

to

0.155511151231257827021181583404541015625

The *exact* result is, of course

0.655511151231257827021181583404541015625

However, that's not a number that can be exactly represented as a C
double (which is how Python stores floats internally).  This value
falls between the two (consecutive) representable values:

0.59997779553950749686919152736663818359375

and

0.600088817841970012523233890533447265625

But of these two, the first is closer to the exact value than the
second, so that's the result that you get.

You can convince yourself of these results by using the fractions
module to do exact arithmetic:

>>> from fractions import Fraction
>>> tenth = Fraction.from_float(0.1)
>>> half = Fraction.from_float(0.5)
>>> point_six = Fraction.from_float(0.6)   # 0.5999...
>>> point_six_plus = Fraction.from_float(0.6 + 2**-53)  # next float up, 0.600...
>>> sum = tenth + half  # exact value of the sum
>>> point_six < sum < point_six_plus  # lies between point_six and point_six_plus
True
>>> sum - point_six < point_six_plus - sum  # but it's closer to point_six
True


> my next question, why could i run
>
> print "%.66f" % ( sum )
>
> but not
>
> print "%.67f" % ( sum )

That's a historical artefact resulting from use of a fixed-length
buffer somewhere deep in Python's internals.  This restriction is
removed in Python 2.7 and Python 3.x.

> can anybody tell me how python internal represent a float number??

In CPython, it's stored as a C double, which typically means in IEEE
754 binary64 format.  (Of course, since it's a Python object, it's not
just storing the C double itself;  it also has fields for the object
type and the reference count, so a Python float typically takes 16
bytes of memory on a 32-bit machine, and 24 bytes on a 64-bit
machine.)

--
Mark
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python -- floating point arithmetic

2010-07-07 Thread Thomas Jollans
On 07/07/2010 02:05 PM, david mainzer wrote:
> today i create some slides about floating point arithmetic. I used an
> example from
> 
> http://docs.python.org/tutorial/floatingpoint.html
> 
> so i start the python shell on my linux machine:
> 
> d...@maxwell $ python
> Python 2.6.5 (release26-maint, May 25 2010, 12:37:06)
> [GCC 4.3.4] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
>>> sum = 0.0
>>> for i in range(10):
> ... sum += 0.1
> ...
>>> sum
> 0.99989
>>>
> But thats looks a little bit wrong for me ... i must be a number greater
> then 1.0 because 0.1 = 
> 0.155511151231257827021181583404541015625000
> in python ... if i print it.

The simple fact of the matter is: floating point arithmetic isn't
accurate. This has nothing to do with Python, it's the fault of your
processor's floating point handling. It's good enough in most cases, but
you should never rely on a floating-point number to have exactly a
certain value. It won't work. To generalize your example a bit:

>>> def test_floating_product(a, b):
... sum = 0
... for _ in range(int(b)):
... sum += a
... return sum, a * int(b), sum == a * b
...
>>> test_floating_product(0.1, 1)
(0.1, 0.1, True)
>>> test_floating_product(0.1, 10)
(0., 1.0, False)
>>> test_floating_product(0.2, 4)
(0.8, 0.8, True)
>>> test_floating_product(0.2, 5)
(1.0, 1.0, True)
>>> test_floating_product(0.2, 6)
(1.2, 1.2002, False)
>>>


>  RESULTs --
> 0.1555111512312578270211815834045410156250
> 0.
> 0.1555111512312578270211815834045410156250
> 0.20001110223024625156540423631668090820312500
> 0.3000444089209850062616169452667236328125
> 0.40002220446049250313080847263336181640625000
> 0.5000
> 0.59997779553950749686919152736663818359375000
> 0.6999555910790149937383830547332763671875
> 0.79993338661852249060757458209991455078125000
> 0.8999111821580299874767661094665527343750
> 1.0
> 0.99988897769753748434595763683319091796875000
> 
> and the jump from 0.50*** to 0.5999* looks wrong
> for me ... do i a mistake or is there something wrong in the
> representation of the floating points in python?

the difference is almost exactly 0.1, so that looks okay

> 
> my next question, why could i run
> 
> print "%.66f" % ( sum )
> 
> but not
> 
> print "%.67f" % ( sum )

I can run either... with Python 3.1. Using 2.6, I get a nice error message:

>>> "%.129f" % 0.1
Traceback (most recent call last):
  File "", line 1, in 
OverflowError: formatted float is too long (precision too large?)

There just isn't anything like 67 decimals of information available.
Having that information wouldn't help you a bit.

Basically, floating point numbers are stored in the format

N * (2 ** E)

with a fixed number of significant bits: as E gets larger, the absolute
precision decreases.
Rounding errors occur at the last few decimals, in either direction,
depending on the numbers.
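
A small sketch of that N * 2**E decomposition using the standard
library (math.frexp and float.hex, both available in 2.6 and later):

import math

for x in (0.1, 0.5, 0.6):
    m, e = math.frexp(x)           # x == m * 2**e, with 0.5 <= m < 1
    print x, '=', m, '* 2**', e
    print '  as hex:', x.hex()     # significand bits and exponent directly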


> 
> can anybody tell me how python internal represent a float number??
> 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python -- floating point arithmetic

2010-07-07 Thread Christian Heimes
> can anybody tell me how python internal represent a float number??

It's an IEEE 754 double precision float on all hardware platforms that
support IEEE 754 semantics. Python follows the C99 standards for double
and complex numbers.
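
A quick way to check that on your own machine (a sketch, assuming
CPython on an IEEE 754 platform):

import struct, binascii

print struct.calcsize('d') * 8                  # 64 bits per C double
print binascii.hexlify(struct.pack('>d', 0.1))  # 3fb999999999999a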

Christian

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: C interpreter in Lisp/scheme/python

2010-07-07 Thread Richard Bos
Tim Rentsch  wrote:

> nanothermite911fbibustards 
> 
> > How to make Lisp go faster than C
> > Didier Verna
> 
> Asking whether Lisp is faster than C is like asking why it's
> colder in the mountains than it is in the summer.

YM warmer.

HTH; HAND.

Richard
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python 3 - Is PIL/wxPython/PyWin32 supported?

2010-07-07 Thread Daniel Fetchinson
> I don't know that PIL or wxPython supports Python 3 or not. May with
> some trick these packages are working.
>
> Does anybody know about it?
> Can I replace my Py2.6 without lost PIL/wxPython?

PIL currently does not support python 3 but release 1.1.7 will in the
future. Don't ask me when, I don't know.

I have no idea about the rest.

Cheers,
Daniel



-- 
Psss, psss, put it down! - http://www.cafepress.com/putitdown
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python -- floating point arithmetic

2010-07-07 Thread Philip Semanchuk


On Jul 7, 2010, at 9:08 AM, Thomas Jollans wrote:


On 07/07/2010 02:05 PM, david mainzer wrote:

today i create some slides about floating point arithmetic. I used an
example from

http://docs.python.org/tutorial/floatingpoint.html

so i start the python shell on my linux machine:

d...@maxwell $ python
Python 2.6.5 (release26-maint, May 25 2010, 12:37:06)
[GCC 4.3.4] on linux2
Type "help", "copyright", "credits" or "license" for more  
information.

sum = 0.0
for i in range(10):

... sum += 0.1
...

sum

0.99989


But thats looks a little bit wrong for me ... i must be a number  
greater
then 1.0 because 0.1 =  
0.155511151231257827021181583404541015625000

in python ... if i print it.


The simple fact of the matter is: floating point arithmetic isn't
accurate. This has nothing to do with Python, it's the fault of your
processor's floating point handling. It's good enough in most cases,  
but

you should never rely on a floating-point number to have exactly a
certain value. It won't work.


Yes, this might be a good time to review the dense but interesting
document, "What Every Computer Scientist Should Know About
Floating-Point Arithmetic":

http://docs.sun.com/source/806-3568/ncg_goldberg.html


Cheers
Philip





--
http://mail.python.org/mailman/listinfo/python-list


Re: Why Python forbids multiple instances of one module?

2010-07-07 Thread CHEN Guang
>> Why Python forbids multiple instances of one module?
>> If only Python allows multiple instances of one module, module will
>> be enough to replace class in most cases.
>> After all, it is much easier to write a module than a class, at least we do
>> not have to write self everywhere.

> If you really want to do that, it should be possible by deleting the
> entry from sys.modules and re-importing it.  You save yourself having
> to explicitly write self everywhere, but instead you have to declare
> all your "instance" variables as globals in each "method" that uses
> them, which isn't much less of a chore.  You also lose inheritance,
> properties (and descriptors in general), magic method support,
> metaclasses, and pretty much all the other nice features that
> new-style classes have to offer.
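
A minimal sketch of that re-import trick ('counter' is a hypothetical
module holding some module-level state):

import sys
import counter as first_instance

del sys.modules['counter']          # forget the cached module object
import counter as second_instance   # re-executes counter.py

assert first_instance is not second_instance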
 
Wow, it works! Thanks a lot for the good idea. 
It is cool to write, test, debug and maintain POP codes, while realizing the 
OOP power.
I think inheritance might be simulated with:
from parentModule import *
I really look forward to the day when operator overloading and new-style
class features find their way into modules.
Thanks again.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python -- floating point arithmetic

2010-07-07 Thread bart.c

david mainzer wrote:


sum = 0.0
for i in range(10):

... sum += 0.1
...

sum

0.99989




But thats looks a little bit wrong for me ... i must be a number
greater
then 1.0 because 0.1 =
0.155511151231257827021181583404541015625000
in python ... if i print it.

So i create an example program:

sum = 0.0
n = 10
d = 1.0 / n
print "%.60f" % ( d )
for i in range(n):
   print "%.60f" % ( sum )
   sum += d

print sum
print "%.60f" % ( sum )


-  RESULTs --
0.1555111512312578270211815834045410156250
0.
0.1555111512312578270211815834045410156250
0.20001110223024625156540423631668090820312500
0.3000444089209850062616169452667236328125
0.40002220446049250313080847263336181640625000
0.5000
0.59997779553950749686919152736663818359375000
0.6999555910790149937383830547332763671875
0.79993338661852249060757458209991455078125000
0.8999111821580299874767661094665527343750
1.0
0.99988897769753748434595763683319091796875000

and the jump from 0.50*** to 0.5999* looks wrong
for me ... do i a mistake or is there something wrong in the
representation of the floating points in python?


I think the main problem is, as sum gets bigger, the less significant bits 
of the 0.1 representation fall off the end (enough to make it effectively 
just under 0.1 you're adding instead of just over).



can anybody tell me how python internal represent a float number??


Try "google ieee floating point". The problems aren't specific to Python.

--
Bartc 


--
http://mail.python.org/mailman/listinfo/python-list


Re: C interpreter in Lisp/scheme/python

2010-07-07 Thread Michele Simionato
On Jun 14, 1:07 am, bolega  wrote:
> I am trying to compare LISP/Scheme/Python for their expressiveness.
>
> For this, I propose a vanilla C interpreter. I have seen a book which
> writes C interpreter in C.
>
> The criteria would be the small size and high readability of the code.
>
> Are there already answers anywhere ?
>
> How would a gury approach such a project ?
>
> Bolega

This look like a huge project for an evaluation of expressiveness
which result is obvious. Lisp (including Scheme) is more expressive
than Python, for many definitions of expressiveness (see for instance
http://www.ccs.neu.edu/scheme/pubs/scp91-felleisen.ps.gz if you like
academic papers). However, who cares? What matters in everyday life
are other things, like the availability of libraries, tools, ease of
maintenance, etc.

In your proposed project the choice of the parsing library would
matter a lot. Writing languages is a domain where Lisp is
traditionally strong, so you may find good libraries to help you with
the task. My guess is that it would take more or less the same amount
of effort both in Python and in Lisp. The place where Lisp has an
advantage is writing an *embedded* language: then thanks to macros you
could write a *compiled* sublanguage. Doing the same in Python is
essentially impossible.

  Michele Simionato
-- 
http://mail.python.org/mailman/listinfo/python-list


Error message repetition

2010-07-07 Thread Tambet
Hello!

I have such problem that:

   - My console shows maximally x last lines, then truncates
   - Error message takes 2 line
   - In case of very big stack trace, there will be 2*x error lines
   - In such case I do not see any debug output

In this case, it's about recursion:

  File "b2.py", line 124, in seek_solution
solution = self.seek_solution(weight + U.gravity, len(test), test)
  File "b2.py", line 124, in seek_solution
solution = self.seek_solution(weight + U.gravity, len(test), test)
  File "b2.py", line 124, in seek_solution
solution = self.seek_solution(weight + U.gravity, len(test), test)
  File "b2.py", line 124, in seek_solution
solution = self.seek_solution(weight + U.gravity, len(test), test)
  File "b2.py", line 124, in seek_solution
solution = self.seek_solution(weight + U.gravity, len(test), test)
  File "b2.py", line 124, in seek_solution
solution = self.seek_solution(weight + U.gravity, len(test), test)
and so on...

I think it should be instead:
  File "b2.py", line 124, in seek_solution [*repeated x times*]
solution = self.seek_solution(weight + U.gravity, len(test), test)

Getting a big stack trace is most probable with functions calling themselves
- thus, long stack traces usually contain such repetitions.

As those functions might not call themselves directly, one cycle of
recursion might become four lines long, in such case:

Stack item 1:  File "b2.py", line 124, in seek_solution
solution = self.seek_solution(weight + U.gravity, len(test), test)
Stack item 2:  File "b2.py", line 124, in seek_solution
solution = self.seek_solution(weight + U.gravity, len(test), test)
Stack item repetitions: [#1, #2] * x

This could simply enumerate stack items and then create formulas of
repetitions, like:
[[#1, #2] * 15, #3 * 15] * 3

If it shows each message written-out at least one and max two or three
times, but over that gives it an id and shows patterns of those instead, it
will be a lot better. I have, sometimes, gone through some four pages of
Java stack traces etc., but I think that having such long array of errors
does not make it more readable or simple - if you can see everything in one
page, then it's good enough for pondering. And scrolling up and looking at
output would be a nice feature ;) Especially now, as I am going to raise
recursion limit - this program would be perfectly possible without relying
on built-in loop constructions so much, but I did it yesterday in such
manner and it simply raised into unmanageable complexity. Thus, now I am
trying to encapsulate it's logic into pieces, which maximize the use of
Pythons overloads and function-based iterators etc., but this makes error
messages that long when I do something wrong - and I can't even check if
that was really some mistake in code or just the recursion actually needs to
be deeper than I hoped.

Tambet
-- 
http://mail.python.org/mailman/listinfo/python-list


tarfile and progress information

2010-07-07 Thread Nathan Huesken
Hi,

I am packing large files with tarfile. Is there any way I can get
progress information while packing?

Thanks!
Nathan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The real problem with Python 3 - no business case for conversion (was "I strongly dislike Python 3")

2010-07-07 Thread Kevin Walzer

On 7/2/10 3:07 PM, John Nagle wrote:



That's the real issue, not parentheses on the "print" statement.
Where's the business case for moving to Python 3? It's not faster.
It doesn't do anything you can't do in Python 2.6. There's no
"killer app" for it. End of life for Python 2.x is many years away;
most server Linux distros aren't even shipping with 2.6 yet. How can a
business justify spending money on conversion to Python 3?


That's decision for each business to make. My guess is that many 
businesses won't upgrade for some time, until the major 
libraries/modules support Python 3. I don't plan to move to Python 3 for 
at least a couple of years.


Python 3 is a nice cleanup of some legacy syntax issues. But
that's just not enough. Perl 6 is a nice cleanup of Perl 5, and
look how that went. Ten years on, it's not even mainstream, let
alone dominant.


The Perl analogy isn't really useful here. Perl 6 is somewhere between 
the HURD and Duke Nukem Forever in terms of being viable. Even the Perl 
website says, "If you are looking for production ready code please use 
Perl 5." That's one reason why Perl 5 development has recently undergone 
a resurgence.


Python 3, by contrast, is production-ready in itself; libraries are 
gradually moving to support it, and Python 2 has a definite end-of-life 
release in 2.7, with an extended maintenance period for 2.7. The Python 
developers are providing a much stronger and clearer path forward for 
Python 3. The transition period may last five years, but the path is clear.


As a Mac developer, I'm sympathetic to your frustration. A few years ago 
Apple deprecated one of its major API's (Carbon), on which my own 
development depended, and there was a lot of uncertainty about major 
libraries that use Carbon being updated. This is normal in any 
transition period. Eventually, the major libraries I depend on were 
updated by their developers (i.e. ported to the Cocoa API), I was able 
to migrate my own applications to the updated libraries, and life went on.


I think the same thing will happen with Python. It's useful to note the 
libraries that are not yet ported to support Python 3, and to share best 
practices for moving forward. Past a certain point, however, I don't see 
much point in attacking the existence of Python 3 or questioning the 
need to move toward Python 3. It's here, it's the way forward, and 
that's not going to change. Might as well accept it.


--Kevin


--
Kevin Walzer
Code by Kevin
http://www.codebykevin.com
--
http://mail.python.org/mailman/listinfo/python-list


Re: Error message repetition

2010-07-07 Thread Thomas Jollans
On 07/07/2010 05:10 PM, Tambet wrote:
> Hello!
> 
> I have such problem that:
> 
> * My console shows maximally x last lines, then truncates
> * Error message takes 2 line
> * In case of very big stack trace, there will be 2*x error lines
> * In such case I do not see any debug output
> 
> In this case, it's about recursion:
> 
>   File "b2.py", line 124, in seek_solution
> solution = self.seek_solution(weight + U.gravity, len(test), test)
>   File "b2.py", line 124, in seek_solution
> solution = self.seek_solution(weight + U.gravity, len(test), test)
>   File "b2.py", line 124, in seek_solution
> solution = self.seek_solution(weight + U.gravity, len(test), test)
>   File "b2.py", line 124, in seek_solution
> solution = self.seek_solution(weight + U.gravity, len(test), test)
>   File "b2.py", line 124, in seek_solution
> solution = self.seek_solution(weight + U.gravity, len(test), test)
>   File "b2.py", line 124, in seek_solution
> solution = self.seek_solution(weight + U.gravity, len(test), test)
> and so on...

Depending on how far this goes up, you might just be able to change the
backlog your terminal emulator saves? that would allow you to scroll up.
If you can't do that, you should get a proper console.

Anyway, if you want to customise the traceback output, you can!
simply replace sys.excepthook with your own version.

http://docs.python.org/dev/py3k/library/sys.html#sys.excepthook
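
For example, a rough sketch (not a polished solution) of an excepthook
that collapses consecutive runs of identical frames:

import sys
import traceback

def compact_excepthook(exc_type, exc_value, tb):
    entries = traceback.extract_tb(tb)   # (file, line, func, text) entries
    out, last, count = [], None, 0
    for entry in entries:
        if entry == last:
            count += 1
            continue
        if count > 1:
            out.append('  [repeated %d more times]\n' % (count - 1))
        out.extend(traceback.format_list([entry]))
        last, count = entry, 1
    if count > 1:
        out.append('  [repeated %d more times]\n' % (count - 1))
    sys.stderr.write('Traceback (most recent call last):\n')
    sys.stderr.writelines(out)
    sys.stderr.writelines(traceback.format_exception_only(exc_type, exc_value))

sys.excepthook = compact_excepthook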

> 
> I think it should be instead:
>   File "b2.py", line 124, in seek_solution [*repeated x times*]
> solution = self.seek_solution(weight + U.gravity, len(test), test)
> 
> Getting big strack trace is most probable with functions calling
> themselves - thus, long stack traces usually contain such repetitions.
> 
> As those functions might not call themselves directly, one cycle of
> recursion might become four lines long, in such case:
> 
> Stack item 1:  File "b2.py", line 124, in seek_solution
> solution = self.seek_solution(weight + U.gravity, len(test), test)
> Stack item 2:  File "b2.py", line 124, in seek_solution
> solution = self.seek_solution(weight + U.gravity, len(test), test)
> Stack item repetitions: [#1, #2] * x
> 
> This could simply enumerate stack items and then create formulas of
> repetitions, like:
> [[#1, #2] * 15, #3 * 15] * 3
> 
> If it shows each message written-out at least one and max two or three
> times, but over that gives it an id and shows patterns of those instead,
> it will be a lot better. I have, sometimes, gone through some four pages
> of Java stack traces etc., but I think that having such long array of
> errors does not make it more readable or simple - if you can see
> everything in one page, then it's good enough for pondering. And
> scrolling up and looking at output would be a nice feature ;) Especially
> now, as I am going to raise recursion limit - this program would be
> perfectly possible without relying on built-in loop constructions so
> much, but I did it yesterday in such manner and it simply raised into
> unmanageable complexity. Thus, now I am trying to encapsulate it's logic
> into pieces, which maximize the use of Pythons overloads and
> function-based iterators etc., but this makes error messages that long
> when I do something wrong - and I can't even check if that was really
> some mistake in code or just the recursion actually needs to be deeper
> than I hoped.
> 
> Tambet
> 

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python as a scripting language. Alternative to bash script?

2010-07-07 Thread Michael Torrie
On 07/06/2010 09:34 PM, Chris Rebert wrote:
> On Tue, Jul 6, 2010 at 6:40 AM, Michael Torrie  wrote:
>> While it's possible to set up pipes and spawn programs in parallel to
>> operate on the pipes, in practice it's simpler to tell subprocess.Popen
>> to use a shell and then just rely on Bash's very nice syntax for setting
>> up the pipeline.
> 
> Until there's a Python variable involved that is, unless you want to
> overlook all the edge cases or do the escaping all by yourself (and
> then pray you did it right).

Very good point.  This is a problem that the pipes module suffers from
as well.

Although we learned in the other thread on escaping SQL statements that
escaping is faster, easier and just as safe as other parameterization
mechanisms.  Uh huh.

Back on target, a library similar to pipes that was safe (pipes is not)
and had a pythonic syntax would be cool.  pipes module works alright,
syntax wise, but it's not a clean syntax.
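
For what it's worth, a minimal sketch of a shell-free two-stage pipeline
with subprocess, where the Python variable never needs quoting or
escaping (the commands are only for illustration):

import subprocess

pattern = "some user-supplied text; no escaping needed"
p1 = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["grep", pattern], stdin=p1.stdout,
                      stdout=subprocess.PIPE)
p1.stdout.close()            # so p1 gets SIGPIPE if p2 exits early
output = p2.communicate()[0]
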
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Error message repetition

2010-07-07 Thread Tambet
> Depending on how far this goes up, you might just be able to change the
> backlog your terminal emulator saves? that would allow you to scroll up.
> If you can't do that, you should get a proper console.
>

I use bash, which allows to do that. This was rather a case example -
actually this output gets pretty annoying anyway and should contain only
most important information at any given moment. Scrollbar getting too small
is another example of the same case ...the case that if something outputs
2000 lines of repetition, it's not very much of use. This could be that
someone does not understand the cryptic output, but I think that anyone
having that long traceback is probably intelligent enough to google for
"Python traceback what those cryptic things mean" ;) I think that this
functionality should be triggered only by traceback bigger than, say, 20
lines or so. Also, this is not the point, where processor usage costs
anything.

Anyway, if you want to customise the traceback output, you can!
> simply replace sys.excepthook with your own version.
>
> http://docs.python.org/dev/py3k/library/sys.html#sys.excepthook
>

Ok, I should have thought of that - anyway, such thing should be standard
functionality. I personally love nonverbose output - if it can't say it in 1
A4, it should just tell me that it could if I asked. I mean, normally I like
to have one sight over the traceback and another over program itself.

Thanks for that hook ...I will use it in case I get four of five more
similar errors today ;)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python -- floating point arithmetic

2010-07-07 Thread Nobody
On Wed, 07 Jul 2010 15:08:07 +0200, Thomas Jollans wrote:

> you should never rely on a floating-point number to have exactly a
> certain value.

"Never" is an overstatement. There are situations where you can rely
upon a floating-point number having exactly a certain value.

First, floating-point values are exact. They may be an approximation
to some other value, but they are still exact values, not some kind of
indeterminate quantum state. Specifically, a floating-point value is a
rational number whose denominator is a power of two.

Second, if the result of performing a primitive arithmetic operation
(addition, subtraction, multiplication, division, remainder) or the
square-root function on the equivalent rational values is exactly
representable as a floating-point number, then the result will be exactly
that value.

Third, if the result of performing a primitive arithmetic operation or the
square-root function on the equivalent rational values *isn't* exactly
representable as a floating-point number, then the floating-point result
will be obtained by rounding the exact value according to the FPU's
current rounding mode.
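
A few concrete checks of the second and third points (assuming standard
IEEE 754 doubles, as in CPython):

assert 0.5 + 0.25 == 0.75        # operands and result all representable: exact
assert 0.1 + 0.2 != 0.3          # nothing here is exactly representable: both sides rounded
assert 2.0**53 + 1.0 == 2.0**53  # exact sum not representable; rounds back down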

All of this is entirely deterministic, and follows relatively simple
rules. Even if the CPU has a built-in random number generator, it will
*not* be used to generate the least-significant bits of a floating-point
arithmetic result.

The second and third cases above assume that floating-point arithmetic
follows IEEE-754 (the second case is likely to be true even for systems
which don't strictly adhere to IEEE-754). This is true for most modern
architectures, provided that:

1. You aren't using Borland C, which forcibly "optimises" x/y to x*(1/y),
so 12/3 isn't exactly equal to 4, as 1/3 isn't exactly representable. Real
compilers won't use this sort of approximation unless specifically
instructed (e.g. -funsafe-math-optimizations for gcc).

2. You aren't using one of the early Pentium chips.

In spite of this, there are some "gotcha"s. E.g. on x86, results are
computed to 80-bit (long double) precision internally. These will be
truncated to 64 bits if stored in memory. Whether the truncation occurs is
largely up to the compiler, although it can be forced with -ffloat-store
with gcc.

More complex functions (trigonometric, etc) are only accurate to within a
given relative error (e.g. +/- the value of the least significant bit), as
it isn't always possible to determine the correct value for the least
significant bit for a given rounding mode (and even if it is theoretically
possible, there is no limit to the number of bits of precision which would
be required).


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: C interpreter in Lisp/scheme/python

2010-07-07 Thread Rivka Miller
On Jun 13, 4:07 pm, bolega  wrote:
> I am trying to compare LISP/Scheme/Python for their expressiveness.
>
> For this, I propose a vanilla C interpreter. I have seen a book which
> writes C interpreter in C.
>
> The criteria would be the small size and high readability of the code.
>
> Are there already answers anywhere ?
>
> How would a gury approach such a project ?
>
> Bolega

You should probably narrow down your project to one. For example,
write a LISt Processor Meta Circular Evaluator in C.

You can take Paul Graham's rendition as a start and forget about
garbage collection.

Start with getchar()/putchar() for I/O.

Although C comes with a regex library, you probably do not need a
regex or parser at all for this. This is the beauty of LISP which is
why McCarthy was able to bypass the several man years of effort
involved in FORmula TRANslator. Even as a young boy like L. Peter
Deutsch was able to write it in assembly for one of the PDP's.

You will have go implement an associative array or a symbol-value
table probably as a stack or linked list. You will have to decide how
you implement the trees, as cons cells or some other method. Dynamic
scoping is easy to implement and that is what elisp has. I am not
aware of any book that provides implementation of LISP in C and
explains it at the same time.

This is the extent of help I can provide, someone else can probably
help you more.

Anyone know what the first initial of L. Peter Deutsch stands for?

Rivka

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The real problem with Python 3 - no business case for conversion (was "I strongly dislike Python 3")

2010-07-07 Thread Roy Smith
In article <5325a$4c349b5b$4275d90a$27...@fuse.net>,
 Kevin Walzer  wrote:

> That's decision for each business to make. My guess is that many 
> businesses won't upgrade for some time, until the major 
> libraries/modules support Python 3. I don't plan to move to Python 3 for 
> at least a couple of years.

It takes a long time for big businesses to upgrade.  It's not like me or 
you.  I just download the latest and greatest, run the installer, and 
I'm good to go.

A big company has to install it in a test lab, certify it, get approval 
from IT, log a change request, etc.

You need to get approval from your manager, your director, your VP, and 
so on up the management chain until you finally reach somebody who has 
no clue what's going on and either sits on the request or denies it out 
of ignorance.  Or, more likely, you just hit some middle-management 
layer where the guy doesn't have the authority to approve it himself, 
and isn't willing to expend the political capital it would take to get 
approval from the next layer up.

Somebody might decide they don't want to disturb any existing production 
systems (not a bad idea, really), so you need to order new hardware for 
it.  Even if you can get capital approval for that, it mushrooms into 
finding rack space, and the UPS is already oversubscribed, and so is the 
cooling, and there's no available network ports, and so on.  Suddenly, 
downloading some free software has become a 5-figure project.

Big businesses have lots of ways to ensure that no progress is ever 
made.  If you think any of the above is made up, you've never worked for 
a big company.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: C interpreter in Lisp/scheme/python

2010-07-07 Thread wolfgang.riedel
On 20 June, 03:48, Tim Rentsch  wrote:
> nanothermite911fbibustards 
> writes:
>

> Asking whether Lisp is faster than C is like asking why it's
> colder in the mountains than it is in the summer.

original Karl Valentin would be 
but yours is in his sense.

Wolfgang
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python -- floating point arithmetic

2010-07-07 Thread Ethan Furman

Nobody wrote:

On Wed, 07 Jul 2010 15:08:07 +0200, Thomas Jollans wrote:


you should never rely on a floating-point number to have exactly a
certain value.


"Never" is an overstatement. There are situations where you can rely
upon a floating-point number having exactly a certain value.


It's not much of an overstatement.  How many areas are there where you 
need the number 
0.155511151231257827021181583404541015625000?


If I'm looking for 0.1, I will *never* (except by accident ;) say

if var == 0.1:

it'll either be <= or >=.

By contrast, if I'm dealing with integers I can say if var == 4 because 
I *know* that there are values that var can hold that are *exactly* 4. 
Not 3.9817263 or 4.19726.


~Ethan~
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python 3 - Is PIL/wxPython/PyWin32 supported?

2010-07-07 Thread Giampaolo Rodolà
2010/7/7 durumdara :
> Hi!
>
> I have an environment under Python 2.6 (WinXP). That is based on PIL,
> wxPython/PyWin32.
>
> In the project's pages I see official installer for only PyWin32.
>
> I don't know that PIL or wxPython supports Python 3 or not. May with
> some trick these packages are working.
>
> Does anybody know about it?
> Can I replace my Py2.6 without lost PIL/wxPython?
>
> Thanks for your help:
>   dd
>
> --
> http://mail.python.org/mailman/listinfo/python-list

No, as of now you just can't.
Now that 2.7 is out and is officially declared as the last 2.x release
it's likely that there will be a lot more traction in porting such big
names to Python 3.


--- Giampaolo
http://code.google.com/p/pyftpdlib/
http://code.google.com/p/psutil/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python -- floating point arithmetic

2010-07-07 Thread Raymond Hettinger
On Jul 7, 5:55 am, Mark Dickinson  wrote:
> On Jul 7, 1:05 pm, david mainzer  wrote:
>
>
>
> > Dear Python-User,
>
> > today i create some slides about floating point arithmetic. I used an
> > example from
>
> >http://docs.python.org/tutorial/floatingpoint.html
>
> > so i start the python shell on my linux machine:
>
> > d...@maxwell $ python
> > Python 2.6.5 (release26-maint, May 25 2010, 12:37:06)
> > [GCC 4.3.4] on linux2
> > Type "help", "copyright", "credits" or "license" for more information.>>> 
> > >>> sum = 0.0
> > >>> >>> for i in range(10):
>
> > ...     sum += 0.1
> > ...>>> >>> sum
> > 0.99989
>
> > But thats looks a little bit wrong for me ... i must be a number greater
> > then 1.0 because 0.1 = 
> > 0.155511151231257827021181583404541015625000
> > in python ... if i print it.

[Mark Dickinson]
> So you've identified one source of error here, namely that 0.1 isn't
> exactly representable (and you're correct that the value stored
> internally is actually a little greater than 0.1).  But you're
> forgetting about the other source of error in your example: when you
> do 'sum += 0.1', the result typically isn't exactly representable, so
> there's another rounding step going on.  That rounding step might
> produce a number that's smaller than the actual exact sum, and if
> enough of your 'sum += 0.1' results are rounded down instead of up,
> that would easily explain why the total is still less than 1.0.

One key for understanding floating point mysteries is to look at the
actual binary sums rather that their approximate representation as a
decimal string.  The hex() method can make it easier to visualize
Mark's explanation:

>>> s = 0.0
>>> for i in range(10):
...     s += 0.1
...     print s.hex(), repr(s)
...

0x1.ap-4 0.10001
0x1.ap-3 0.20001
0x1.4p-2 0.30004
0x1.ap-2 0.40002
0x1.0p-1 0.5
0x1.3p-1 0.59998
0x1.6p-1 0.69996
0x1.9p-1 0.79993
0x1.cp-1 0.89991
0x1.fp-1 0.99989

Having used hex() to understand representation error (how the binary
partial sums are displayed), you can use the Fractions module to gain
a better understanding of rounding error introduced by each addition:

>>> from fractions import Fraction
>>> s = 0.0
>>> for i in range(10):
...     exact = Fraction.from_float(s) + Fraction.from_float(0.1)
...     s += 0.1
...     actual = Fraction.from_float(s)
...     error = actual - exact
...     print '%-35s%-35s\t%s' % (actual, exact, error)
...


3602879701896397/36028797018963968    3602879701896397/36028797018963968    0
3602879701896397/18014398509481984    3602879701896397/18014398509481984    0
1351079888211149/4503599627370496     10808639105689191/36028797018963968   1/36028797018963968
3602879701896397/9007199254740992     14411518807585589/36028797018963968   -1/36028797018963968
1/2                                   18014398509481985/36028797018963968   -1/36028797018963968
5404319552844595/9007199254740992     21617278211378381/36028797018963968   -1/36028797018963968
3152519739159347/4503599627370496     25220157913274777/36028797018963968   -1/36028797018963968
7205759403792793/9007199254740992     28823037615171173/36028797018963968   -1/36028797018963968
2026619832316723/2251799813685248     32425917317067569/36028797018963968   -1/36028797018963968
9007199254740991/9007199254740992     36028797018963965/36028797018963968   -1/36028797018963968

Hope this helps your slides,


Raymond
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python as a scripting language. Alternative to bash script?

2010-07-07 Thread Chris Rebert
On Wed, Jul 7, 2010 at 8:31 AM, Michael Torrie  wrote:
> On 07/06/2010 09:34 PM, Chris Rebert wrote:
>> On Tue, Jul 6, 2010 at 6:40 AM, Michael Torrie  wrote:
>>> While it's possible to set up pipes and spawn programs in parallel to
>>> operate on the pipes, in practice it's simpler to tell subprocess.Popen
>>> to use a shell and then just rely on Bash's very nice syntax for setting
>>> up the pipeline.
>>
>> Until there's a Python variable involved that is, unless you want to
>> overlook all the edge cases or do the escaping all by yourself (and
>> then pray you did it right).
>
> Very good point.  This is a problem that the pipes module suffers from
> as well.
>
> Although we learned in the other thread on escaping SQL statements that
> escaping is faster, easier and just as safe as other parameterization
> mechanisms.  Uh huh.
>
> Back on target, a library similar to pipes that was safe (pipes is not)
> and had a pythonic syntax would be cool.  pipes module works alright,
> syntax wise, but it's not a clean syntax.

Actually, your original post inspired me to take a crack at writing
something like that yesterday:
http://rebertia.com/code/subproc_pipelines.py

Thoughts anyone? (Written on a whim, so no tests or docs at present.)
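
For anyone following along, the plain-subprocess way to chain two
processes without going through a shell looks roughly like this (a
minimal sketch, not taken from the module linked above; the commands
are only examples):

from subprocess import Popen, PIPE

# equivalent of the shell pipeline:  dmesg | grep usb
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "usb"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()          # let p1 get SIGPIPE if p2 exits first
output = p2.communicate()[0]

Because no shell is involved, Python variables can be passed as plain
list elements with no quoting or escaping at all.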

Cheers,
Chris
--
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Is This Open To SQL Injection?

2010-07-07 Thread Victor Subervi
Hi;
I have this code:

sql = 'insert into personalDataKeys values (%s, %s, %s)' % (store, user,
', %s'.join('%s' * len(col_vals))
cursor.execute(sql, col_vals)

Is this open to injection attacks? If so, how correct?
TIA,
beno
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is This Open To SQL Injection?

2010-07-07 Thread Stephen Hansen
On 7/7/10 11:38 AM, Victor Subervi wrote:
> Hi;
> I have this code:
> 
> sql = 'insert into personalDataKeys values (%s, %s, %s)' % (store,
> user, ', %s'.join('%s' * len(col_vals))
> cursor.execute(sql, col_vals)

First, it's always best to be explicit with insert statements. Meaning,
don't rely on the underlying structure of a table, as in:

INSERT INTO YourRandomTable VALUES ("my", "value", "here");

Instead, do:

INSERT INTO YourRandomTable (field1, field2, field3) VALUES ("my",
"value", "here");

> Is this open to injection attacks? If so, how correct?

Secondly, I'd say: probably yes. Maybe. You're doing string formatting
to construct your SQL, which is where the trouble comes from. It's
possible to do safely, but takes exquisite care -- which is why we've
been nudging you away from it.

But I can't be a whole lot more specific because I can't for the life of
me figure out what you're actually doing with it.

I can't figure out why you're passing the store and user variables into
the SQL statement, instead of just passing them into the .execute as you
are supposed to. I.e., cursor.execute(sql, [store, user] + col_vals) or
something.

It looks like you're sort of trying to get one generic SQL statement
which can set some arbitrary number of random columns-- if so, why? I
can't picture just what this table layout is or what kind of data it
holds to advise better.
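
For illustration, a fully parameterized version of that kind of insert
might look like the following (a rough sketch only; the extra column
names are made up, all identifiers must come from the program rather
than from user input since column names cannot be passed as parameters,
and it assumes a MySQLdb-style cursor where the placeholder is %s):

col_names = ['height', 'weight']   # hypothetical: one column name per value in col_vals
cols = ['store', 'user'] + col_names
placeholders = ', '.join(['%s'] * len(cols))
sql = ('INSERT INTO personalDataKeys (%s) VALUES (%s)'
       % (', '.join(cols), placeholders))
cursor.execute(sql, [store, user] + col_vals)

Every value then goes through execute()'s parameter list, so the driver
does the quoting for you.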

-- 

   Stephen Hansen
   ... Also: Ixokai
   ... Mail: me+list/python (AT) ixokai (DOT) io
   ... Blog: http://meh.ixokai.io/



signature.asc
Description: OpenPGP digital signature
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread sturlamolden
On 7 Jul, 11:32, Jonathan Hartley  wrote:
> Also,
> this would solve the pain of Python developers attempting to
> redistribute py2exe versions of their programs (i.e. they have to own
> a Visual Studio license to legally be able to redistribute the
> required C runtime)

http://www.microsoft.com/downloads/details.aspx?FamilyID=9b2da534-3e03-4391-8a4d-074b9f2bc1bf&displaylang=en

If this is not sufficient, ask Microsoft for permission or buy a copy
of Visual Studio (any will do, you can rebuild Python).

> I don't understand enough to know why Visual
> Studio was chosen instead of MinGW. Can anyone shed any light on that
> decision?

It is the standard C and C++ compiler on Windows.


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread Martin v. Loewis
> Python 3.1.1, file [pymem.h]:
> 
> PyAPI_FUNC(void *) PyMem_Malloc(size_t);
> 
> #define PyMem_MALLOC(n)(((n) < 0 || (n) > PY_SSIZE_T_MAX) ? NULL \
> : malloc((n) ? (n) : 1))
> 
> The problem with the latter that it seems that it's intended for safety
> but does the opposite...

Why do you say that? It certainly *does* achieve safety, wrt. to certain
errors, specifically:
- passing sizes that are out-of-range
- supporting malloc(0) on all systems


> Perhaps (if it isn't intentional) this is a bug of the oversight type,
> that nobody remembered to update the macro?

Update in what way?

> Except for the problems with file descriptors I think a practical
> interim solution for extensions implemented in C could be to just link
> the runtime lib statically. For a minimal extension this increased the
> size from 8 KiB to 49 KiB. And generally with MS tools the size is
> acceptably small.

If you think that's fine for your extension module (which may well be
the case), go ahead. But then, you could also just link with a different
DLL version of the CRT instead.

> I think that this would be safe because since the C API has to access
> things in the interpreter I think it's a given that all the relevant
> functions delegate to shared library (DLL) implementations, but I have
> not checked the source code.

There are certainly more cases than the ones mentioned so far, in
particular the time zone and the locale. The CRT carries global
variables for these, so if you set them in the copy of the CRT that
Python links with, you won't see the change in your extension module -
which may or may not be a problem.

> As a more longterm solution, perhaps python.org could make available the
> redistributables for various MSVC versions, and then one could introduce
> some scheme for indicating the runtime lib dependencies of any given
> extension.

My preferred long-term solution is to reduce the usage of the C library
in CPython as much as reasonable, at least on Windows. Memory management
could directly use the heap functions (or even more directly
VirtualAlloc); filenos could be OS handles, and so on. There are
probably limitations to what you can achieve, but I think it's worth trying.

Regards,
Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread sturlamolden
On 7 Jul, 06:54, "Alf P. Steinbach /Usenet"  wrote:

> PyAPI_FUNC(void *) PyMem_Malloc(size_t);
>
> #define PyMem_MALLOC(n)         (((n) < 0 || (n) > PY_SSIZE_T_MAX) ? NULL \
>                                 : malloc((n) ? (n) : 1))

I was afraid of that :(



> Except for the problems with file descriptors I think a practical interim
> solution for extensions implemented in C could be to just link the runtime lib
> statically.

You still have two CRTs linked into the same process.



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread Martin v. Loewis
> I presume this problem would go away if future versions of Python
> itself were compiled on Windows with something like MinGW gcc. Also,
> this would solve the pain of Python developers attempting to
> redistribute py2exe versions of their programs (i.e. they have to own
> a Visual Studio license to legally be able to redistribute the
> required C runtime) I don't understand enough to know why Visual
> Studio was chosen instead of MinGW. Can anyone shed any light on that
> decision?

sturlamolden has already given the primary reason: Python,
traditionally, attempts to use and work with the system vendor's
compiler. On Windows, that's MSC. It's typically the one that best knows
about platform details that other compilers might be unaware of.

In addition, it's also the compiler and IDE that Windows developers (not
just Python core people, but also extension developers and embedders)
prefer to use, as it has quite good IDE support (in particular debugging
and code browsing).

Perhaps more importantly, none of the other compilers is really an
alternative. GCC in particular cannot build the Win32 extensions, since
it doesn't support the COM and ATL C++ features that they rely on (and
may not support other MSC extensions, either). So the Win32 extensions
must be built with VS, which means Python itself needs to use the same
compiler.

Likewise important: gcc/mingw is *not* a complete C compiler on Windows.
A complete C compiler would have to include a CRT (on Windows); mingw
doesn't (cygwin does, but I think you weren't proposing that Python be
built for cygwin - you can easily get cygwin Python anyway). Instead,
mingw relies on users having a CRT available to
them - and this will be a Microsoft one. So even if gcc was used, we
would have versioning issues with Microsoft CRTs, plus we would have to
rely on target systems including the right CRT, as we couldn't include
it in the distribution.

HTH,
Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is This Open To SQL Injection?

2010-07-07 Thread MRAB

Stephen Hansen wrote:

On 7/7/10 11:38 AM, Victor Subervi wrote:

Hi;
I have this code:

sql = 'insert into personalDataKeys values (%s, %s, %s)' % (store,
user, ', %s'.join('%s' * len(col_vals))
cursor.execute(sql, col_vals)


First, it's always best to be explicit with insert statements. Meaning,
don't rely on the underlying structure of a table, as in:

INSERT INTO YourRandomTable VALUES ("my", "value", "here");

Instead, do:

INSERT INTO YourRandomTable (field1, field2, field3) VALUES ("my",
"value", "here");


Is this open to injection attacks? If so, how correct?


Secondly, I'd say: probably yes. Maybe. You're doing string formatting
to construct your SQL, which is where the trouble comes from. It's
possible to do safely, but takes exquisite care -- which is why we've
been nudging you away from it.

But I can't be a whole lot more specific because I can't for the life of
me figure out what you're actually doing with it.

I can't figure out why you're passing the store and user variables into
the SQL statement, instead of just passing them into the .execute as you
are supposed to. I.e., cursor.execute(sql, [store, user] + col_vals) or
something.

It looks like you're sort of trying to get one generic SQL statement
which can set some arbitrary number of random columns-- if so, why? I
can't picture just what this table layout is or what kind of data it
holds to advise better.


Not only that, there's a missing ")" and the .join is wrong. For
example, if 'store' is "STORE", 'user' is "USER" and 'col_vals' has,
say, 3 members, then what you get is:

insert into personalDataKeys values (STORE, USER, %, %ss, %s%, %ss, %s%, %ss)

--
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread sturlamolden
On 7 Jul, 21:12, sturlamolden  wrote:

> > #define PyMem_MALLOC(n)         (((n) < 0 || (n) > PY_SSIZE_T_MAX) ? NULL \
> >                                 : malloc((n) ? (n) : 1))
>
> I was afraid of that :(

Also observe that this macro is very badly written (even illegal) C.
Consider what this would do:

PyMem_MALLOC(n++)

According to Linus Torvalds using macros like this is not even legal
C:

http://www.linuxfocus.org/common/src/January2004_linus.html

This would be ok, and safe as long as we use the GIL:

register Py_ssize_t __pymem_malloc_tmp;
#define PyMem_MALLOC(n) \
    (__pymem_malloc_tmp = n, (((__pymem_malloc_tmp) < 0 || (__pymem_malloc_tmp) > PY_SSIZE_T_MAX) ? NULL \
        : malloc((__pymem_malloc_tmp) ? (__pymem_malloc_tmp) : 1)))


An inline function is a better solution, but not ANSI C standard:

inline void *PyMem_MALLOC(Py_ssize_t n)
{
  return (((n) < 0 || (n) > PY_SSIZE_T_MAX) ? NULL
   : malloc((n) ? (n) : 1));
}






-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread Alf P. Steinbach /Usenet

* Martin v. Loewis, on 07.07.2010 21:10:

Python 3.1.1, file [pymem.h]:

PyAPI_FUNC(void *) PyMem_Malloc(size_t);

#define PyMem_MALLOC(n)    (((n) < 0 || (n) > PY_SSIZE_T_MAX) ? NULL \
 : malloc((n) ? (n) : 1))

The problem with the latter that it seems that it's intended for safety
but does the opposite...


Why do you say that? It certainly *does* achieve safety, wrt. to certain
errors, specifically:
- passing sizes that are out-of-range
- supporting malloc(0) on all systems


It uses malloc instead of PyMem_Malloc. malloc may well be different and use a 
different heap in an extension DLL than in the Python interpreter and other 
extensions. That's one thing that the docs (rightly) warn you about.




Perhaps (if it isn't intentional) this is a bug of the oversight type,
that nobody remembered to update the macro?


Update in what way?


I was guessing that at one time there was no PyMem_Malloc. And that it was 
introduced to fix Windows-specific problems, but inadvertently without updating 
the macro. It's just a guess as to reasons why the macro uses malloc directly.




Except for the problems with file descriptors I think a practical
interim solution for extensions implemented in C could be to just link
the runtime lib statically. For a minimal extension this increased the
size from 8 KiB to 49 KiB. And generally with MS tools the size is
acceptably small.


If you think that's fine for your extension module (which may well be
the case), go ahead.


I have no comment on that except pointing it out as a somewhat stupid, somewhat 
evil social inclusion/exclusion argument, talking to the audience. Argh. You're 
wasting my time. But anyway, 49 KiB is small by today's standards. For example, 
you get 20 of those in a single MiB, and about 20.000 in a single GiB.




But then, you could also just link with a different
DLL version of the CRT instead.


When I wrote "link the runtime lib statically" that was an alternative to the 
usual link-as-DLL.


It wouldn't make sense to link the runtime lib statically as an alternative to 
linking it statically.


As for linking to a different /version/ of the CRT, if you really mean that, I 
think that's difficult. It's not necessarily impossible, after all there's 
STLPort. But I think that it must at the very least be rather difficult to do 
with Microsoft's tools, for otherwise people would have employed that solution 
before, and so I wouldn't trust the result, and wouldn't waste the time trying.



Cheers,

- Alf

--
blog at http://alfps.wordpress.com>
--
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread Alf P. Steinbach /Usenet

* sturlamolden, on 07.07.2010 21:12:

On 7 Jul, 06:54, "Alf P. Steinbach /Usenet"  wrote:


PyAPI_FUNC(void *) PyMem_Malloc(size_t);

#define PyMem_MALLOC(n)    (((n) < 0 || (n) > PY_SSIZE_T_MAX) ? NULL \
 : malloc((n) ? (n) : 1))


I was afraid of that :(




Except for the problems with file descriptors I think a practical interim
solution for extensions implemented in C could be to just link the runtime lib
statically.


You still have two CRTs linked into the same process.


So?


Cheers,

- Alf

--
blog at http://alfps.wordpress.com>
--
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread sturlamolden
On 7 Jul, 21:41, "Alf P. Steinbach /Usenet"  wrote:

> > You still have two CRTs linked into the same process.
>
> So?

CRT resources cannot be shared across CRT borders. That is the
problem. Multiple CRTs are not a problem if CRT resources are never
shared.




-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread Martin v. Loewis

> Also observe that this macro is very badly written (even illegal) C.
> Consider what this would do:
> 
> PyMem_MALLOC(n++)
> 
> According to Linus Torvalds using macros like this is not even legal
> C:
> 
> http://www.linuxfocus.org/common/src/January2004_linus.html

[Please don't use "legal" wrt. programs - it's not "illegal" to violate
the language's rules; you don't go to jail when doing so. Linus said
"not allowed"]

You are misinterpreting that statement. Linus said that the isdigit
macro was non-conforming, *and meant that specifically for isdigit()*.
That's because the C standard says that isdigit is a function. Under
the as-if rule, you may implement it as a macro as long as nobody can
tell the difference. However, in the presented implementation, there
is a notable difference.

However, the C standard is silent wrt. to PyMem_MALLOC, and it certainly
allows the definition of macros which use the macro arguments more than
once.

> This would be ok, and safe as long as we use the GIL:

The macro is ok as it stands (minus the issues with multiple heaps).
The Python convention is that you clearly recognize PyMem_MALLOC as
a macro, so you should know not to pass parameters with side effects.

> register Py_ssize_t __pymem_malloc_tmp;
> #define PyMem_MALLOC(n)\
>  (__pymem_malloc_tmp = n, (((__pymem_malloc_tmp) < 0 ||
> (__pymem_malloc_tmp) > PY_SSIZE_T_MAX) ? NULL \
>  : malloc((__pymem_malloc_tmp) ?
> (__pymem_malloc_tmp) : 1)))

That would partially defeat the purpose, namely it would require the
compiler to put the size into a variable in memory, and possibly prevent
optimizations from taking place that rely on constant propagation
(depending on how smart the compiler is).

Regards,
Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread sturlamolden
On 7 Jul, 21:47, "Martin v. Loewis"  wrote:

> That would partially defeat the purpose, namely it would require the
> compiler to put the size into a variable in memory, and possibly prevent
> optimizations from taking place that rely on constant propagation
> (depending on how smart the compiler is).

Also after reading carefully what Linus said, it would still be
incorrect if n is a complex expression. So, an inline function is the
"correct" one here.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread Martin v. Loewis
>>> Perhaps (if it isn't intentional) this is a bug of the oversight type,
>>> that nobody remembered to update the macro?
>>
>> Update in what way?
> 
> I was guessing that at one time there was no PyMem_Malloc. And that it
> was introduced to fix Windows-specific problems, but inadvertently
> without updating the macro. It's just a guess as to reasons why the
> macro uses malloc directly.

It might indeed be that the function version was introduced specifically
for Windows. However, the macro was left intentionally: both for
backwards compatibility, and for use inside Python itself.

>>> Except for the problems with file descriptors I think a practical
>>> interim solution for extensions implemented in C could be to just link
>>> the runtime lib statically.
[...]
> 
> When I wrote "link the runtime lib statically" that was an alternative
> to the usual link-as-DLL.

Ok, I lost the thread. When you said "a practical interim solution"
you were talking about what problem? I thought the discussion was
about the need to link with the same DLL version as Python.

> It wouldn't make sense to link the runtime lib statically as an
> alternative to linking it statically.

However, it would surely make sense to link with a different DLL than
the one that Python links with, assuming that would actually work.

> As for linking to a different /version/ of the CRT, if you really mean
> that, I think that's difficult. It's not necessarily impossible, after
> all there's STLPort. But I think that it must at the very least be
> rather difficult to do with Microsoft's tools, for otherwise people
> would have employed that solution before, and so I wouldn't trust the
> result, and wouldn't waste the time trying.

It's actually straight-forward (or used to be, until they came up with
the SxS madness). It was actually the case that people did so
unwittingly, and it seemed to work fine, except that it crashed when
passing FILE*. Then we started explaining that mixing CRTs is risky.

Regards,
Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread Alf P. Steinbach /Usenet

* sturlamolden, on 07.07.2010 21:46:

On 7 Jul, 21:41, "Alf P. Steinbach /Usenet"  wrote:


You still have two CRTs linked into the same process.


So?


CRT resources cannot be shared across CRT borders. That is the
problem. Multiple CRTs are not a problem if CRT resources are never
shared.


Yeah, but then we're down to file descriptors, C library locales and such as the 
remaining problems.


Cheers,

- Alf

--
blog at http://alfps.wordpress.com>
--
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread Alf P. Steinbach /Usenet

* Martin v. Loewis, on 07.07.2010 21:56:

Perhaps (if it isn't intentional) this is a bug of the oversight type,
that nobody remembered to update the macro?


Update in what way?


I was guessing that at one time there was no PyMem_Malloc. And that it
was introduced to fix Windows-specific problems, but inadvertently
without updating the macro. It's just a guess as to reasons why the
macro uses malloc directly.


It might indeed be that the function version was introduced specifically
for Windows. However, the macro was left intentionally: both for
backwards compatibility, and for use inside Python itself.


Except for the problems with file descriptors I think a practical
interim solution for extensions implemented in C could be to just link
the runtime lib statically.

[...]


When I wrote "link the runtime lib statically" that was an alternative
to the usual link-as-DLL.


Ok, I lost the thread. When you said "a practical interim solution"
you were talking about what problem? I thought the discussion was
about the need to link with the same DLL version as Python.


The main problem is that the required MSVC redistributables are not necessarily
present on the end user's system.




It wouldn't make sense to link the runtime lib statically as an
alternative to linking it statically.


However, it would surely make sense to link with a different DLL than
the one that Python links with, assuming that would actually work.


As for linking to a different /version/ of the CRT, if you really mean
that, I think that's difficult. It's not necessarily impossible, after
all there's STLPort. But I think that it must at the very least be
rather difficult to do with Microsoft's tools, for otherwise people
would have employed that solution before, and so I wouldn't trust the
result, and wouldn't waste the time trying.


It's actually straight-forward (or used to be, until they came up with
the SxS madness). It was actually the case that people did so
unexpectingly, and it seemed to work fine, except that it crashed when
passing FILE*. Then we started explaining that mixing CRTs is risky.


Oh.

Well then. :-)


Cheers,

- Alf

--
blog at http://alfps.wordpress.com>
--
http://mail.python.org/mailman/listinfo/python-list


Re: Is This Open To SQL Injection?

2010-07-07 Thread Ian

On 07/07/2010 19:38, Victor Subervi wrote:

Hi;
I have this code:

sql = 'insert into personalDataKeys values (%s, %s, %s)' % (store, 
user, ', %s'.join('%s' * len(col_vals))

cursor.execute(sql, col_vals)

Is this open to injection attacks? If so, how correct?
TIA,
beno

Yes, it is trivially open to injection attacks.

What would happen if someone enters the next line into one of your col_vals

x,y);DROP DATABASE personalDataKeys; ha ha

Your SQL statement would be closed early by the semicolon, and the DROP
DATABASE personalDataKeys is then executed and would cause some unexpected
data loss.


Things could be more serious - DROP DATABASE mysql;  for a mysql 
installation for example.


You must always always every time and without any exceptions 
what-so-ever, put all and every piece of data that comes from outside 
the program through the appropriate routine to make whatever has been 
entered into storable data and not part of the sql statement.


In php this is mysql_real_escape_string().  In your favourite language 
there will be an equivalent.


If you miss just one occurrence it's like leaving the side window 
unlocked! Someone will get in one day.


Regards

Ian

p.s. Did I mention that there are no exceptions to the "sanitise every 
piece of data" rule?


--
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread sturlamolden
On 7 Jul, 21:47, "Martin v. Loewis"  wrote:

> However, the C standard is silent wrt. to PyMem_MALLOC, and it certainly
> allows the definition of macros which use the macro arguments more than
> once.

Ok, I knew there was something odd here. PyMem_Malloc is indeed a
function, whilst PyMem_MALLOC is a deprecated macro.

:)






-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread Grant Edwards
On 2010-07-07, Martin v. Loewis  wrote:
>
>> Also observe that this macro is very badly written (even illegal) C.
>> Consider what this would do:
>> 
>> PyMem_MALLOC(n++)
>> 
>> According to Linus Thorvalds using macros like this is not even legal
>> C:
>> 
>> http://www.linuxfocus.org/common/src/January2004_linus.html
>
> [Please don't use "legal" wrt. programs - it's not "illegal" to violate
> the language's rules; you don't go to jail when doing so. Linus said
> "not allowed"]

Nonsense.

The world "illegal" doesn't mean "you'll go to jail".

"Legal" and "illegal" are used to indicate conformance or
nonconformace with respect to some set of rules -- be they a
programming language standard, FIFA football rules, or Formula 1
technical regulations.

It's perfectly standard usage to refer to an "illegal forward pass" in
American football, to "illegal tires" used during a race, or to an
"illegal operation" in a program.

-- 
Grant Edwards   grant.b.edwardsYow! I feel like I'm
  at   in a Toilet Bowl with a
  gmail.comthumbtack in my forehead!!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread Christian Heimes
> Yeah, but then we're down to file descriptors, C library locales and such as 
> the 
> remaining problems.

Don't forget errno! Every CRT might have its own errno thread local. I
don't know how its handled on Windows but I suspect it suffers from the
same problem.

Christian

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread sturlamolden
On 7 Jul, 22:26, Christian Heimes  wrote:

> Don't forget errno! Every CRT might have its own errno thread local. I
> don't know how its handled on Windows but I suspect it suffers from the
> same problem.

The Windows API "errno" is GetLastError. But a delinquent CRT might
map GetLastError() to other integers.



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread Christian Heimes
> The main problem is that the required MSVC redistributables are not necessarily
> present on the end user's system.

It's not a problem for Python anymore. It took a while to sort all
problems out. Martin and other developers have successfully figured out
how to install the CRT for system wide and local user installations. A
system wide installation installs the CRT in the side by side cache
(WinSxS). A local installation keeps the msvcrt90.dll and the
Microsoft.VC90.CRT.manifest next to the python.exe. Python extensions no
longer embed a manifest so they share the CRT from the python.exe process.

In order to ship a standalone exe you have to keep the CRT next to your
exe. This should work for py2exe binaries as well. At our company we
install our application stack entirely from subversion including Python
2.6.5, Sun JRE and lots of other stuff. This works perfectly fine for us
even for servers without the MSVCRT redistributable.

Christian

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The real problem with Python 3 - no business case for conversion (was "I strongly dislike Python 3")

2010-07-07 Thread Carl Banks
On Jul 7, 1:31 am, Paul McGuire  wrote:
> On Jul 6, 3:30 am, David Cournapeau  wrote:> On Tue, Jul 
> 6, 2010 at 4:30 AM, D'Arcy J.M. Cain  wrote:
>
> > One thing that would be very useful is how to maintain something that
> > works on 2.x and 3.x, but not limiting yourself to 2.6. Giving up
> > versions below 2.6 is out of the question for most projects with a
> > significant userbase IMHO. As such, the idea of running the python 3
> > warnings is not so useful IMHO - unless it could be made to work
> > better for python 2.x < 2.6, but I am not sure the idea even makes
> > sense.
>
> This is exactly how I felt about my support for pyparsing, that I was
> trying continue to provide support for 2.3 users, up through 3.x
> users, with a single code base.  (This would actually have been
> possible if I had been willing to introduce a performance penalty for
> Python 2 users, but performance is such a critical issue for parsing I
> couldn't justify it to myself.)  This meant that I had to constrain my
> implementation, while trying to incorporate forward-looking support
> features (such as __bool__ and __dir__), which have no effect on older
> Python versions, but support additions in newer Pythons.  I just
> couldn't get through on the python-dev list that I couldn't just
> upgrade my code to 2.6 and then use 2to3 to keep in step across the
> 2-3 chasm, as this would leave behind my faithful pre-2.6 users.
>
> Here are some of the methods I used:
>
> - No use of sets.  Instead I defined a very simple set simulation
> using dict keys, which could be interchanged with set for later
> versions.
>
> - No generator expressions, only list comprehensions.
>
> - No use of decorators.  BUT, pyparsing includes a decorator method,
> traceParseAction, which can be used by users with later Pythons as
> @traceParseAction in their own code.
>
> - No print statements.  As pyparsing is intended to be an internal
> module, it does no I/O as part of its function - it only processes a
> given string, and returns a data structure.
>
> - Python 2-3 compatible exception syntax.  This may have been my
> trickiest step.  The change of syntax for except from
>
>     except ExceptionType, ex:
>
> to:
>
>     except ExceptionType as ex:
>
> is completely forward and backward incompatible.  The workaround is to
> rewrite as:
>
>     except ExceptionType:
>         ex = sys.exc_info()[1]
>
> which works just fine in 2.x and 3.x.  However, there is a slight
> performance penalty in doing this, and pyparsing uses exceptions as
> part of its grammar success/failure signalling and backtracking; I've
> used this technique everywhere I can get away with it, but there is
> one critical spot where I can't use it, so I have to keep 2 code bases
> with slight differences between them.
>
> - Implement __bool__, followed by __nonzero__ = __bool__.  This will
> give you boolean support for your classes in 2.3-3.1.
>
> - Implement __dir__, which is unused by old Pythons, but supports
> customization of dir() output for your own classes.
>
> - Implement __len__, __contains__, __iter__ and __reversed__ for
> container classes.
>
> - No ternary expressions.  Not too difficult really, there are several
> well-known workarounds for this, either by careful use of and's and
> or's, or using the bool-as-int to return the value from
> (falseValue,trueValue)[condition].
>
> - Define a version-sensitive portion of your module, to define
> synonyms for constants that changed name between versions.  Something
> like:
>
>     _PY3K = sys.version_info[0] > 2
>     if _PY3K:
>         _MAX_INT = sys.maxsize
>         basestring = str
>         _str2dict = set
>         alphas = string.ascii_lowercase + string.ascii_uppercase
>     else:
>         _MAX_INT = sys.maxint
>         range = xrange
>         _str2dict = lambda strg : dict( [(c,0) for c in strg] )
>         alphas = string.lowercase + string.uppercase
>
> The main body of my code uses range throughout (for example), and with
> this definition I get the iterator behavior of xrange regardless of
> Python version.
>
> In the end I still have 2 source files, one for Py2 and one for Py3,
> but there is only a small and manageable number of differences between
> them, and I expect at some point I will move forward to supporting Py3
> as my primary target version.  But personally I think this overall
> Python 2-3 migration process is moving along at a decent rate, and I
> should be able to make my switchover in another 12-18 months.  But in
> the meantime, I am still able to support all versions of Python NOW,
> and I plan to continue doing so (albeit "support" for 2.x versions
> will eventually mean "continue to offer a frozen feature set, with
> minimal bug-fixing if any").
>
> I realize that pyparsing is a simple-minded module in comparison to
> others: it is pure Python, so it has no issues with C extensions; it
> does no I/O, so print-as-statement vs. print-as-function is not an
> issue; and it imports few other modules.

Fascinating interview by Richard Stallman on Russia TV

2010-07-07 Thread bolega
"Democracy is sick in the US, government monitors your Internet"
http://www.youtube.com/watch?v=2BfCJq_zIdk&feature=fvsr

Enjoy .


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The real problem with Python 3 - no business case for conversion (was "I strongly dislike Python 3")

2010-07-07 Thread Brendan Abel
> > > One thing that would be very useful is how to maintain something that
> > > works on 2.x and 3.x, but not limiting yourself to 2.6. Giving up
> > > versions below 2.6 is out of the question for most projects with a
> > > significant userbase IMHO. As such, the idea of running the python 3
> > > warnings is not so useful IMHO - unless it could be made to work
> > > better for python 2.x < 2.6, but I am not sure the idea even makes
> > > sense.

The entire fact that 3.x was *designed* to be incompatible should tell
you that supporting 2.x and 3.x with a single code base is a bad idea,
except for the very smallest of projects.  This is the point where a
project should fork and provide two different versions.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread Alf P. Steinbach /Usenet

* Christian Heimes, on 07.07.2010 22:47:

The main problem is that the required MSVC redistributables are not necessarily
present on the end user's system.


It's not a problem for Python anymore. It took a while to sort all
problems out. Martin and other developers have successfully figured out
how to install the CRT for system wide and local user installations. A
system wide installation installs the CRT in the side by side cache
(WinSxS). A local installation keeps the msvcrt90.dll and the
Microsoft.VC90.CRT.manifest next to the python.exe. Python extensions no
longer embed a manifest so they share the CRT from the python.exe process.

In order to ship a standalone exe you have to keep the CRT next to your
exe. This should work for py2exe binaries as well. At our company we
install our application stack entirely from subversion including Python
2.6.5, Sun JRE and lots of other stuff. This works perfectly fine for us
even for servers without the MSVCRT redistributable.


I think you're talking about a different problem. The CRT installed along with 
CPython works for extensions using the MSVC 9.0 CRT.


However developing an extension with MSVC 10 the extension will use the 10.0 
CRT, which is not necessarily present on the end user's system.


As I see it there are five solutions with different trade-offs:

  A Already having Visual Studio 2008 (MSVC 9.0), or coughing up the
money for an MSDN subscription, or visiting trade shows, so as to
obtain that compiler version.
-> Not an option for everybody.

  B Linking the CRT statically.
-> Increased size, problems with CRT state such as file descriptors.

  C Linking the CRT dynamically and bundling the MSVC redistributables
with the extension.
-> Even more increased size for the download, but smaller total
   footprint for extensions in sum; same CRT state problems.

  D Linking the CRT dynamically and providing an optional download and
install of the redistributables if they're not present. This would
best be done with some support from the Python installation machinery.
-> Small nice size for extensions, still same CRT state problems.

  E As D + a new compiler-independent native code interface that
does not carry dependencies on CRT state such as file descriptors, like JNI.
-> Really huge effort, and cannot be applied until some new Python version.

And I think the clue here is that the CRT state problems can be avoided by 
careful coding.


Hence, for those who cannot do A I think B is a realistic practical option, and 
D would be nice...



Cheers,

- Alf

--
blog at http://alfps.wordpress.com>
--
http://mail.python.org/mailman/listinfo/python-list


Re: Recommend a MySQLdb Forum

2010-07-07 Thread Tim Johnson
On 2010-07-07, Philip Semanchuk  wrote:
>
> On Jul 6, 2010, at 3:16 PM, Tim Johnson wrote:
>
>> Greetings:
>> I would appreciate it if some could recommend a MySQLdb forum.
>
> The one associated the sourceforge project seems like a good bet.
>
> 1) go here: http://sourceforge.net/projects/mysql-python/
> 2) click support
  Thanks Philip

-- 
Tim 
tim at johnsons-web.com or akwebsoft.com
http://www.akwebsoft.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: SMTPHandler and Unicode

2010-07-07 Thread norbert
> Well, you could use an approach like the one suggested here:
>
> http://plumberjack.blogspot.com/2010/07/using-custom-formatter-to-dea...

That's nice, thanks. I'll use something like this. Just a thought : I
will use "errors=replace" in the call to the encode method to be sure
that the logger does not raise any exception.
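
Something along these lines, presumably (a rough sketch of the linked
idea with 'replace' added; the class name and the choice of UTF-8 are
made up here, not taken from the blog post):

import logging

class EncodingFormatter(logging.Formatter):
    # Render the record as usual, then force the result to a UTF-8
    # bytestring so the SMTP handler never trips over non-ASCII text.
    def format(self, record):
        result = logging.Formatter.format(self, record)
        if isinstance(result, unicode):
            result = result.encode('utf-8', 'replace')
        return result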
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread Martin v. Loewis
Am 07.07.2010 22:35, schrieb sturlamolden:
> On 7 Jul, 22:26, Christian Heimes  wrote:
> 
>> Don't forget errno! Every CRT might have its own errno thread local. I
>> don't know how its handled on Windows but I suspect it suffers from the
>> same problem.
> 
> The Windows API "errno" is GetLastError. But a delinquent CRT might
> map GetLastError() to other integers.

Please check the source before posting. msvcrt defines errno as

_CRTIMP extern int * __cdecl _errno(void);
#define errno   (*_errno())

where _errno is (see dosmap.c)

int * __cdecl _errno(void)
{
_ptiddata ptd = _getptd_noexit();
if (!ptd) {
return &ErrnoNoMem;
} else {
return ( &ptd->_terrno );
}

}

where _getptd_noexit returns the CRT's per-thread data (see tidtable.c).

So it *is* a mapping to other integers, and, even though it's called
dosmap.c, it is maintained because of the (limited) POSIX support in the
CRT. In particular, there is a mapping between GetLastError values and
errno values that can't be implemented through simple defines
(e.g. both ERROR_FILE_NOT_FOUND and ERROR_PATH_NOT_FOUND map to ENOENT).
In addition, a standard C implementation can rely on only certain APIs
changing errno, which MS perhaps might not be able to guarantee for
GetLastError values in exactly the same manner.

So with the way the Windows API is defined, a C implementation has no
alternative but to be delinquent.

Regards,
Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread Alf P. Steinbach /Usenet

* Alf P. Steinbach /Usenet, on 07.07.2010 23:19:


However developing an extension with MSVC 10 the extension will use the
10.0 CRT, which is not necessarily present on the end user's system.

As I see it there are five solutions with different trade-offs:

A Already having Visual Studio 2008 (MSVC 9.0), or coughing up the
money for an MSDN subscription, or visiting trade shows, so as to
obtain that compiler version.
-> Not an option for everybody.

B Linking the CRT statically.
-> Increased size, problems with CRT state such as file descriptors.

C Linking the CRT dynamically and bundling the MSVC redistributables
with the extension.
-> Even more increased size for the download, but smaller total
footprint for extensions in sum; same CRT state problems.

D Linking the CRT dynamically and providing an optional download and
install of the redistributables if they're not present. This would
best be done with some support from the Python installation machinery.
-> Small nice size for extensions, still same CRT state problems.

E As D + a new compiler-independent native code interface that
does not carry dependencies on CRT state such as file descriptors, like
JNI.
-> Really huge effort, and cannot be applied until some new Python version.

And I think the clue here is that the CRT state problems can be avoided
by careful coding.

Hence, for those who cannot do A I think B is a realistic practical
option, and D would be nice...


Wait...

  F Possibly, as the docs say,

"Developer Studio will throw in a lot of import libraries that you do not really 
need, adding about 100K to your executable. To get rid of them, use the Project 
Settings dialog, Link tab, to specify ignore default libraries. Add the correct 
msvcrtxx.lib to the list of libraries."


Can anyone confirm whether this works in practice with MSVC 10?


Cheers,

- Alf

--
blog at http://alfps.wordpress.com>
--
http://mail.python.org/mailman/listinfo/python-list


Storing a callback function as a class member

2010-07-07 Thread Nathan Huesken
Hi,

I have a class, where I want to store a callback function as a member
to access later:

class CallbackClass:
def setCallback(self,cb):
self.cb = cb

def callCallback(self, para):
self.cb(para)

Doing so, I get the error:
callbackFunc() takes exactly 1 parameter (2 given)

self is given as parameter this way, is it not? How can this be done?

Thanks!
Nathan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread sturlamolden
On 7 Jul, 23:33, "Martin v. Loewis"  wrote:

> > The Windows API "errno" is GetLastError. But a delinquent CRT might
> > map GetLastError() to other integers.
>
> Please check the source before posting. msvcrt defines errno as

I don't have the source to msvcrt, at least not to my knowledge.




-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread Martin v. Loewis
Am 07.07.2010 23:49, schrieb sturlamolden:
> On 7 Jul, 23:33, "Martin v. Loewis"  wrote:
> 
>>> The Windows API "errno" is GetLastError. But a delinquent CRT might
>>> map GetLastError() to other integers.
>>
>> Please check the source before posting. msvcrt defines errno as
> 
> I don't have the source to msvcrt, at least not to my knowledge.

If you have Visual Studio, and opted to install the CRT sources,
you'll find them in VC/crt/src (or VC7/crt/src, depending on VS
version). I'm not 100% sure whether they are included in VS Express as well.

Regards,
Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The real problem with Python 3 - no business case for conversion (was "I strongly dislike Python 3")

2010-07-07 Thread MRAB

Brendan Abel wrote:

One thing that would be very useful is how to maintain something that
works on 2.x and 3.x, but not limiting yourself to 2.6. Giving up
versions below 2.6 is out of the question for most projects with a
significant userbase IMHO. As such, the idea of running the python 3
warnings is not so useful IMHO - unless it could be made to work
better for python 2.x < 2.6, but I am not sure the idea even makes
sense.


The entire fact that 3.x was *designed* to be incompatible should tell
you that supporting 2.x and 3.x with a single code base is a bad idea,
except for the very smallest of projects.  This is the point where a
project should fork and provide two different versions.


I wouldn't say that 3.x was designed to be incompatible. It was designed
to tidy the language, and the incompatibilities are an unfortunate
result.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread sturlamolden
On 7 Jul, 23:19, "Alf P. Steinbach /Usenet"  wrote:

>    D Linking the CRT dynamically and providing an optional download and
>      install of the redistributables if they're not present. This would
>      best be done with some support from the Python installation machinery.
>      -> Small nice size for extensions, still same CRT state problems.

This was a problem for py2exe'd apps before Python 2.6 (i.e. no public
download for Visual C++ 2003 runtime files.) But for Visual C++ 2008
and 2010, the CRTs can be downloaded from Microsoft and need not be
shipped with the application.

http://www.microsoft.com/downloads/details.aspx?familyid=9B2DA534-3E03-4391-8A4D-074B9F2BC1BF&displaylang=en
http://www.microsoft.com/downloads/details.aspx?familyid=BA9257CA-337F-4B40-8C14-157CFDFFEE4E&displaylang=en

http://www.microsoft.com/downloads/details.aspx?FamilyID=a7b7a05e-6de6-4d3a-a423-37bf0912db84&displaylang=en
http://www.microsoft.com/downloads/details.aspx?familyid=BD512D9E-43C8-4655-81BF-9350143D5867&displaylang=en







-- 
http://mail.python.org/mailman/listinfo/python-list


How to test/troubshoot an extension (pylibconfig)?

2010-07-07 Thread Grant Edwards
I'm trying to use python bindings for libconfig.  There appear to be
three very slightly different bindings:

   http://code.google.com/p/python-libconfig/
   http://wiki.github.com/cnangel/python-libconfig/
   http://github.com/azeey/python-libconfig/

I'm using the latter with libconfig 1.4.5

   http://www.hyperrealm.com/libconfig/

The python bindings appear to come with test cases, but I can't figure
out how to run them.  From reading the Python docs, it would appear
that this should do something useful, but it doesn't:

   $ python -m unittest pylibconfig
   
   ----------------------------------------------------------------------
   Ran 0 tests in 0.000s


   Trying to run the test script directly doesn't work either:

   $ python tests/test.py
   Traceback (most recent call last):
 File "tests/test.py", line 8, in 
   from x64.pylibconfig import Config
   ImportError: No module named x64.pylibconfig

Importing the module seems to be OK, but creating an instance barfs:

   Python 2.6.5 (release26-maint, Jun 22 2010, 12:58:11) 
   [GCC 4.3.4] on linux2
   Type "help", "copyright", "credits" or "license" for more information.
   
   >>> import pylibconfig
   
   >>> conf = pylibconfig.Config()
   *** glibc detected *** /usr/bin/python2.6: corrupted double-linked list: 
0x08065c48 ***
   
   
Where to go from here?
   
-- 
Grant Edwards   grant.b.edwardsYow! Do you like "TENDER
  at   VITTLES"?
  gmail.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Storing a callback function as a class member

2010-07-07 Thread MRAB

Nathan Huesken wrote:

Hi,

I have a class, where I want to store a callback function as a member
to access later:

class CallbackClass:
def setCallback(self,cb):
self.cb = cb

def callCallback(self, para):
self.cb(para)

Doing so, I get the error:
callbackFunc() takes exactly 1 parameter (2 given)

self is given as parameter this way, is it not? How can this be done?


Could you provide a short program which we could run to reproduce the
problem?
--
http://mail.python.org/mailman/listinfo/python-list


Re: The real problem with Python 3 - no business case for conversion

2010-07-07 Thread John Bokma
John Nagle  writes:

>Python 3 is a nice cleanup of some legacy syntax issues.  But
> that's just not enough.  Perl 6 is a nice cleanup of Perl 5,

Eh, I wouldn't call Perl 6 a "nice cleanup". It's much better to
consider it a new language with roots in Perl 5 (amongst others). Or
to quote from http://dev.perl.org/perl6/:

  "Perl 5 and Perl 6 are two languages in the Perl family, but of
   different lineages."

> and look how that went.  Ten years on, it's not even mainstream, let
> alone dominant.

I don't think that's the point of Perl 6 (if one can even say such a
thing, that is). Right now, (I) think of Perl 6 as a test bed for features
that couldn't be put in Perl 5 in an easy manner. Or (I) think of it as a
programming language lab.

My best guess is that with coming Christmas there will be a Perl 6
comparable to Python 3. But huge disclaimer: I hardly follow Perl 6
development.

-- 
John Bokma   j3b

Hacking & Hiking in Mexico -  http://johnbokma.com/
http://castleamber.com/ - Perl & Python Development
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to test/troubshoot an extension (pylibconfig)?

2010-07-07 Thread Grant Edwards
On 2010-07-07, Grant Edwards  wrote:

> I'm trying to use python bindings for libconfig.  There appear to be
> three very slightly different bindings:
>
>http://code.google.com/p/python-libconfig/
>http://wiki.github.com/cnangel/python-libconfig/
>http://github.com/azeey/python-libconfig/
>
> I'm using the latter with libconfig 1.4.5
>
>http://www.hyperrealm.com/libconfig/

[...]

> Importing the module seems to be OK, but creating an instance barfs:
>
>Python 2.6.5 (release26-maint, Jun 22 2010, 12:58:11) 
>[GCC 4.3.4] on linux2
>Type "help", "copyright", "credits" or "license" for more information.
>
>>>> import pylibconfig
>
>>>> conf = pylibconfig.Config()
>*** glibc detected *** /usr/bin/python2.6: corrupted double-linked list: 
> 0x08065c48 ***

Oops.  Those Python bindings are for version 1.3.2 of libconfig (which
does work).  They don't work with the current version of libconfig.  I
guess it's time to figure out how boost works...

-- 
Grant Edwards   grant.b.edwardsYow! ... I think I'd
  at   better go back to my DESK
  gmail.comand toy with a few common
   MISAPPREHENSIONS ...
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Storing a callback function as a class member

2010-07-07 Thread Rhodri James
On Wed, 07 Jul 2010 22:48:11 +0100, Nathan Huesken  
 wrote:



Hi,

I have a class, where I want to store a callback function as a member
to access later:

class CallbackClass:
def setCallback(self,cb):
self.cb = cb

def callCallback(self, para):
self.cb(para)

Doing so, I get the error:
callbackFunc() takes exactly 1 parameter (2 given)

self is given as parameter this way, is it not? How can this be done?


rho...@gnudebst:~$ python
Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.

>>> class CBClass:
...     def set_cb(self, cb):
...         self.cb = cb
...     def call_cb(self, para):
...         self.cb(para)
...
>>> def trivial(arg):
...     print arg
...
>>> c = CBClass()
>>> c.set_cb(trivial)
>>> c.call_cb("Hello, world")
Hello, world

Works for me.  Which version of Python are you using?

--
Rhodri James *-* Wildebeeste Herder to the Masses
--
http://mail.python.org/mailman/listinfo/python-list


Re: Storing a callback function as a class member

2010-07-07 Thread Emile van Sebille

On 7/7/2010 2:48 PM Nathan Huesken said...

class CallbackClass:
 def setCallback(self,cb):
 self.cb = cb

 def callCallback(self, para):
 self.cb(para)




You'll have to show how you're invoking this -- the following works for 
me (ie, I don't get an error):


class CallbackClass:
    def setCallback(self, cb):
        self.cb = cb
    def callCallback(self, para):
        self.cb(para)


a = CallbackClass()


def test(param): return 2*param


a.setCallback(test)

a.callCallback(3)
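
One guess at what might produce the reported error, since the original
call site isn't shown: if the callback ends up attached to the *class*
rather than set on the instance, it is wrapped as a method and self is
passed along as an extra argument (reusing the callbackFunc name from
the error message; the rest is made up):

def callbackFunc(para):
    print para

class CallbackClass:
    cb = callbackFunc              # class attribute, not set via setCallback
    def callCallback(self, para):
        self.cb(para)              # self.cb is a bound method here

CallbackClass().callCallback(3)
# TypeError: callbackFunc() takes exactly 1 argument (2 given)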



Emile

--
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread Jonathan Hartley
On Jul 7, 8:22 pm, "Martin v. Loewis"  wrote:
> > I presume this problem would go away if future versions of Python
> > itself were compiled on Windows with something like MinGW gcc. Also,
> > this would solve the pain of Python developers attempting to
> > redistribute py2exe versions of their programs (i.e. they have to own
> > a Visual Studio license to legally be able to redistribute the
> > required C runtime) I don't understand enough to know why Visual
> > Studio was chosen instead of MinGW. Can anyone shed any light on that
> > decision?
>
> sturlamolden has already given the primary reason: Python,
> traditionally, attempts to use and work with the system vendor's
> compiler. On Windows, that's MSC. It's typically the one that best knows
> about platform details that other compilers might be unaware of.
>
> In addition, it's also the compiler and IDE that Windows developers (not
> just Python core people, but also extension developers and embedders)
> prefer to use, as it has quite good IDE support (in particular debugging
> and code browsing).
>
> Perhaps more importantly, none of the other compilers is really an
> alternative. GCC in particular cannot build the Win32 extensions, since
> it doesn't support the COM and ATL C++ features that they rely on (and
> may not support other MSC extensions, either). So the Win32 extensions
> must be built with VS, which means Python itself needs to use the same
> compiler.
>
> Likewise important: gcc/mingw is *not* a complete C compiler on Windows.
> A complete C compiler would have to include a CRT (on Windows); mingw
> doesn't (cygwin does, but I think you weren't proposing that Python be
> built for cygwin - you can easily get cygwin Python anyway). Instead,
> mingw relies on users having a CRT available to
> them - and this will be a Microsoft one. So even if gcc was used, we
> would have versioning issues with Microsoft CRTs, plus we would have to
> rely on target systems including the right CRT, as we couldn't include
> it in the distribution.
>
> HTH,
> Martin


I see. Thanks very much to both of you for the info, much appreciated.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-07 Thread sturlamolden
On 8 Jul, 00:35, Jonathan Hartley  wrote:

> I see. Thanks very much to both of you for the info, much appreciated.

The problem you referred to for py2exe despaired with Python 2.6. For
Python 2.5, there was no public download option for msvcr71.dll and
msvcp71.dll. There was also the unsolved SxS issue. Thus a license for
Visual Studio 2003 was required to distribute py2exe apps for Python
2.5. That is now history. For py2exe apps using Python 2.6, 2.7 or
3.1, you can just ask your clients to install this:

http://www.microsoft.com/downloads/details.aspx?familyid=9B2DA534-3E03-4391-8A4D-074B9F2BC1BF&displaylang=en
http://www.microsoft.com/downloads/details.aspx?familyid=BA9257CA-337F-4B40-8C14-157CFDFFEE4E&displaylang=en

There are similar downloads for Visual C++ 2010 run-time files as
well. Python 3.2 will probably be built with Visual Studio 2010.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Argh! Name collision!

2010-07-07 Thread Rami Chowdhury
On Tuesday 06 July 2010 22:42:25 rantingrick wrote:
> On Jul 6, 9:11 pm, "Alf P. Steinbach /Usenet"  
> +use...@gmail.com> wrote:
> > "pyni"! Pronounced like "tiny"! Yay!
> 
> hmm, how's about an alternate spelling... "pyknee", or "pynee", or
> "pynie" ... considering those are not taken either?

Pynie's taken too -- it's the Python implementation on the Parrot VM. 


Rami Chowdhury
"As an online discussion grows longer, the probability of a comparison involving
Nazis or Hitler approaches one." -- Godwin's Law
+1-408-597-7068 / +44-7875-841-046 / +88-01819-245544
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python -- floating point arithmetic

2010-07-07 Thread Wolfram Hinderer
On 7 Jul., 19:32, Ethan Furman  wrote:
> Nobody wrote:
> > On Wed, 07 Jul 2010 15:08:07 +0200, Thomas Jollans wrote:
>
> >> you should never rely on a floating-point number to have exactly a
> >> certain value.
>
> > "Never" is an overstatement. There are situations where you can rely
> > upon a floating-point number having exactly a certain value.
>
> It's not much of an overstatement.  How many areas are there where you
> need the number
> 0.155511151231257827021181583404541015625000?
>
> If I'm looking for 0.1, I will *never* (except by accident ;) say
>
> if var == 0.1:
>
> it'll either be <= or >=.

The following is an implementation of a well-known algorithm.
Why would you want to replace the floating point comparisons? With
what?

(This is toy-code.)

#
from random import random

def min_cost_path(cost_right, cost_up):
    """ return minimal cost and its path in a rectangle - going up and
    right only """
    cost = dict()
    size_x, size_y = max(cost_right)

    #compute minimal cost
    cost[0, 0] = 0.0
    for x in range(size_x):
        cost[x + 1, 0] = cost[x, 0] + cost_right[x, 0]
    for y in range(size_y):
        cost[0, y + 1] = cost[0, y] + cost_up[0, y]
        for x in range(size_x):
            cost[x + 1, y + 1] = min(cost[x, y + 1] + cost_right[x, y + 1],
                                     cost[x + 1, y] + cost_up[x + 1, y])

    #compute path (reversed)
    x = size_x
    y = size_y
    path = []
    while x != 0 or y != 0:
        if x == 0:
            y -= 1
            path.append("u")
        elif y == 0:
            x -= 1
            path.append("r")
        elif cost[x - 1, y] + cost_right[x - 1, y] == cost[x, y]:  # fp compare
            x -= 1
            path.append("r")
        elif cost[x, y - 1] + cost_up[x, y - 1] == cost[x, y]:  # fp compare
            y -= 1
            path.append("u")
        else:
            raise ValueError

    return cost[size_x, size_y], "".join(reversed(path))


if __name__ == "__main__":
    size = 100
    cost_right = dict(((x, y), random()) for x in range(size)
                      for y in range(size))
    cost_up = dict(((x, y), random()) for x in range(size)
                   for y in range(size))
    print min_cost_path(cost_right, cost_up)
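
For what it's worth, the exact comparisons above are reliable because the
path reconstruction repeats the identical additions on identical operands
that produced cost[x, y] in the first place, and IEEE 754 arithmetic is
deterministic, so the results agree bit for bit. A tiny illustration,
separate from the algorithm:

a = 0.1 + 0.2
assert a == 0.1 + 0.2                            # same operation, same operands: always equal
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))    # False: different groupings round differently
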
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Argh! Name collision!

2010-07-07 Thread MRAB

Rami Chowdhury wrote:

On Tuesday 06 July 2010 22:42:25 rantingrick wrote:

On Jul 6, 9:11 pm, "Alf P. Steinbach /Usenet"  wrote:

"pyni"! Pronounced like "tiny"! Yay!

hmm, how's about an alternate spelling... "pyknee", or "pynee", or
"pynie" ... considering those are not taken either?


Pynie's taken too -- it's the Python implementation on the Parrot VM. 


"PyNatInt" gets no hits on Google.
--
http://mail.python.org/mailman/listinfo/python-list


Re: delegation pattern via descriptor

2010-07-07 Thread kedra marbun
On Jul 7, 2:46 am, Bruno Desthuilliers
 wrote:
> Gregory Ewing a écrit :
>
> > Bruno Desthuilliers wrote:
> >> kedra marbun a écrit :
>
> >>> if we limit our discussion to py:
> >>> why __{get|set|delete}__ don't receive the 'name' & 'class' from
> >>> __{getattribute|{set|del}attr}__
> >>> 'name' is the name that is searched
>
> >> While it would have been technically possible, I fail to imagine any use
> >> case for this.
>
> > I think he wants to have generic descriptors that are
> > shared between multiple attributes, but have them do
> > different things based on the attribute name.
>
> I already understood this, but thanks !-)
>
> What I dont understand is what problem it could solve that couldn't be
> solved more simply using the either _getattr__ hook or hand-coded
> delegation, since such a descriptor would be so tightly coupled to the
> host class that it just doesn't make sense writing a descriptor for this.

yeah, i can finally agree that a descriptor isn't supposed to be used
for delegation in general; that should be the job of __getattr__

however i still think passing the name would open up some other
possibilities of use (not necessarily on the basis of sharing a
descriptor), and surely some of them (all of them?) are bad. sure as
hell, my example is an example of bad use. for now, i conclude that
passing the name is too risky: it easily invites bad practices,
particularly against the Law of Demeter

thanks Bruno

btw, is there a common approach to make the interface of a class that
uses __getattr__ include the names that are delegated?

import types

class A:
    def do_this(self): ...

class B:
    a = A()

    def do_that(self): ...

    def __getattr__(self, name):
        try:
            return types.MethodType(getattr(self.a, name), self)
        except AttributeError:
            raise

how do i make dir(B) include 'do_this'? do i have to use __dir__? and
if i use __dir__, somehow help(B) doesn't work as usual; i haven't
checked pydoc.py yet ;)
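
For illustration, here is a minimal sketch of the __dir__ route. It assumes
new-style classes and Python 2.6+/3.x, where dir() consults __dir__, and it
simplifies the delegation to a plain getattr, so treat it as a sketch rather
than the canonical answer. It only covers dir() on instances; dir(B) on the
class object itself would need the same hook on a metaclass:

class A(object):
    def do_this(self):
        return "A.do_this"

class B(object):
    a = A()

    def do_that(self):
        return "B.do_that"

    def __getattr__(self, name):
        # simplified delegation: hand back the delegate's own attribute
        return getattr(self.a, name)

    def __dir__(self):
        # advertise delegated public names alongside the normal ones
        own = set(dir(type(self))) | set(self.__dict__)
        delegated = set(n for n in dir(self.a) if not n.startswith('_'))
        return sorted(own | delegated)

print('do_this' in dir(B()))   # True
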
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Getting pyparsing to backtrack

2010-07-07 Thread Cousin Stanley

> I'm working on street address parsing again, 
> and I'm trying to deal with some of the harder cases.
>  

  For yet another test case
  my actual address includes 

  ... East South Mountain Avenue


  Sometimes written as 

  ... E. South Mtn Ave


-- 
Stanley C. Kitching
Human Being
Phoenix, Arizona

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Argh! Name collision!

2010-07-07 Thread Alf P. Steinbach /Usenet

* rantingrick, on 07.07.2010 07:42:

On Jul 6, 9:11 pm, "Alf P. Steinbach /Usenet"  wrote:


"pyni"! Pronounced like "tiny"! Yay!


hmm, how's about an alternate spelling... "pyknee", or "pynee", or
"pynie" ... considering those are not taken either?


Hm, for pure shock value I think I'll use the acronym PYthon Native Interface 
Support.


pynis! :-)

A set of C++ classes to ease the writing of extensions.

Like,



// progrock.pynis  --  "Python Native Interface Support"
// A simple C++ framework for writing Python 3.x extensions.
//
// Copyright (C) Alf P. Steinbach, 2010.

#ifndef PYNIS_PTR_H
#define PYNIS_PTR_H
#include 


//- Dependencies:

#include <Python.h>
#include <cassert>
#include <algorithm>



//- Interface:

namespace progrock{ namespace pynis {

enum DoAddRef { doAddRef };

class Ptr
{
private:
PyObject*   p_;

public:
Ptr( PyObject* p = 0 ): p_( p )
{}

Ptr( PyObject* p, DoAddRef ): p_( p )
{
assert( p != 0 );
Py_INCREF( p_ );
}

Ptr( Ptr const& other ): p_( other.p_ )
{
Py_XINCREF( p_ );
}

~Ptr()
{
Py_XDECREF( p_ );
}

void swapWith( Ptr& other ) { std::swap( p_, other.p_ ); }
Ptr& operator=( Ptr other ) { swapWith( other ); return *this; }

PyObject* get() const   { return p_; }

PyObject* release()
{
PyObject* const result  = p_;
Py_XDECREF( p_ );
p_ = 0;
return result;
}
};

} }  // namespace progrock::pynis


#endif



Cheers,

- Alf (shocked)

PS: Darn, forgot to google it. But I think it's unlikely the name's already in 
use!

--
blog at http://alfps.wordpress.com>
--
http://mail.python.org/mailman/listinfo/python-list


Re: delegation pattern via descriptor

2010-07-07 Thread kedra marbun
On Jul 6, 12:11 pm, Steven D'Aprano  wrote:
> On Mon, 05 Jul 2010 21:12:47 -0700, kedra marbun wrote:
> > On Jul 5, 7:49 am, Gregory Ewing  wrote:
> >> kedra marbun wrote:
> >> > now, i'm asking another favor, what about the 2nd point in my 1st
> >> > post?
>
> >> Your original post has dropped off my newsscope, so you'll have to
> >> remind me what the 2nd point was.
>
> >> --
> >> Greg
>
> > it's like 'name', it's about info that i think should be passed to
> > descriptor's __{get|set|delete}__. i wonder what are the reasons for not
> > passing the class on which the descriptor is attached to, what pattern
> > is encouraged by this?
>
> Perhaps I'm missing the context, but since the descriptor is passed the
> instance, you can easily get the class with type(self) or self.__class__.
> There's no need to pass the class as a separate argument.
>
> --
> Steven

no, the class that i meant is the one that actually has the descriptor
in its __dict__, not instance.__class__

the class obj that you said is unnecessary as an arg is what __get__
already receives as the 3rd arg

class Desc:
    def __get__(*args): print(args)

class a:
    v0 = Desc()

class b(a): pass

b().v0  # (desc, b(), b)
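
fwiw, a descriptor can already recover the class that actually holds it by
walking the owner's MRO from inside __get__. A minimal sketch (new-style
classes here; this just illustrates the workaround, it is not a claim about
what the protocol passes):

class Desc(object):
    def __get__(self, instance, owner):
        # find the class whose __dict__ actually contains this descriptor,
        # rather than relying on an extra protocol argument
        for klass in owner.__mro__:
            if any(v is self for v in vars(klass).values()):
                return klass
        return None

class a(object):
    v0 = Desc()

class b(a):
    pass

print(b().v0)   # <class '__main__.a'>, even though the owner argument is b
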
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is This Open To SQL Injection?

2010-07-07 Thread Kee Nethery
Yes, your SQL would be trivial to manipulate via SQL injection.

Not only do you need to validate each piece of data submitted by a user, you 
need to escape all the wildcard characters that your database uses. If the text 
string supplied by a user has quotes or parens or wildcard characters, the text 
could be interpreted as SQL and that is what you must avoid.
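
For what it's worth, the usual way to sidestep hand-escaping altogether is to
let the DB-API driver bind the parameters for you. A minimal sketch with
sqlite3 (the table and values are made up for illustration; most drivers use
the same placeholder idea, though some take %s instead of ?):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

hostile = "Robert'); DROP TABLE users; --"    # stays inert as a bound parameter
conn.execute("INSERT INTO users (name) VALUES (?)", (hostile,))

rows = conn.execute("SELECT name FROM users WHERE name = ?",
                    (hostile,)).fetchall()
print(rows == [(hostile,)])    # True, and the table is still there

Parameter binding takes care of quoting; wildcard characters only matter
separately, if the value later ends up inside a LIKE pattern.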

Kee Nethery
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The real problem with Python 3 - no business case for conversion (was "I strongly dislike Python 3")

2010-07-07 Thread Brendan Abel
On Jul 7, 3:00 pm, MRAB  wrote:
> Brendan Abel wrote:
>  One thing that would be very useful is how to maintain something that
>  works on 2.x and 3.x, but not limiting yourself to 2.6. Giving up
>  versions below 2.6 is out of the question for most projects with a
>  significant userbase IMHO. As such, the idea of running the python 3
>  warnings is not so useful IMHO - unless it could be made to work
>  better for python 2.x < 2.6, but I am not sure the idea even makes
>  sense.
>
> > The entire fact that 3.x was *designed* to be incompatible should tell
> > you that supporting 2.x and 3.x with a single code base is a bad idea,
> > except for the very smallest of projects.  This is the point where a
> > project should fork and provide two different versions.
>
> I wouldn't say that 3.x was designed to be incompatible. It was designed
> to tidy the language, and the incompatibilities are an unfortunate
> result.

You're missing the point, and arguing semantics.  It's a good thing I
didn't misspell anything.

Python 3.x will continue to change.  The incompatibilities between 3.x
and 2.x will only become more numerous.  If your goal is to support
2.x, and 3.x, you'd be best supporting them separately.


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Storing a callback function as a class member

2010-07-07 Thread Ian Kelly
On Wed, Jul 7, 2010 at 3:48 PM, Nathan Huesken  wrote:
> Hi,
>
> I have a class, where I want to store a callback function as a member
> to access later:
>
> class CallbackClass:
>    def setCallback(self,cb):
>        self.cb = cb
>
>    def callCallback(self, para):
>        self.cb(para)
>
> Doing so, I get the error:
> callbackFunc() takes exactly 1 parameter (2 given)
>
> self is given as parameter this way, is it not? How can this be done?

No, self will not be passed as a parameter.  A function is only
treated as a method when it is present in the class dict.  If it is in
the instance dict as you have above, then it's just a normal function.
 If you want it to receive self in this case, then you should have
your callCallback method pass it in explicitly.
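
For example, a minimal sketch of that suggestion (the callback signature is
just an assumption for illustration):

class CallbackClass(object):
    def setCallback(self, cb):
        # stored on the instance, so cb stays a plain function, not a bound method
        self.cb = cb

    def callCallback(self, para):
        # pass the instance along explicitly if the callback wants it
        return self.cb(self, para)

def callbackFunc(obj, para):
    return 2 * para

a = CallbackClass()
a.setCallback(callbackFunc)
print(a.callCallback(3))   # 6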

HTH,
Ian
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The real problem with Python 3 - no business case for conversion (was "I strongly dislike Python 3")

2010-07-07 Thread geremy condra
On Wed, Jul 7, 2010 at 8:26 PM, Brendan Abel <007bren...@gmail.com> wrote:
> On Jul 7, 3:00 pm, MRAB  wrote:
>> Brendan Abel wrote:
>>  One thing that would be very useful is how to maintain something that
>>  works on 2.x and 3.x, but not limiting yourself to 2.6. Giving up
>>  versions below 2.6 is out of the question for most projects with a
>>  significant userbase IMHO. As such, the idea of running the python 3
>>  warnings is not so useful IMHO - unless it could be made to work
>>  better for python 2.x < 2.6, but I am not sure the idea even makes
>>  sense.
>>
>> > The entire fact that 3.x was *designed* to be incompatible should tell
>> > you that supporting 2.x and 3.x with a single code base is a bad idea,
>> > except for the very smallest of projects.  This is the point where a
>> > project should fork and provide two different versions.
>>
>> I wouldn't say that 3.x was designed to be incompatible. It was designed
>> to tidy the language, and the incompatibilities are an unfortunate
>> result.
>
> You're missing the point, and arguing semantics.  It's a good thing I
> didn't misspell anything.
>
> Python 3.x will continue to change.  The incompatibilities between 3.x
> and 2.x will only become more numerous.  If your goal is to support
> 2.x, and 3.x, you'd be best supporting them separately.

I maintain two projects that have to work from 2.5 to 3.1. On one of
them (~5kloc) we took the separate support route, and on the other
(~30kloc) I decided to keep a single codebase. IME the maintenance
burden on the former is substantially higher than the latter. Is the
difference in difficulty perhaps domain-related, or a result of a
certain style of coding? Could you give us some more details about
what you were working on that caused you to conclude this?

Geremy Condra
-- 
http://mail.python.org/mailman/listinfo/python-list


ANN: ActivePython 2.7.0.1 is now available

2010-07-07 Thread Sridhar Ratnakumar

We are pleased to announce the availability of ActivePython 2.7.0.1.

http://www.activestate.com/activepython

This release corresponds to the recently released Python 2.7, and, like 
ActivePython 2.6, includes the Python Package Manager (PyPM) with 
essential packages such as Distribute (a compatible fork of setuptools), 
virtualenv, pip and SQLAlchemy. See the release notes for full details:


http://docs.activestate.com/activepython/2.7/relnotes.html#changes

For a high-level overview of this release, please see our blog post:

http://www.activestate.com/blog/2010/07/activepython-27-released

This is also the first ActivePython release with 64-bit support on MacOSX.


What is ActivePython?
-

ActivePython is ActiveState's binary distribution of Python. Builds for 
Windows, Mac OS X, Linux are made freely available. Solaris, HP-UX and 
AIX builds, and access to older versions are available in ActivePython 
Business, Enterprise and OEM editions:


http://www.activestate.com/python

ActivePython includes the Python core and the many core extensions: zlib 
and bzip2 for data compression, the Berkeley DB (bsddb) and SQLite 
(sqlite3) database libraries, OpenSSL bindings for HTTPS support, the 
Tix GUI widgets for Tkinter, ElementTree for XML processing, ctypes (on 
supported platforms) for low-level library access, and others. The 
Windows distribution ships with PyWin32 -- a suite of Windows tools 
developed by Mark Hammond, including bindings to the Win32 API and 
Windows COM.


ActivePython 2.6 and 2.7 also include a binary package manager for 
Python (PyPM) that can be used to install packages much more easily. For example:


  C:\>pypm install mysql-python
  [...]

  C:\>python
  >>> import MySQLdb
  >>>

See this page for full details:

http://docs.activestate.com/activepython/2.7/whatsincluded.html

As well, ActivePython ships with a wealth of documentation for both new 
and experienced Python programmers. In addition to the core Python docs, 
ActivePython includes the "What's New in Python" series, "Dive into 
Python", the Python FAQs & HOWTOs, and the Python Enhancement Proposals 
(PEPs).


An online version of the docs can be found here:

http://docs.activestate.com/activepython/2.7/

We would welcome any and all feedback to:

activepython-feedb...@activestate.com

Please file bugs against ActivePython at:

http://bugs.activestate.com/enter_bug.cgi?product=ActivePython

On what platforms does ActivePython run?


ActivePython includes installers for the following platforms:

- Windows/x86
- Windows/x64 (aka "AMD64")
- Mac OS X
- Linux/x86
- Linux/x86_64 (aka "AMD64")
- Solaris/SPARC (Business, Enterprise or OEM edition only)
- Solaris/x86 (Business, Enterprise or OEM edition only)
- HP-UX/PA-RISC (Business, Enterprise or OEM edition only)
- HP-UX/IA-64 (Enterprise or OEM edition only)
- AIX/PowerPC (Business, Enterprise or OEM edition only)
- AIX/PowerPC 64-bit (Business, Enterprise or OEM edition only)

Custom builds are available in Enterprise Edition:

http://www.activestate.com/enterprise-edition

Thanks, and enjoy!

The Python Team

--
Sridhar Ratnakumar
sridharr at activestate.com
--
http://mail.python.org/mailman/listinfo/python-list


Re: The real problem with Python 3 - no business case for conversion

2010-07-07 Thread Ben Finney
geremy condra  writes:

> On Wed, Jul 7, 2010 at 8:26 PM, Brendan Abel <007bren...@gmail.com> wrote:
> > Python 3.x will continue to change.  The incompatibilities between
> > 3.x and 2.x will only become more numerous.  If your goal is to
> > support 2.x, and 3.x, you'd be best supporting them separately.
>
> I maintain two projects that have to work from 2.5 to 3.1. On one of
> them (~5kloc) we took the separate support route, and on the other
> (~30kloc) I decided to keep a single codebase. IME the maintenance
> burden on the former is substantially higher than the latter.

The point, one more time with feeling, is that the incompatibilities
between 2.x and 3.x will *increase* over time.

If you now have a code base that is relatively easy to maintain for both
Python 2.x and 3.x, that is a result of much back-porting efforts and of
a new-feature moratorium that is currently in effect. Enjoy that
situation as you may, because it is guaranteed not to last.

Indeed, the feature moratorium is designed in part to help slow-moving
codebases migrate to Python 3.x before Python resumes its normal pace of
change again. If you're choosing to use that time to further entrench
codebases for Python 2.x, I think that's a short-sighted choice.

Python 2.7 is the last 2.x, no further 3.x features will be back-ported.
New 3.x features will begin to appear after the moratorium ends. The
combination of those two means that *the single-codebase burden will
only increase over time* as Python 3.x diverges further from what Python
2.x can support.

-- 
 \  “Programs must be written for people to read, and only |
  `\incidentally for machines to execute.” —Abelson & Sussman, |
_o__)  _Structure and Interpretation of Computer Programs_ |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The real problem with Python 3 - no business case for conversion (was "I strongly dislike Python 3")

2010-07-07 Thread MRAB

geremy condra wrote:

On Wed, Jul 7, 2010 at 8:26 PM, Brendan Abel <007bren...@gmail.com> wrote:

On Jul 7, 3:00 pm, MRAB  wrote:

Brendan Abel wrote:

One thing that would be very useful is how to maintain something that
works on 2.x and 3.x, but not limiting yourself to 2.6. Giving up
versions below 2.6 is out of the question for most projects with a
significant userbase IMHO. As such, the idea of running the python 3
warnings is not so useful IMHO - unless it could be made to work
better for python 2.x < 2.6, but I am not sure the idea even makes
sense.

The entire fact that 3.x was *designed* to be incompatible should tell
you that supporting 2.x and 3.x with a single code base is a bad idea,
except for the very smallest of projects.  This is the point where a
project should fork and provide two different versions.

I wouldn't say that 3.x was designed to be incompatible. It was designed
to tidy the language, and the incompatibilities are an unfortunate
result.

You're missing the point, and arguing semantics.  It's a good thing I
didn't misspell anything.

Python 3.x will continue to change.  The incompatibilities between 3.x
and 2.x will only become more numerous.  If your goal is to support
2.x, and 3.x, you'd be best supporting them separately.


I maintain two projects that have to work from 2.5 to 3.1. On one of
them (~5kloc) we took the separate support route, and on the other
(~30kloc) I decided to keep a single codebase. IME the maintenance
burden on the former is substantially higher than the latter. Is the
difference in difficulty perhaps domain-related, or a result of a
certain style of coding? Could you give us some more details about
what you were working on that caused you to conclude this?


In my work on the regex module I use a single codebase and generate the
sources for Python 2.5-2.7 and for Python 3.1 from it. It works easily
enough for me.
--
http://mail.python.org/mailman/listinfo/python-list


Re: The real problem with Python 3 - no business case for conversion (was "I strongly dislike Python 3")

2010-07-07 Thread Fuzzyman
On Jul 5, 1:34 am, sturlamolden  wrote:
> On 5 Jul, 01:58, John Nagle  wrote:
>
> >      Exactly.
>
> >      The "incompatible with all extension modules I need" part
> > is the problem right now.  A good first step would be to
> > identify the top 5 or 10 modules that are blocking a move to
> > Python 3 by major projects with many users.
>
> The big danger is Python 2.x becoming abandonware (2.7 being the final
> release) before major projects are ported. Using Python 2.x for new
> projects is not advisable (at least many will think so), and using 3.x
> is not possible. What to do? It's not a helpful situation for Python.

But Python 2.3, 2.4 & 2.5 are *already* abandonware and see *major*
use in many systems and businesses. Python development has always gone
ahead of what *some* people use - and they don't seem to mind that
they're using essentially abandoned versions of Python.

Now that 2.7 is out I *might* be able to persuade my current company
to migrate to 2.6 on the servers, and they're far faster at adopting
tech than many companies I know.

All the best,

Michael Foord
--
http://www.voidspace.org.uk
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The real problem with Python 3 - no business case for conversion (was "I strongly dislike Python 3")

2010-07-07 Thread Carl Banks
On Jul 7, 2:10 pm, Brendan Abel <007bren...@gmail.com> wrote:
> > > > One thing that would be very useful is how to maintain something that
> > > > works on 2.x and 3.x, but not limiting yourself to 2.6. Giving up
> > > > versions below 2.6 is out of the question for most projects with a
> > > > significant userbase IMHO. As such, the idea of running the python 3
> > > > warnings is not so useful IMHO - unless it could be made to work
> > > > better for python 2.x < 2.6, but I am not sure the idea even makes
> > > > sense.
>
> The entire fact that 3.x was *designed* to be incompatible should tell
> you that supporting 2.x and 3.x with a single code base is a bad idea,
> except for the very smallest of projects.  This is the point where a
> project should fork and provide two different versions.

Well, I think it could be a reasonable thing to maintain a single
codebase in 2.x and use 2to3 (and/or a custom translator) to translate
to 3.x version for quite a while.
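
For what it's worth, with distribute (the setuptools fork) that kind of
single-2.x-codebase arrangement can be largely automated. A sketch with
placeholder names, assuming distribute's use_2to3 option:

from setuptools import setup

setup(
    name="mylib",              # placeholder project name
    version="1.0",
    py_modules=["mylib"],      # placeholder module
    use_2to3=True,             # run 2to3 over the sources when building under Python 3
)

The 2.x sources stay the single point of maintenance, and the 3.x version is
generated at build/install time.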

For the humble library I maintain, I plan to release a Python 3
version as soon as a Python 3 version of numpy is released, maintain a
single codebase (translating from 2 version to 3) for awhile, then at
some point fork them and maintain them separately.

Given that I add features about once every 2 years I don't think it'll
be too much of a burden, though.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python 2.7 released

2010-07-07 Thread imageguy

> I, too, have multiple versions installed -- newer ones for running code
> I haven't upgraded; older ones for compatibility testing where needed.
> I just install to the default c:\pythonxy directories (although I like
> the idea of a common root) and I put NTFS hardlinks into my general
> c:\tools directory which is on the path. The out-of-context hardlinks
> work because of the registry settings which pick up the correct context
> for each version.

Sorry to be daft here, but what do you mean by a "hardlink" ?
A windows "Shortcut" ?

I have just installed 2.7 and want to start upgrading some code, but
alas still want to maintain some 2.5 code too.


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The real problem with Python 3 - no business case for conversion

2010-07-07 Thread Paul Rubin
Ben Finney  writes:
> The point, one more time with feeling, is that the incompatibilities
> between 2.x and 3.x will *increase* over time.

The issue is less the "incompatibilities" than the -backwards-
incompatibilities.  Yes, Python 3 may introduce forward
incompatibilities by adding features absent from Python 2.  But it will
be possible to maintain a common codebase simply by not using those
features.  On the other hand, the door appears closed for Python 3
adding more stuff that breaks Python 2 code.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The real problem with Python 3 - no business case for conversion

2010-07-07 Thread geremy condra
On Wed, Jul 7, 2010 at 9:14 PM, Ben Finney  wrote:
> geremy condra  writes:
>
>> On Wed, Jul 7, 2010 at 8:26 PM, Brendan Abel <007bren...@gmail.com> wrote:
>> > Python 3.x will continue to change.  The incompatibilities between
>> > 3.x and 2.x will only become more numerous.  If your goal is to
>> > support 2.x, and 3.x, you'd be best supporting them separately.
>>
>> I maintain two projects that have to work from 2.5 to 3.1. On one of
>> them (~5kloc) we took the separate support route, and on the other
>> (~30kloc) I decided to keep a single codebase. IME the maintenance
>> burden on the former is substantially higher than the latter.
>
> The point, one more time with feeling, is that the incompatibilities
> between 2.x and 3.x will *increase* over time.

...and? I don't get to use features from 2.7, why would I expect to
use features from 3.3?

> If you now have a code base that is relatively easy to maintain for both
> Python 2.x and 3.x, that is a result of much back-porting efforts and of
> a new-feature moratorium that is currently in effect. Enjoy that
> situation as you may, because it is guaranteed not to last.

I have to target the oldest version of Python I want to support. New
features are irrelevant. I'm not sure why I should need to explain
that to you.

> Indeed, the feature moratorium is designed in part to help slow-moving
> codebases migrate to Python 3.x before Python resumes its normal pace of
> change again. If you're choosing to use that time to further entrench
> codebases for Python 2.x, I think that's a short-sighted choice.

I welcome the day that I can stop supporting 2.x. Until then, I have to
support both and your argument is irrelevant.

> Python 2.7 is the last 2.x, no further 3.x features will be back-ported.
> New 3.x features will begin to appear after the moratorium ends. The
> combination of those two means that *the single-codebase burden will
> only increase over time* as Python 3.x diverges further from what Python
> 2.x can support.

See above.

Geremy Condra
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Argh! Name collision!

2010-07-07 Thread Alf P. Steinbach /Usenet

* Alf P. Steinbach /Usenet, on 08.07.2010 01:47:


 enum DoAddRef { doAddRef };

 class Ptr
 {
 private:
 PyObject*   p_;

 public:
 Ptr( PyObject* p = 0 ): p_( p )
 {}

 Ptr( PyObject* p, DoAddRef ): p_( p )
 {
 assert( p != 0 );
 Py_INCREF( p_ );
 }

 Ptr( Ptr const& other ): p_( other.p_ )
 {
 Py_XINCREF( p_ );
 }

 ~Ptr()
 {
 Py_XDECREF( p_ );
 }

 void swapWith( Ptr& other ) { std::swap( p_, other.p_ ); }
 Ptr& operator=( Ptr other ) { swapWith( other ); return *this; }

 PyObject* get() const   { return p_; }

 PyObject* release()
 {
 PyObject* const result  = p_;
 Py_XDECREF( p_ );


Hark. This Py_XDECREF shouldn't be there, I don't know how it got there. The 
whole point of 'release', with conventional semantics, is to /not/ decrement the 
reference count.




 p_ = 0;
 return result;
 }
 };



Sorry for posting unfinished code,

- Alf


PS: "pyni" was a good name. But in use! When I thought about adding the "s" as 
disambiguation I thought the pure shock value of that was funny in a way, but 
now it doesn't seem funny. Is "pytes" (Python Extension Support) a good name?


--
blog at http://alfps.wordpress.com>
--
http://mail.python.org/mailman/listinfo/python-list


  1   2   >