PPC floating equality vs. byte compilation

2005-07-09 Thread Donn Cave
I ran into a phenomenon that seemed odd to me, while testing a
build of Python 2.4.1 on BeOS 5.04, on PowerPC 603e.

test_builtin.py, for example, fails a couple of tests with errors
claiming that apparently identical floating point values aren't equal.
But it only does that when imported, and only when the .pyc file
already exists.  Not if I execute it directly (python test_builtin.py),
or if I delete the .pyc file before importing it and running test_main().

For now, I'm going to just write this off as a flaky build.  I would
be surprised if 5 people in the world care, and I'm certainly not one
of them.  I just thought someone might find it interesting.

The stalwart few who still use BeOS are mostly using Intel x86 hardware,
as far as I know, but the first releases were for PowerPC, at first
on their own hardware and then for PPC Macs until Apple got nervous
and shut them out of the hardware internals.  They use a Metrowerks
PPC compiler that of course hasn't seen much development in the last
6 years, probably a lot longer.

Donn Cave, [EMAIL PROTECTED]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: why UnboundLocalError?

2005-07-09 Thread Bengt Richter
On Fri, 8 Jul 2005 21:21:36 -0500, Alex Gittens [EMAIL PROTECTED] wrote:

I'm trying to define a function that prints fields of given widths
with specified alignments; to do so, I wrote some helper functions
nested inside of the print function itself. I'm getting an
UnboundLocalError, and after reading the Naming and binding section in
the Python docs, I don't see why.

Here's the error:
 >>> fieldprint([5, 4], 'rl', ['Ae', 'Lau'])
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "fieldprint.py", line 35, in fieldprint
    str += cutbits()
  File "fieldprint.py", line 11, in cutbits
    for i in range(0, len(fields)):
UnboundLocalError: local variable 'fields' referenced before assignment

This is the code:
def fieldprint(widths,align,fields):

    def measure():
        totallen = 0
        for i in range(0, len(fields)):
            totallen += len(fields[i])
        return totallen

    def cutbits():
        cutbit = []
        for i in range(0, len(fields)):
            if len(fields[i]) >= widths[i]:
                cutbit.append(fields[i][:widths[i]])
                fields = fields[widths[i]:]
                # fields[i] = fields[i][widths[i]:]  # did you mean this, analogous to [1] below?
            elif len(fields[i]) > 0:
                leftover = widths[i] - len(fields[i])
                if align[i] == 'r':
                    cutbit.append(' '*leftover + fields[i])
                elif align[i] == 'l':
                    cutbit.append(fields[i] + ' '*leftover)
                else:
                    raise 'Unsupported alignment option'
                fields[i] = ''
                # ^-- [1]
            else:
                cutbit.append(' '*widths[i])
        return cutbit.join('')

    if len(widths) != len(fields) or len(widths) != len(align) or len(align) != len(fields):
        raise 'Argument mismatch'

    str = ''

    while measure() != 0:
        str += cutbits()

What's causing the error?
Maybe the missing [i]'s ?

It's not clear what you are trying to do with a field string that's
wider than the specified width. Do you want to keep printing lines that
have all blank fields except for where there is left-over too-wide remnants? 
E.g.,
if the fields were ['left','left12345','right','12345right'] and the widths 
were [5,5,6,6] and align 'llrr'

should the printout be (first two lines below just for ruler reference)
 1234512345123456123456
 LL
 left left1 right12345r
  2345 ight

or what? I think you can get the above with more concise code :-)
but a minor mod to yours seems to do it:

 >>> def fieldprint(widths,align,fields):
 ...     def measure():
 ...         totallen = 0
 ...         for i in range(0, len(fields)):
 ...             totallen += len(fields[i])
 ...         return totallen
 ...     def cutbits():
 ...         cutbit = []
 ...         for i in range(0, len(fields)):
 ...             if len(fields[i]) >= widths[i]:
 ...                 cutbit.append(fields[i][:widths[i]])
 ...                 #fields = fields[widths[i]:]
 ...                 fields[i] = fields[i][widths[i]:]  # did you mean this, analogous to [1] below?
 ...             elif len(fields[i]) > 0:
 ...                 leftover = widths[i] - len(fields[i])
 ...                 if align[i] == 'r':
 ...                     cutbit.append(' '*leftover + fields[i])
 ...                 elif align[i] == 'l':
 ...                     cutbit.append(fields[i] + ' '*leftover)
 ...                 else:
 ...                     raise 'Unsupported alignment option'
 ...                 fields[i] = ''  # <-- [1]
 ...             else:
 ...                 cutbit.append(' '*widths[i])
 ...         # XXX return cutbit.join('')
 ...         return ''.join(cutbit)
 ...     if len(widths) != len(fields) or len(widths)!=len(align) or len(align)!=len(fields):
 ...         raise 'Argument mismatch'
 ...     # str = ''
 ...     result_lines = []
 ...     while measure()!=0:
 ...         result_lines.append(cutbits())
 ...         #str += cutbits()
 ...     return '\n'.join(result_lines)
 ...
 >>> fieldprint([5,5,6,6], 'llrr', ['left', 'left12345', 'right', '12345right'])
 'left left1 right12345r\n 2345 ight'
 >>> print fieldprint([5,5,6,6], 'llrr', ['left', 'left12345', 'right', '12345right'])
 left left1 right12345r
  2345 ight

Note that

    for i in xrange(len(items)):
        item = items[i]
        # mess with item

just to walk through items one item at a time is more concisely written

    for item in items:
        # mess with item

and if you really need the index as well, use enumerate(items).
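A minimal sketch of the enumerate form (assuming modern Python, where enumerate is built in):

```python
items = ['a', 'b', 'c']

# enumerate yields (index, item) pairs, so no manual indexing is needed
pairs = []
for i, item in enumerate(items):
    pairs.append((i, item))

print(pairs)  # -> [(0, 'a'), (1, 'b'), (2, 'c')]
```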

Re: removing list comprehensions in Python 3.0

2005-07-09 Thread Bengt Richter
On Fri, 08 Jul 2005 22:29:30 -0600, Steven Bethard [EMAIL PROTECTED] wrote:

Dennis Lee Bieber wrote:
 On Fri, 08 Jul 2005 16:07:50 -0600, Steven Bethard
 [EMAIL PROTECTED] declaimed the following in comp.lang.python:
 
I only searched a few relatively recent threads in c.l.py, so there are 
probably more, but it looks to me like the final decision will have to 
be made by a pronouncement from Guido.
 
  Great... It takes me two releases of Python to get comfortable
 with them, and then they are threatened to be removed again...
 
  Might as well submit the language to ISO for standardization --
 then I wouldn't be following an erratic target G

Two points:

(1) There's no reason to get uncomfortable even if they're removed. 
You'd just replace [] with list().

So list(1, 2, 3) will be the same as [1, 2, 3] ??

Right now,
 >>> list(1,2,3)
 Traceback (most recent call last):
   File "<stdin>", line 1, in ?
 TypeError: list() takes at most 1 argument (3 given)

have fun ;-)


(2) *IMPORTANT* If this happens *at all*, it won't happen until Python 
3.0, which is probably at least 5 years away.  And the Python 2.X branch 
will still be available then, so if you don't like Python 3.0, you don't 
have to use it.

STeVe

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: removing list comprehensions in Python 3.0

2005-07-09 Thread Ron Adam
Leif K-Brooks wrote:
 Kay Schluehr wrote:
 
list.from_str("abc")

list("a", "b", "c")
 
 
 
 I assume we'll also have list.from_list, list.from_tuple,
 list.from_genexp, list.from_xrange, etc.?

List from list isn't needed, nor is list from tuple; that's what the * is
for.  And for that matter neither is str.splitchar().


class mylist(list):
 def __init__(self,*args):
  self[:]=args[:]


mylist(*[1,2,3]) -> [1,2,3]

mylist(*(1,2,3)) -> [1,2,3]

mylist(*"abc") -> ['a','b','c']

mylist(1,2,3) -> [1,2,3]

mylist([1],[2])  -> [[1],[2]]

mylist('hello','world')  -> ['hello','world']



Works for me.  ;-)

I always thought list([1,2,3]) -> [1,2,3] was kind of silly.


Cheers,
Ron


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: removing list comprehensions in Python 3.0

2005-07-09 Thread Kay Schluehr
Leif K-Brooks schrieb:
 Kay Schluehr wrote:
 list.from_str("abc")
 
  list("a", "b", "c")


 I assume we'll also have list.from_list, list.from_tuple,
 list.from_genexp, list.from_xrange, etc.?

One might unify all those factory functions into a single
list.from_iter that dispatches to the right constructor that still
lives under the hood. More conceptually: there is some abstract iter
base class providing a from_iter method which may be overwritten in
concrete subclasses like tuple, str or list.

I would further suggest a lazy iterator used to evaluate objects when
they get accessed the first time:

 >>> l = lazy( math.exp(100) , 27 )
 <lazy-iterator object at 0x4945f0>
 >>> l[1]
 27

The first element won't ever be evaluated if it does not get accessed
explicitly. This is a very special kind of partial
evaluation/specialization.
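Since Python evaluates arguments eagerly, such a lazy container would have to take thunks (callables) rather than values; the name and API below are hypothetical, just following the post's example:

```python
import math

class lazy:
    """Sketch of a lazy sequence: elements are thunks, evaluated on
    first access and cached afterwards."""
    def __init__(self, *thunks):
        self._thunks = list(thunks)
        self._cache = {}

    def __getitem__(self, i):
        if i not in self._cache:
            self._cache[i] = self._thunks[i]()  # evaluated on demand
        return self._cache[i]

l = lazy(lambda: math.exp(100), lambda: 27)
print(l[1])  # -> 27; math.exp(100) is never evaluated
```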

Kay

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python Module Exposure

2005-07-09 Thread George Sakkis
Jacob Page wrote:

 Thomas Lotze wrote:
  Jacob Page wrote:
 
 better-named,
 
  Just a quick remark, without even having looked at it yet: the name is not
  really descriptive and runs a chance of misleading people. The example I'm
  thinking of is using zope.interface in the same project: it's customary to
  name interfaces ISomething.

 I've thought of IntervalSet (which is a very long name), IntSet (might
 be mistaken for integers), ItvlSet (doesn't roll off the fingers),
 ConSet, and many others.  Perhaps IntervalSet is the best choice out of
 these.  I'd love any suggestions.


Hi Jacob,

I liked the idea of interval sets a lot; I wonder why no one has
thought of it before (hell, why haven't *I* thought of it before?
:)) I haven't had a chance to look into the internals, so my remarks
for now are only API-related and stylistic:

1. As already noted, ISet is not really descriptive of what the class
does. How about RangeSet ? It's not that long and I find it pretty
descriptive. In this case, it would be a good idea to change Interval
to Range to make the association easier.

2. The module's helper functions -- which are usually called factory
functions/methods because they are essentially alternative constructors
of ISets -- would perhaps better be defined as classmethods of ISet;
that's a common way to define instance factories in python. Except for
'eq' and 'ne', the rest define an ISet of a single Interval, so they
would rather be classmethods of Interval. Also the function names could
take some improvement; at the very least they should not be 2-letter
abbreviations. Finally I would omit 'eq', an interval of a single
value; single values can be given in ISet's constructor anyway. Here's
a different layout:

class Range(object): # Interval

@classmethod
def lowerThan(cls, value, closed=False):
# lt, for closed==False
# le, for closed==True
return cls(None, False, value, closed)

@classmethod
def greaterThan(cls, value, closed=False):
# gt, for closed==False
# ge, for closed==True
return cls(value, closed, None, False)

@classmethod
def between(cls, min, max, closed=False):
# exInterval, for closed==False
# incInterval, for closed==True
return cls(min, closed, max, closed)


class RangeSet(object):  # ISet

@classmethod
def allExcept(cls, value): # ne
return cls(Range.lowerThan(value), Range.greaterThan(value))
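The classmethod-factory idiom itself can be sketched in isolation (a simplified, hypothetical Range that stores only its endpoints, not the full constructor proposed above):

```python
class Range:
    def __init__(self, min, max):
        self.min, self.max = min, max

    @classmethod
    def lowerThan(cls, value):
        # alternative constructor: a range unbounded below
        return cls(None, value)

r = Range.lowerThan(10)
print(r.min, r.max)  # -> None 10
```

Because the factory receives `cls`, subclasses of Range automatically get factories that build instances of the subclass.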

3. Having more than one name for the same thing is best avoided
in general. So keep either "none" or "empty" (preferably "empty" to
avoid confusion with None) and remove ISet.__add__ since it is a synonym
for ISet.__or__.

4. Intervals shouldn't be comparable; the way __cmp__ works is
arbitrary and not obvious.

5. Interval.adjacentTo() could take an optional argument to control
whether an endpoint is allowed to be in both ranges or not (e.g.
whether (1,3], [3, 7] are adjacent or not).

6. Possible ideas in your TODO list:
- Implement the whole interface of sets for ISet's so that a client
can use either or them transparently.
- Add an ISet.remove() for removing elements, Intervals, ISets as
complementary to ISet.append().
- More generally, think about mutable vs immutable Intervals and
ISets. The sets module in the standard library will give you a good
idea of  the relevant design and implementation issues.

After I look into the module's internals, I'll try to make some changes
and send it back to you for feedback.

Regards,
George

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Proposal: reducing self.x=x; self.y=y; self.z=z boilerplate code

2005-07-09 Thread Ralf W. Grosse-Kunstleve
--- NickC [EMAIL PROTECTED] wrote:
 I'd be very interested to hear your opinion on the 'namespace' module,
 which looks at addressing some of these issues (the Record object, in
 particular).  The URL is http://namespace.python-hosting.com, and any
 comments should be directed to the [EMAIL PROTECTED]
 discussion list.

Hi Nick,

The namespace module looks interesting, thanks for the pointer! (I saw your
other message but didn't have a chance to reply immediately.)

I tried out the namespace.Record class. The resulting user code looks nice, but
I have two concerns:

- It requires a different coding style; until it is well established it will
surprise people.

- The runtime penalty is severe.

Attached is a simple adopt_timings.py script. If I run it with Python 2.4.1
under RH WS3, 2.8GHz Xeon, I get:

overhead: 0.01
plain_grouping: 0.27
update_grouping: 0.44
plain_adopt_grouping: 0.68
record_grouping: 10.85

I.e. record_grouping (using namespace.Record) is about 40 times slower than the
manual self.x=x etc. implementation.

My conclusion is that namespace.Record may have merits for specific purposes,
but is impractical as a general-purpose utility like I have in mind.

Note that the attached code includes a new, highly simplified plain_adopt()
function, based on the information I got through other messages in this thread.
Thanks to everybody for suggestions!

Cheers,
Ralf




import sys, os

class plain_grouping:
  def __init__(self, x, y, z):
self.x = x
self.y = y
self.z = z

class update_grouping:
  def __init__(self, x, y, z):
self.__dict__.update(locals())
del self.self

def plain_adopt():
  frame = sys._getframe(1)
  init_locals = frame.f_locals
  self = init_locals[frame.f_code.co_varnames[0]]
  self.__dict__.update(init_locals)
  del self.self

class plain_adopt_grouping:
  def __init__(self, x, y, z):
plain_adopt()

try:
  from namespace import Record
except ImportError:
  Record = None
else:
  class record_grouping(Record):
x = None
y = None
z = None

class timer:
  def __init__(self):
self.t0 = os.times()
  def get(self):
tn = os.times()
return (tn[0]+tn[1]-self.t0[0]-self.t0[1])

def time_overhead(n_repeats):
  t = timer()
  for i in xrange(n_repeats):
pass
  return t.get()

def time(method, n_repeats):
  g = method(x=1,y=2,z=3)
  assert g.x == 1
  assert g.y == 2
  assert g.z == 3
  t = timer()
  for i in xrange(n_repeats):
method(x=1,y=2,z=3)
  return t.get()

def time_all(n_repeats=10):
  print "overhead: %.2f" % time_overhead(n_repeats)
  print "plain_grouping: %.2f" % time(plain_grouping, n_repeats)
  print "update_grouping: %.2f" % time(update_grouping, n_repeats)
  print "plain_adopt_grouping: %.2f" % time(plain_adopt_grouping, n_repeats)
  if (Record is not None):
    print "record_grouping: %.2f" % time(record_grouping, n_repeats)

if (__name__ == "__main__"):
  time_all()
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Lisp development with macros faster than Python development?..

2005-07-09 Thread Kay Schluehr


Kirk Job Sluder schrieb:
 Kay Schluehr [EMAIL PROTECTED] writes:

  This might be a great self experience for some great hackers but just
  annoying for others who used to work with modular standard librarys and
  think that the border of the language and an application should be
  somehow fixed to enable those.

 In what way do lisp macros prevent the creation of modular libraries?
 Common Lisp does does have mechanisms for library namespaces, and in
 practice a macro contained within a library is not that much different
 from a function contained in a library or a class contained in a
 library. Macros just provide another mechanism for creating useful
 domain-specific abstractions.

To be honest I don't understand what a domain-specific abstraction
could be? What is the benefit of abstractions if they are not
abstracting from particular domain specific stuff?

 The primary advantage to macros is that
 you can create abstractions with functionality that is not easily
 described as either a function or a class definition.

As long as macros are used to create new language features such as an
object system like CLOS this technique may be perfectly justified for
language developers ( ! ) but I still consider it as a bad idea to
muddle the language development and the application development, that
seems to be the favourite programming style of Paul Graham. On the
other hand thinking about language development as a certain application
domain I find nothing wrong with the idea that it once reaches somehow
a state of a mature product that should not be altered in arbitrary
manner for the sake of a large user community.

Kay

-- 
http://mail.python.org/mailman/listinfo/python-list


Hosting Python projects

2005-07-09 Thread Kay Schluehr
Jacob Page schrieb:
 I have created what I think may be a useful Python module, but I'd like
 to share it with the Python community to get feedback, i.e. if it's
 Pythonic.  If it's considered useful by Pythonistas, I'll see about
 hosting it on Sourceforge or something like that.  Is this a good forum
 for exposing modules to the public, or is there somewhere
 more-acceptable?  Does this newsgroup find attachments acceptable?

 --
 Jacob

One side-question: has anyone made experiences in hosting his open
source project on 

http://www.python-hosting.com/

Regards,
Kay

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Pattern question

2005-07-09 Thread cantabile
Scott David Daniels a écrit :
 cantabile wrote:
 
 bruno modulix a écrit :

 You may want to have a look at the Factory pattern...
 ... demo of class Factory ...
 
 
 Taking advantage of Python's dynamic nature, you could simply:
 # similarly outrageously oversimplified dummy example
 class Gui(object):
def __init__(self, installer):
self.installer = installer
 
 class PosixGui(Gui):
 pass
 
 class Win32Gui(Gui):
 pass
 
 if os.name == 'posix':
 makeGui = PosixGui
 elif os.name == 'win32':
 makeGui = Win32Gui
 else:
 raise "os %s not supported" % os.name
 
 
 class Installer(object):
 def __init__(self, guiFactory):
 self.gui = guiFactory(self)
 
 def main():
 inst = Installer(makeGui)
 return inst.gui.main()
 
 --Scott David Daniels
 [EMAIL PROTECTED]

Thank you too for this tip. :)
Coming from C++ (mainly), I'm not used to this dynamic way of doing
things. That's usefull.
-- 
http://mail.python.org/mailman/listinfo/python-list


file.readlines() question

2005-07-09 Thread vch
Does a call to file.readlines() reads all lines at once in the memory? 
Are the any reasons, from the performance point of view, to prefer 
*while* loop with readline() to *for* loop with readlines()?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: file.readlines() question

2005-07-09 Thread Erik Max Francis
vch wrote:

 Does a call to file.readlines() reads all lines at once in the memory? 
 Are the any reasons, from the performance point of view, to prefer 
 *while* loop with readline() to *for* loop with readlines()?

Yes, and you just mentioned it.  .readlines reads the entire file into 
memory at once, which can obviously be expensive if the file is large.

A for loop with .readline, of course, will work, but modern versions of 
Python allow iteration over a file, which will read it line by line:

for line in aFile:
...
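A small sketch of the difference (the file contents here are made up for illustration): iterating over the file object reads one line at a time, so memory use stays constant no matter how large the file is.

```python
import os, tempfile

# Create a throwaway three-line file to iterate over.
path = os.path.join(tempfile.mkdtemp(), "sample.txt")
with open(path, "w") as f:
    f.write("one\ntwo\nthree\n")

count = 0
with open(path) as f:
    for line in f:      # lazy: one line per iteration
        count += 1
print(count)  # -> 3
```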

-- 
Erik Max Francis  [EMAIL PROTECTED]  http://www.alcyone.com/max/
San Jose, CA, USA  37 20 N 121 53 W  AIM erikmaxfrancis
   A wise man never loses anything if he have himself.
   -- Montaigne
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: file.readlines() question

2005-07-09 Thread vch
Erik Max Francis wrote:
 ... modern versions of 
 Python allow iteration over a file, which will read it line by line:
 
 for line in aFile:
 ...
 

Thanks! Just what I need.
-- 
http://mail.python.org/mailman/listinfo/python-list


__autoinit__ (Was: Proposal: reducing self.x=x; self.y=y; self.z=z boilerplate code)

2005-07-09 Thread Ralf W. Grosse-Kunstleve
My initial proposal
(http://cci.lbl.gov/~rwgk/python/adopt_init_args_2005_07_02.html) didn't
exactly get a warm welcome...

And Now for Something Completely Different:

class autoinit(object):

  def __init__(self, *args, **keyword_args):
self.__dict__.update(
  zip(self.__autoinit__.im_func.func_code.co_varnames[1:], args))
self.__dict__.update(keyword_args)
self.__autoinit__(*args, **keyword_args)

class grouping(autoinit):

  def __autoinit__(self, x, y, z):
print self.x, self.y, self.z

group = grouping(1,2,z=3)
group = grouping(z=1,x=2,y=3)
try: grouping(1)
except TypeError, e: print e
try: grouping(1,2,3,a=0)
except TypeError, e: print e


Almost like my original favorite solution, only better, and it doesn't require
a syntax change.

Under a hypothetical new proposal __autoinit__ would become a standard feature
of object.

Any takers?
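For comparison, a rough modern translation of the same __autoinit__ idea (hypothetical, not from the original post; inspect.signature replaces the old im_func/func_code attribute chain):

```python
import inspect

class AutoInit:
    """Base class: __init__ stores the __autoinit__ arguments as
    attributes, then delegates to __autoinit__."""
    def __init__(self, *args, **kwargs):
        # Parameter names of __autoinit__, excluding 'self' (bound method).
        names = list(inspect.signature(self.__autoinit__).parameters)
        self.__dict__.update(zip(names, args))
        self.__dict__.update(kwargs)
        self.__autoinit__(*args, **kwargs)

class Grouping(AutoInit):
    def __autoinit__(self, x, y, z):
        pass

g = Grouping(1, 2, z=3)
print(g.x, g.y, g.z)  # -> 1 2 3
```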

Cheers,
Ralf


__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 
-- 
http://mail.python.org/mailman/listinfo/python-list


directory traverser

2005-07-09 Thread Florian Lindner
Hello,
IIRC there is a directory traverser for walking recursively through
subdirectories in the standard library. But I can't remember the name and
was unable to find in the docs.
Anyone can point me to it?

Thanks,

Florian
-- 
http://mail.python.org/mailman/listinfo/python-list


ANN: eric3 3.7.1 available

2005-07-09 Thread Detlev Offenbach
Hi,

I am proud to announce the availability of eric3 3.7.1. This is a bug fix
release which fixes a severe bug next to some normal ones.

NOTE: Everybody using 3.7.0 or 3.6.x should upgrade.

It is available via http://www.die-offenbachs.de/detlev/eric3.html.

What is it?
---
Eric3 is a Python and Ruby IDE with all batteries included. Please see
a.m. website for more details.

Regards,
Detlev
-- 
Detlev Offenbach
[EMAIL PROTECTED]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: directory traverser

2005-07-09 Thread John Machin
Florian Lindner wrote:
 Hello,
 IIRC there is a directory traverser for walking recursively through
 subdirectories in the standard library. But I can't remember the name and
 was unable to find in the docs.

Where did you look? How did you look?

 Anyone can point me to it?

Did you try Googling in this newsgroup for "directory traverser"? I did: 
"Do you mean traversal?" -- Hmmm, the allegedly most relevant article 
is 5 years old -- Hit the date order link -- what do we find? The 
timbot discussing a function called os.walk ...
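For the record, os.walk yields a (dirpath, dirnames, filenames) tuple for each directory it visits; a quick sketch over a throwaway tree:

```python
import os, tempfile

# Build a tiny directory tree to walk.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sub"))
open(os.path.join(root, "sub", "a.txt"), "w").close()

found = []
for dirpath, dirnames, filenames in os.walk(root):
    for name in filenames:
        found.append(os.path.join(dirpath, name))
print(found)  # the one file, with its full path
```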
-- 
http://mail.python.org/mailman/listinfo/python-list


Formatting data to output in web browser from web form

2005-07-09 Thread Harlin Seritt
Hi,

I am using CherryPy to make a very small Blog web app.

Of course I use a textarea input on a page to get some information.
Most of the time when text is entered into it, there will be carriage
returns.

When I take the text and then try to re-write it out to output (in html
on a web page), I notice the Python strings don't translate these
special characters (carriage returns) to a <p> so that these
paragraphs show up properly in html for output. I tried doing something
like StringData.replace('\n', '<p>'). Of course this doesn't work
because it appears that '\n' is not being used.

Is there anything I can do to make sure paragraphs show up properly?

Thanks,

Harlin Seritt

-- 
http://mail.python.org/mailman/listinfo/python-list


Problem with sha.new

2005-07-09 Thread Florian Lindner
Hello,
I try to compute SHA hashes for different files:


for root, dirs, files in os.walk(sys.argv[1]):
for file in files:
path =  os.path.join(root, file)
print path
f = open(path)
sha = sha.new(f.read())
sha.update(f.read())
print sha.hexdigest()


this generates a traceback when sha.new() is called for the second time:

/home/florian/testdir/testfile
c95ad0ce54f903e1568facb2b120ca9210f6778f
/home/florian/testdir/testfile2
Traceback (most recent call last):
  File "duplicatefinder.py", line 11, in ?
    sha = sha.new(f.read())
AttributeError: new

What is wrong there?

Thanks,

Florian
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Formatting data to output in web browser from web form

2005-07-09 Thread John Machin
Harlin Seritt wrote:
 Hi,
 
 I am using CherryPy to make a very small Blog web app.
 
 Of course I use a textarea input on a page to get some information.
 Most of the time when text is entered into it, there will be carriage
 returns.
 
 When I take the text and then try to re-write it out to output (in html
 on a web page), I notice the Python strings don't translate these
 special characters (carriage returns) to a <p> so that these
 paragraphs show up properly in html for output.  I tried doing something
 like StringData.replace('\n', '<p>'). Of course this doesn't work
 because it appears that '\n' is not being used.
 
 Is there anything I can do to make sure paragraphs show up properly?
 

The ASCII carriage return (CR) is represented in Python as "\x0d" or 
"\r". The line feed (LF) is "\x0a" or "\n". Does this info help you with 
your problem? If not, perhaps you might like to

print repr(StringData)

so that we can see what you are calling carriage returns.
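In case it is a CR/LF issue, a sketch of one way to normalize the line endings a browser sends and wrap each paragraph in <p> tags (illustrative only; real code should also HTML-escape the text first):

```python
def paragraphs_to_html(text):
    # Browsers typically send textarea content with \r\n line endings;
    # normalize everything to \n first.
    text = text.replace('\r\n', '\n').replace('\r', '\n')
    parts = [p for p in text.split('\n') if p.strip()]
    return ''.join('<p>%s</p>' % p for p in parts)

print(paragraphs_to_html("first line\r\nsecond line\r\n"))
# -> <p>first line</p><p>second line</p>
```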

HTH,
John
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problem with sha.new

2005-07-09 Thread John Machin
Florian Lindner wrote:
 Hello,
 I try to compute SHA hashes for different files:
 
 
 for root, dirs, files in os.walk(sys.argv[1]):




 for file in files:
 path =  os.path.join(root, file)
 print path
 f = open(path)
print "sha is", repr(sha) ### self-help !
 sha = sha.new(f.read())
print "sha is", repr(sha) ### self-help !
 sha.update(f.read())
 print sha.hexdigest()
 
 
 this generates a traceback when sha.new() is called for the second time:
 
 /home/florian/testdir/testfile
 c95ad0ce54f903e1568facb2b120ca9210f6778f
 /home/florian/testdir/testfile2
 Traceback (most recent call last):
   File "duplicatefinder.py", line 11, in ?
     sha = sha.new(f.read())
  AttributeError: new
 
 What is wrong there?
 
sha no longer refers to the module of that name; it refers to the 
sha-object returned by the FIRST call of sha.new. That sha-object 
doesn't have a method called new, hence the AttributeError exception.
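The fix is just to keep the module name and the digest object distinct. A sketch in current Python, where the old sha module's role is played by hashlib (an assumption about the modern equivalent, not part of the original thread):

```python
import hashlib

data = b"hello world"
digest = hashlib.sha1(data)   # don't name this variable after the module!
print(digest.hexdigest())
# -> 2aae6c35c94fcfb415dbe95f408b9ce91ee846ed
```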
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problem with sha.new

2005-07-09 Thread Dan Sommers
On Sat, 09 Jul 2005 13:49:12 +0200,
Florian Lindner [EMAIL PROTECTED] wrote:

 Hello,
 I try to compute SHA hashes for different files:


 for root, dirs, files in os.walk(sys.argv[1]):
 for file in files:
 path =  os.path.join(root, file)
 print path
 f = open(path)
 sha = sha.new(f.read())
  ^^^

Now the name sha no longer refers to the sha module, but to the object
returned by sha.new.  Use a different name.

 sha.update(f.read())
 print sha.hexdigest()

Regards,
Dan

-- 
Dan Sommers
http://www.tombstonezero.net/dan/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: why UnboundLocalError?

2005-07-09 Thread and-google
Alex Gittens wrote:

 I'm getting an UnboundLocalError

 def fieldprint(widths,align,fields): [...]
 def cutbits(): [...]
 fields = fields[widths[i]:]

There's your problem. You are assigning 'fields' a completely new
value. Python doesn't allow you to rebind a variable from an outer
scope in an inner scope (except for the special case where you
explicitly use the 'global' directive, which is no use for the nested
scopes you are using here).

So when you assign an identifier in a function Python assumes that you
want that identifier to be a completely new local variable, *not* a
reference to the variable in the outer scope. By writing 'fields= ...'
in cutbits you are telling Python that fields is now a local variable
to cutbits. So when the function is entered, fields is a new variable
with no value yet, and when you first try to read it without writing to
it first you'll get an error.

What you probably want to do is keep 'fields' pointing to the same
list, but just change the contents of the list. So replace the assign
operation with a slicing one:

  del fields[:widths[i]]
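The distinction between rebinding a name and mutating the object it refers to can be seen in a tiny sketch (nothing here beyond standard scoping rules; in Python 2 there is no 'nonlocal', so rebinding from the inner function simply isn't possible):

```python
def outer():
    items = [1, 2, 3]

    def rebind():
        # The assignment makes 'items' local to rebind(), so reading it
        # on the right-hand side raises UnboundLocalError.
        items = items[1:]

    def mutate():
        # Mutating the list in place only *reads* the outer 'items'.
        del items[:1]

    try:
        rebind()
    except UnboundLocalError as e:
        print("rebind failed:", e)
    mutate()
    return items

print(outer())  # -> [2, 3]
```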

-- 
Andrew Clover
mailto:[EMAIL PROTECTED]
http://www.doxdesk.com/

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problem with sha.new

2005-07-09 Thread Rob Williscroft
Florian Lindner wrote in news:[EMAIL PROTECTED] in 
comp.lang.python:

 Hello,
 I try to compute SHA hashes for different files:
 
 
 for root, dirs, files in os.walk(sys.argv[1]):
 for file in files:
 path =  os.path.join(root, file)
 print path
 f = open(path)

Here you rebind 'sha' from what it was before (presumably the module sha)
to the result of 'sha.new' presumably the new 'sha' doesn't have a 'new'
method. try renameing your result variable.

 sha = sha.new(f.read())
 sha.update(f.read())
 print sha.hexdigest()

result = sha.new(f.read())
result.update(f.read())
print result.hexdigest()

also I don't think you need the second call to update as f will
have been read by this time and it will add nothing.

 
 
 this generates a traceback when sha.new() is called for the second 
 time:
 

 sha = sha.new(f.read())
 AttributeError: new
 

Rob.
-- 
http://www.victim-prime.dsl.pipex.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: __autoinit__ (Was: Proposal: reducing self.x=x; self.y=y; self.z=z boilerplate code)

2005-07-09 Thread Kay Schluehr

Ralf W. Grosse-Kunstleve schrieb:
 My initial proposal
 (http://cci.lbl.gov/~rwgk/python/adopt_init_args_2005_07_02.html) didn't
 exactly get a warm welcome...

Well ... yes ;)

Ralf, if you want to modify the class instantiation behaviour you
should have a look on metaclasses. That's what they are for. It is not
a particular good idea to integrate a new method into the object base
class for each accidental idea and write a PEP for it.

I provide you an example which is actually your use case. It doesn't
change the class hierarchy: the metaclass semantics is not "is a" as
for inheritance, but "customizes", as one would expect also for
decorators.

class autoattr(type):
    '''
    The autoattr metaclass is used to extract auto_xxx parameters from
    the argument tuple or the keyword arguments of an object
    constructor __init__, and to create object attributes with name xxx
    and the value of auto_xxx passed into __init__
    '''
    def __init__(cls, name, bases, dct):
        super(autoattr,cls).__init__(name,bases,dct)
        old_init = cls.__init__
        defaults = cls.__init__.im_func.func_defaults
        varnames = cls.__init__.im_func.func_code.co_varnames[1:]

        def new_init(self,*args,**kwd):
            for var,default in zip(varnames[-len(defaults):],defaults):
                if var.startswith("auto_"):
                    self.__dict__[var[5:]] = default
            for var,arg in zip(varnames,args):
                if var.startswith("auto_"):
                    self.__dict__[var[5:]] = arg
            for (key,val) in kwd.items():
                if key.startswith("auto_"):
                    self.__dict__[key[5:]] = val
            old_init(self,*args,**kwd)
        cls.__init__ = new_init

class app:
    __metaclass__ = autoattr
    def __init__(self, auto_x, y, auto_z = 9):
        pass

 >>> a = app(2,5)
 >>> a.x
 2
 >>> a.z
 9
 >>> a.y
 -> AttributeError


Kay

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PPC floating equality vs. byte compilation

2005-07-09 Thread Andrew MacIntyre
Donn Cave wrote:
 I ran into a phenomenon that seemed odd to me, while testing a
 build of Python 2.4.1 on BeOS 5.04, on PowerPC 603e.
 
 test_builtin.py, for example, fails a couple of tests with errors
 claiming that apparently identical floating point values aren't equal.
 But it only does that when imported, and only when the .pyc file
 already exists.  Not if I execute it directly (python test_builtin.py),
 or if I delete the .pyc file before importing it and running test_main().
 
 For now, I'm going to just write this off as a flaky build.  I would
 be surprised if 5 people in the world care, and I'm certainly not one
 of them.  I just thought someone might find it interesting.

I have a faint recollection of seeing other references to this on other 
platforms.  That faint recollection also seems to point to it being 
something to do with the marshalling of floats (.pyc files contain 
constants in a marshalled form).  Don't think I've ever seen it myself...
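The marshal module is indeed what byte compilation uses to serialize code objects, float constants included, so a quick round-trip check is one way to probe for this class of bug on a given build (on a healthy platform the value survives exactly):

```python
import marshal

# A float that is not exactly representable in decimal, round-tripped
# through the same serialization .pyc files use.
x = 1.0 / 3.0
restored = marshal.loads(marshal.dumps(x))
print(restored == x)  # -> True on a healthy build
```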

-
Andrew I MacIntyre These thoughts are mine alone...
E-mail: [EMAIL PROTECTED]  (pref) | Snail: PO Box 370
[EMAIL PROTECTED] (alt) |Belconnen ACT 2616
Web:http://www.andymac.org/   |Australia
-- 
http://mail.python.org/mailman/listinfo/python-list


pyo contains absolute paths

2005-07-09 Thread David Siroky
Hi!

When I compile my python files with python -OO  into pyo files
then they still contain absolute paths of the source files which is
undesirable for me. How can I deal with that?

Thank you.

David

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: removing list comprehensions in Python 3.0

2005-07-09 Thread Peter Hansen
Bengt Richter wrote:
 On Fri, 08 Jul 2005 22:29:30 -0600, Steven Bethard [EMAIL PROTECTED] wrote:
(1) There's no reason to get uncomfortable even if they're removed. 
You'd just replace [] with list().
 
 So list(1, 2, 3) will be the same as [1, 2, 3] ??

No, the discussion is about list comprehensions.  [1,2,3] is not a list 
comprehension, as you know.

-Peter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pyo contains absolute paths

2005-07-09 Thread Peter Hansen
David Siroky wrote:
 When I compile my python files with python -OO  into pyo files
 then they still contain absolute paths of the source files which is
 undesirable for me. How can I deal with that?

Don't do that?

Delete the pyo files?

Stop using Python?

I could guess at a few more possibilities, but since you don't actually 
say what you *want* to happen, just what you don't want to happen, there 
are an infinite number of ways to satisfy you right now. <wink>

(Hint #1: absolute paths are always, AFAIK, put into the .pyc or .pyo 
files.)

(Hint #2: maybe explaining why you don't want this to happen would help 
too, since that will probably determine the best solution.)
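(The path stored is whatever filename was handed to compile(): absolute when the importer used an absolute path. A quick way to see this, as a sketch:)

```python
# The filename recorded in a compiled code object (and written into the
# .pyc/.pyo) is exactly the string given at compile time: absolute if an
# absolute path was used, relative otherwise.
code = compile("x = 1", "relative/path/module.py", "exec")
print(code.co_filename)  # relative/path/module.py
```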

-Peter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: removing list comprehensions in Python 3.0

2005-07-09 Thread Raymond Hettinger
In all probability, both list comprehensions and generator expressions
will be around in perpetuity.  List comps have been a very successful
language feature.

The root of this discussion has been the observation that a list
comprehension can be expressed in terms of list() and a generator
expression.   However, the former is faster when you actually want a
list result and many people (including Guido) like the square brackets.

After the advent of generators, it seemed for a while that all
functions and methods that returned lists would eventually return
iterators instead.  What we are learning is that there is a place for
both.  It is darned inconvenient to get an iterator when you really
need a list, when you want to slice the result, when you want to see a
few elements through repr(), and when you need to loop over the
contents more than once.
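The speed claim is easy to check with timeit (a sketch; exact numbers vary by machine and version, but building the list directly usually wins):

```python
import timeit

# list(genexp) funnels every element through the iterator protocol;
# the [x ...] form builds the list with a specialized code path.
lc = timeit.timeit("[x * x for x in range(1000)]", number=1000)
ge = timeit.timeit("list(x * x for x in range(1000))", number=1000)
print("listcomp %.3fs  list(genexp) %.3fs" % (lc, ge))
```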


Raymond Hettinger

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: FORTRAN like formatting

2005-07-09 Thread Cyril Bazin
Ok, Dennis, your solution may be better, but it is quite dangerous:
Python can't check whether exactly 3 arguments were passed to the
function. The generated code looks correct, but the error will only
appear when you run the Fortran side.

Cyril

On 7/9/05, Dennis Lee Bieber [EMAIL PROTECTED] wrote:
 On Fri, 8 Jul 2005 20:31:06 +0200, Cyril BAZIN [EMAIL PROTECTED]
 declaimed the following in comp.lang.python:
 
 
 
  def toTable(n1, n2, n3):
  return "%20s%20s%20s" % tuple(["%.12f" % x for x in [n1, n2, n3]])
 
 Ugh...
 
 def toTable(*ns):
     return ("%20.12f" * len(ns)) % ns
 
 >>> toTable(3.14, 10.0, 3)
 '      3.140000000000     10.000000000000      3.000000000000'
 >>> toTable(3.14, 10.0, 3, 4, 5.99)
 '      3.140000000000     10.000000000000      3.000000000000
       4.000000000000      5.990000000000'
 >>> toTable(1)
 '      1.000000000000'
 
 The second one IS one line, it just wraps in the news message.
 Using the *ns argument definition lets the language collect all
 arguments into a tuple, using * len(ns) creates enough copies of a
 single item format to handle all the arguments.
 
 Oh, a top-poster... No wonder I didn't recall seeing any
 Fortran.
 
  On 7/8/05, Einstein, Daniel R [EMAIL PROTECTED] wrote:
  
  
   Hi,
  
   Sorry for this, but I need to write ASCII from my Python to be read by
   FORTRAN and the formatting is very important. Is there any way of doing
   anything like:
  
   write(*,'(3( ,1pe20.12))') (variable)
  
 
 Which Fortran compiler? I know VMS Fortran was very friendly,
 when specifying blanks not significant or something like that... To
 read three floating numbers (regardless of format) merely required
 something like:
 
 read(*, '(bn,3f)') a, b, c
 
 (or 'bs' for blanks significant -- I forget which one enabled free
 format input processing)
 
 You aren't going to get prescaling; Python follows C format
 codes (if one doesn't use the generic %s string code and rely on Python
 to convert numerics correctly).
 
 def toTable(*ns):
     return ("%20.12e" * len(ns)) % ns
 
 >>> toTable(3.14, 10.0, 3)
 ' 3.140000000000e+000 1.000000000000e+001 3.000000000000e+000'
 
 --
   == 
 [EMAIL PROTECTED]  | Wulfraed  Dennis Lee Bieber  KD6MOG 
[EMAIL PROTECTED] |   Bestiaria Support Staff   
   == 
 Home Page: http://www.dm.net/~wulfraed/
  Overflow Page: http://wlfraed.home.netcom.com/
 --
 http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: FORTRAN like formatting

2005-07-09 Thread beliavsky
Dennis Lee Bieber wrote:

  On 7/8/05, Einstein, Daniel R [EMAIL PROTECTED] wrote:
  
  
   Hi,
  
   Sorry for this, but I need to write ASCII from my Python to be read by
   FORTRAN and the formatting is very important. Is there any way of doing
   anything like:
  
   write(*,'(3( ,1pe20.12))') (variable)
  

   Which Fortran compiler? I know VMS Fortran was very friendly,
 when specifying blanks not significant or something like that... To
 read three floating numbers (regardless of format) merely required
 something like:

   read(*, '(bn,3f)') a, b, c

 (or 'bs' for blanks significant -- I forget which one enabled free
 format input processing)

Fortran 77 and later versions have list-directed I/O, so the OP could
simply write

read (inunit,*) a,b,c

if the numbers in his input file are separated by spaces or commas. An
online reference is
http://docs.hp.com/cgi-bin/doc3k/B3150190022.12120/9 .

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: removing list comprehensions in Python 3.0

2005-07-09 Thread Bengt Richter
On Sat, 09 Jul 2005 10:16:17 -0400, Peter Hansen [EMAIL PROTECTED] wrote:

Bengt Richter wrote:
 On Fri, 08 Jul 2005 22:29:30 -0600, Steven Bethard [EMAIL PROTECTED] wrote:
(1) There's no reason to get uncomfortable even if they're removed. 
You'd just replace [] with list().
 
 So list(1, 2, 3) will be the same as [1, 2, 3] ??

No, the discussion is about list comprehensions.  [1,2,3] is not a list 
comprehension, as you know.

D'oh. Sorry to have come in from contextual outer space ;-/

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: why UnboundLocalError?

2005-07-09 Thread Bengt Richter
On 9 Jul 2005 05:26:46 -0700, [EMAIL PROTECTED] wrote:

Alex Gittens wrote:

 I'm getting an UnboundLocalError

 def fieldprint(widths,align,fields): [...]
 def cutbits(): [...]
 fields = fields[widths[i]:]

There's your problem. You are assigning 'fields' a completely new
value. Python doesn't allow you to rebind a variable from an outer
scope in an inner scope (except for the special case where you
explicitly use the 'global' directive, which is no use for the nested
scopes you are using here).

So when you assign an identifier in a function Python assumes that you
want that identifier to be a completely new local variable, *not* a
reference to the variable in the outer scope. By writing 'fields= ...'
in cutbits you are telling Python that fields is now a local variable
to cutbits. So when the function is entered, fields is a new variable
with no value yet, and when you first try to read it without writing to
it first you'll get an error.

What you probably want to do is keep 'fields' pointing to the same
list, but just change the contents of the list. So replace the assign
operation with a slicing one:

  del fields[:widths[i]]

Except the OP probably had two errors in that line, and doesn't want to slice
out fields from the field list, but rather slice characters from the i-th field,
and strings are immutable, so he can't do
   del fields[i][:widths[i]] # slice deletion illegal if fields[i] is a string
and so
   fields[i] = fields[i][:widths[i]]
would be the way to go (see my other post).
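A minimal reproduction of the scoping rule being discussed (a sketch):

```python
def outer():
    fields = ['abc', 'def']
    def rebinds():
        # 'fields' is assigned here, so Python treats it as local to
        # rebinds(); reading it on the right-hand side raises
        # UnboundLocalError before the assignment ever happens.
        fields = fields[1:]
    def mutates():
        # No assignment to the *name* 'fields', only to an element,
        # so the name still refers to the list in the enclosing scope.
        fields[0] = fields[0][1:]
    try:
        rebinds()
    except UnboundLocalError:
        print('rebinding the outer name fails')
    mutates()
    return fields

print(outer())  # ['bc', 'def']
```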

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PPC floating equality vs. byte compilation

2005-07-09 Thread Tim Peters
[Donn Cave]
 I ran into a phenomenon that seemed odd to me, while testing a
 build of Python 2.4.1 on BeOS 5.04, on PowerPC 603e.

 test_builtin.py, for example, fails a couple of tests with errors
 claiming that apparently identical floating point values aren't equal.
 But it only does that when imported, and only when the .pyc file
 already exists.  Not if I execute it directly (python test_builtin.py),
 or if I delete the .pyc file before importing it and running test_main().

It would be most helpful to open a bug report, with the output from
failing tests.  Can't guess much from the above.  In general, this can
happen if the platform C string-float routines are so poor that

eval(repr(x)) != x

for some float x, because .pyc files store repr(x) for floats in
2.4.1.  The 754 standard requires that eval(repr(x)) == x exactly for
all finite float x, and most platform C string-float routines these
days meet that requirement.
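A quick spot-check of that round-trip property on a given platform (a sketch; a conforming platform should report no failures):

```python
import random

# Sample finite floats and flag any that fail eval(repr(x)) == x.
random.seed(42)
bad = [x for x in (random.uniform(-1e6, 1e6) for _ in range(1000))
       if eval(repr(x)) != x]
print(len(bad))  # 0 where the C string<->float routines are correct
```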

 For now, I'm going to just write this off as a flaky build.  I would
 be surprised if 5 people in the world care, and I'm certainly not one
 of them.  I just thought someone might find it interesting.

There are more than 5 numeric programmers even in the Python world
<wink>, but I'm not sure there are more than 5 such using BeOS 5.04 on
PowerPC 603e.

 The stalwart few who still use BeOS are mostly using Intel x86 hardware,
 as far as I know, but the first releases were for PowerPC, at first
 on their own hardware and then for PPC Macs until Apple got nervous
 and shut them out of the hardware internals.  They use a Metrowerks
 PPC compiler that of course hasn't seen much development in the last
 6 years, probably a lot longer.

The ultimate cause is most likely in the platform C library's
string-float routines (sprintf, strtod, that kind of thing).
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: removing list comprehensions in Python 3.0

2005-07-09 Thread John Roth
Raymond Hettinger [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 In all probability, both list comprehensions and generator expressions
 will be around in perpetuity.  List comps have been a very successful
 language feature.

 The root of this discussion has been the observation that a list
 comprehension can be expressed in terms of list() and a generator
 expression.   However, the former is faster when you actually want a
 list result and many people (including Guido) like the square brackets.

 After the advent of generators, it seemed for a while that all
 functions and methods that returned lists would eventually return
 iterators instead.  What we are learning is that there is a place for
 both.  It is darned inconvenient to get an iterator when you really
 need a list, when you want to slice the result, when you want to see a
 few elements through repr(), and when you need to loop over the
 contents more than once.

I was wondering about what seemed like an ill-concieved rush to
make everything an iterator. Iterators are, of course, useful but there
are times when you really did want a list.

John Roth


 Raymond Hettinger
 

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Defending Python

2005-07-09 Thread Donn Cave
Quoth Dave Cook [EMAIL PROTECTED]:
| On 2005-07-08, Charlie Calvert [EMAIL PROTECTED] wrote:
|
| I perhaps rather foolishly wrote two article that mentioned Python as a 
| good alternative language to more popular tools such as C# or Java. I 
|
| Sounds like a really hidebound bunch over there.  Good luck.

Nah, just normal.  Evangelism is always wasted on the majority of
listeners, but to the small extent it may succeed it depends on
really acute delineation of the pitch.  It's very hard for people
to hear about something without trying to apply it directly to the
nearest equivalent thing in their own familiar context.  Say good
things about language X, and people will hear you saying "give up
using language Y and rewrite everything in language X."  Then they
will conclude that if you would say that, you don't know very much
about their environment.

Donn Cave, [EMAIL PROTECTED]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Defending Python

2005-07-09 Thread Brian
Raymond Hettinger wrote:
 So, I would recommend Python to these 
  folks as an easily acquired extra skill.


I agree 100% with your statement above.  Python may not be sufficient 
for being the only programming language that one needs to know -- yet, 
it does come in VERY handy for projects that need to perform tasks on 
non-Microsoft operating systems.  :-)

Brian
---
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Defending Python

2005-07-09 Thread Grant Edwards
On 2005-07-09, Brian [EMAIL PROTECTED] wrote:

  folks as an easily acquired extra skill.

 I agree 100% with your statement above.  Python may not be sufficient 
 for being the only programming language that one needs to know -- yet, 
 it does come in VERY handy for projects that need to perform tasks on 
 non-Microsoft operating systems.  :-)

It's also darned handy for projects that need to perform tasks
on Microsft operating systems but you want to do all the
development work under a real OS.

-- 
Grant Edwards   grante Yow!  I'm into SOFTWARE!
  at   
   visi.com
-- 
http://mail.python.org/mailman/listinfo/python-list


About undisclosed recipient

2005-07-09 Thread bartek . rylko
Hi all! Does anyone have an idea of how to set an undisclosed recipient? I put
all recipients in the BCC field, but I don't want to leave the To field
blank; an empty email address fails, and I don't
want to put my own email address there either. www.bartekrr.info

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: removing list comprehensions in Python 3.0

2005-07-09 Thread George Sakkis
Raymond Hettinger [EMAIL PROTECTED] wrote:

 In all probability, both list comprehensions and generator expressions
 will be around in perpetuity.  List comps have been a very successful
 language feature.

 The root of this discussion has been the observation that a list
 comprehension can be expressed in terms of list() and a generator
 expression.

No, the root of the discussion, in this thread at least, was the answer
to "why not dict comprehensions?", which was along the lines of "well,
you can do it in one line by dict(gen_expression)".
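For reference, that one-line spelling (a sketch):

```python
# The dict(generator_expression) form that stands in for a dict
# comprehension: each yielded item is a (key, value) pair.
words = ['apple', 'banana', 'cherry']
lengths = dict((w, len(w)) for w in words)
print(lengths)  # {'apple': 5, 'banana': 6, 'cherry': 6}
```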

 However, the former is faster when you actually want a
 list result and many people (including Guido) like the square brackets.

Also dict comprehensions are faster if you actually want a dict result,
set comprehensions for set result, and so on and so forth.

 After the advent of generators, it seemed for a while that all
 functions and methods that returned lists would eventually return
 iterators instead.  What we are learning is that there is a place for
 both.

Altering the result type of existing functions and methods is not
exactly the same with the discussion on the future of list
comprehensions; the latter affects only whether listcomps are special
enough to be granted special syntax support, when there is an
equivalent way to express the same thing. It's funny how one of the
arguments for removing lambda -- you can do the same by defining a
named function -- does not apply for list comprehensions.

 It is darned inconvenient to get an iterator when you really
 need a list, when you want to slice the result, when you want to see a
 few elements through repr(), and when you need to loop over the
 contents more than once.

 Raymond Hettinger

Similar arguments can be given for dict comprehensions as well.

George

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: why UnboundLocalError?

2005-07-09 Thread Bengt Richter
On Sat, 09 Jul 2005 06:17:20 GMT, [EMAIL PROTECTED] (Bengt Richter) wrote:

On Fri, 8 Jul 2005 21:21:36 -0500, Alex Gittens [EMAIL PROTECTED] wrote:

I'm trying to define a function that prints fields of given widths
with specified alignments; to do so, I wrote some helper functions
nested inside of the print function itself. I'm getting an
UnboundLocalError, and after reading the Naming and binding section in
the Python docs, I don't see why.

It's not clear what you are trying to do with a field string that's
wider than the specified width. Do you want to keep printing lines that
have all blank fields except where there are left-over too-wide remnants?
E.g.,
if the fields were ['left','left12345','right','12345right'] and the widths 
were [5,5,6,6] and align 'llrr'

should the printout be (first two lines below just for ruler reference)
 1234512345123456123456
 LL
 left left1 right12345r
      2345         ight

or what? I think you can get the above with more concise code :-)
but a minor mod to yours seems to do it:
[...]

Not very tested, but you may enjoy puzzling out a more concise version
(and writing a proper test for it, since it might have bugs ;-)

  >>> def fieldprint(widths, align, fields):
 ... if len(widths) != len(fields) or len(widths)!=len(align):
 ... raise ValueError, 'fieldprint argument mismatch'
 ... align = [getattr(str, j+'just') for j in align]
 ... results = []
 ... while ''.join(fields):
 ... for i, (fld, wid, alg) in enumerate(zip(fields, widths, align)):
 ... results.append(alg(fld[:wid], wid))
 ... fields[i] = fld[wid:]
 ... results.append('\n')
 ... return ''.join(results)
 ...
 >>> print fieldprint([5,5,6,6], 'llrr', ['left', 'left12345', 'right',
 ...                  '12345right'])
 left left1 right12345r
      2345         ight

BTW, since this destroys the fields list, you might want to operate on a copy
(e.g., remaining_fields = fields[:]) internally, or pass a copy to the routine.
Of course, you could put print instead of return in fieldprint and just 
call it
instead of printing what it returns as above, but you didn't show that part in 
your
original example. BTW2, note that str above is the built in string type, and 
it's
not a good idea to use a built in name for a variable the way your original 
code did.

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: calling python procedures from tcl using tclpython

2005-07-09 Thread Jonathan Ellis
Jeff Hobbs wrote:
 chand wrote:
  can anyone help me how to provide the info about the python file
  procedure in the tcl script which uses tclpython i.e., is there a way
  to import that .py file procedure in the tcl script

 currently I have wriiten this tcl code which is not working
 
 package require tclpython
 set interpreter [python::interp new]
 $interpreter eval {def test_function(): arg1,arg2} ;
 python::interp delete $interpreter

 You would call 'import' in the python interpreter, like so:
   $interpreter eval { import testfile }
 assuming it's on the module search path.  Look in the python
 docs about Modules to get all the info you need.

Actually, both your import and the original def problem need to use
exec instead of eval.  Eval works with expressions; for statements you
need exec.

I blogged a brief example of tclpython over here:
http://spyced.blogspot.com/2005/06/tale-of-wiki-diff-implementation.html

-Jonathan

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: removing list comprehensions in Python 3.0

2005-07-09 Thread John Roth
George Sakkis [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]

 It's funny how one of the
 arguments for removing lambda -- you can do the same by defining a
 named function -- does not apply for list comprehensions.

Which is a point a number of people have made many times,
with about as much effect as spitting into the wind.

Making a piece of functionality less convenient simply to
satisfy someone's sense of language esthetics doesn't seem
to me, at least, to be a really good idea.

John Roth




 George
 

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: About undisclosed recipient

2005-07-09 Thread Peter Hansen
[EMAIL PROTECTED] wrote:
 Hi all, Any one got idea about how to set undisclosed recipient? I put
 all recipient in BCC field while the To field don't want to leave
 blank. but neither fail to place an empty email address nor i don't
 want to put my own email address inside. www.bartekrr.info

Judging by the page that comes up for that link, or for your other web 
page at bartek.webd.pl, it seems likely you are looking for this 
information just to increase the amount of spam the world suffers from.

If that's not true, please explain why you want to send emails without 
the recipient names showing and without the sender showing.  Otherwise I 
hope none of us go out of our way to help.
-- 
http://mail.python.org/mailman/listinfo/python-list


Should I use if or try (as a matter of speed)?

2005-07-09 Thread Steve Juranich
I know that this topic has the potential for blowing up in my face,
but I can't help asking.  I've been using Python since 1.5.1, so I'm
not what you'd call a n00b.  I dutifully evangelize on the goodness
of Python whenever I talk with fellow developers, but I always hit a
snag when it comes to discussing the finer points of the execution
model (specifically, exceptions).

Without fail, when I start talking with some of the old-timers
(people who have written code in ADA or Fortran), I hear the same
argument that using "if" is better than using "try".  I think that
the argument goes something like, "When you set up a 'try' block, you
have to set up a lot more machinery than is necessary for just
executing a simple conditional."

I was wondering how true this holds for Python, where exceptions are
such an integral part of the execution model.  It seems to me that if
I'm executing a loop over a bunch of items, and I expect some
condition to hold for a majority of the cases, then a "try" block
would be in order, since I could eliminate a bunch of potentially
costly comparisons for each item.  But in cases where I'm only trying
a single getattr (for example), using "if" might be a cheaper way to
go.

What do I mean by "cheaper"?  I'm basically talking about the number
of instructions that are necessary to set up and execute a "try" block
as opposed to an "if" block.

Could you please tell me if I'm even remotely close to understanding
this correctly?
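One way to answer this empirically is the stdlib timeit module (a sketch; the interesting comparison is the cost when the exception actually fires versus when it doesn't):

```python
import timeit

setup = "d = {'a': 1}"

# An 'if' test pays a small constant cost on every pass; a 'try' block is
# nearly free to enter but expensive when the exception actually fires.
hit_try = timeit.timeit("try:\n d['a']\nexcept KeyError:\n pass", setup, number=100000)
hit_if = timeit.timeit("if 'a' in d:\n d['a']", setup, number=100000)
miss_try = timeit.timeit("try:\n d['b']\nexcept KeyError:\n pass", setup, number=100000)
miss_if = timeit.timeit("if 'b' in d:\n d['b']", setup, number=100000)
print(hit_try, hit_if, miss_try, miss_if)
```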
-- 
Steve Juranich
Tucson, AZ
USA
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Lisp development with macros faster than Python development?..

2005-07-09 Thread Kirk Job Sluder
Kay Schluehr [EMAIL PROTECTED] writes:

 Kirk Job Sluder schrieb:
  In what way do lisp macros prevent the creation of modular libraries?
  Common Lisp does does have mechanisms for library namespaces, and in
  practice a macro contained within a library is not that much different
  from a function contained in a library or a class contained in a
  library. Macros just provide another mechanism for creating useful
  domain-specific abstractions.
 
 To be honest I don't understand what a domain-specific abstraction
 could be? What is the benefit of abstractions if they are not
 abstracting from particular domain specific stuff?

The usual trend in higher level languages is to abstract away from the
algorithmic details into domain-specific applications.  So for example,
rather than writing a block of code for handling the regular expression
'[a-zA-Z]+, then a different block of code for the case, 'a|b|c', we
have a regular expression library that packages up the algorithm and the
implementation details into an interface.  The python standard library
is basically a collection of such abstractions.  In python you usually
work with strings as an object, rather than as an array of byte values
interpreted to be linguistic characters located at a specific memory
address as you would in c.
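In Python terms, that packaging-up looks like this (a sketch):

```python
import re

# One call covers both patterns; the matching algorithm and its
# implementation details stay hidden behind the re library's interface.
words = re.findall(r'[a-zA-Z]+', 'abc 123 xyz')
letters = re.findall(r'a|b|c', 'abc 123 xyz')
print(words)    # ['abc', 'xyz']
print(letters)  # ['a', 'b', 'c']
```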

Object oriented programming is all about creating domain-specific
abstractions for data.  This enables us to talk about GUIs as widgits
and frames in addition to filling in pixels on a screen.  Or to talk
about an email Message as a collection of data that will be stored in a
certain format, without having to do sed-like text processing.

  The primary advantage to macros is that
  you can create abstractions with functionality that is not easily
  described as either a function or a class definition.
 
 As long as macros are used to create new language features such as an
 object system like CLOS this technique may be perfectly justified for
 language developers ( ! ) but I still consider it as a bad idea to
 muddle the language development and the application development, that
 seems to be the favourite programming style of Paul Graham. On the
 other hand thinking about language development as a certain application
 domain I find nothing wrong with the idea that it once reaches somehow
 a state of a mature product that should not be altered in arbitrary
 manner for the sake of a large user community.

Well, at this point, Common Lisp has been formally standardized, so
changing the core standard would be very difficult.  There is in fact,
strong resistance to reopening the standards process at this time, based
on the impression that most of what needs to be done, can be
accomplished by developing libraries.  So I think that CL as a mature
product is not altered in an arbitrary manner.

However, from my view, quite a bit of development in python involves
adding new language constructs in the form of classes, functions, and
instance methods, as well as interfaces to C and C++ libraries.  I would
argue this is one of the core strengths of python as a language, the
fact that we are only limited to the builtin functions and standard
library if we choose to be.  As an example, whenever I work with a new
data source, I usually end up creating a class to describe the kinds of
records I get from that data source.  And some functions for things that
I find myself repeating multiple times within a program.  

Macros are just another way to write something once, and use it over and
over again.  

 
 Kay
 

-- 
Kirk Job-Sluder
The square-jawed homunculi of Tommy Hilfinger ads make every day an
existential holocaust.  --Scary Go Round
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Having trouble importing against PP2E files

2005-07-09 Thread Charles Krug
On Fri, 08 Jul 2005 22:43:55 +0300, Elmo Mäntynen [EMAIL PROTECTED]
wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 Import Error: no module named PP2E.launchmodes
 
 However if I copy launchmodes.py into my work directory, it imports
 successfully.
 
 Both Examples above and Examples\PP2E contain the __init__.py file.
 Are both Examples and PP2E packages? 

They appear to be, yes.

 In python if a folder is meant to represent a package it should iclude
 the above mentioned file __init__.py and by saying the above your
 suggesting that PP2E is a package inside the package Examples.

That appears to be the case, yes.

 If the above is correct, you should append the pythonpath with
 c:\Python24\ and refer to the wanted .py with Examples.PP2E.launchmodes.
 As such the import statement obviously should be from
 Examples.PP2E.launchmodes import PortableLauncher. If the above isn't
 the case and there is still something unclear about this, reply with a
 more detailed post about the situation.
 

The registry value is this:

C:\Python24\Lib;C:\Python24\DLLs;C:\Python24\Lib\lib-tk;
C:\Python24\Examples\PP2E

I'm not really sure what other details are relevant.  I've installed from
the Windows .msi package, and appended the directory I want to
PythonPath in the registry, and that doesn't do what I need.

This is WinXP Pro

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python Module Exposure

2005-07-09 Thread Jacob Page
George Sakkis wrote:
 Jacob Page [EMAIL PROTECTED] wrote:
 
George Sakkis wrote:

1. As already noted, ISet is not really descriptive of what the class
does. How about RangeSet ? It's not that long and I find it pretty
descriptive. In this case, it would be a good idea to change Interval
to Range to make the association easier.

The reason I decided not to use the term Range was because that could be
confused with the range function, which produces a discrete set of
integers.  Interval isn't a term used in the standard/built-in library,
AFAIK, so I may stick with it.  Is this sound reasoning?
 
 Yes, it is not unreasonable; I can't argue strongly against Interval.
 Still I'm a bit more in favor of Range and I don't think it is
 particularly confusing with range() because:
 1. Range has to be either qualified with the name of the package (e.g.
 rangesets.Range) or imported as from rangesets import Range, so one
 cannot mistake it for the range builtin.
 2. Most popular naming conventions use lower first letter for functions
 and capital for classes.
 3. If you insist that only RangeSet should be exposed from the module's
 interface and Range (Interval) should be hidden, the potential conflict
 between range() and RangeSet is even less.

Those are pretty good arguments, but after doing some poking around on 
planetmath.org and reading 
http://planetmath.org/encyclopedia/Interval.html, I've now settled on 
Interval, since that seems to be the proper use of the term.

2. The module's helper functions -- which are usually called factory
functions/methods because they are essentially alternative constructors
of ISets -- would perhaps better be defined as classmethods of ISet;
that's a common way to define instance factories in python. Except for
'eq' and 'ne', the rest define an ISet of a single Interval, so they
would rather be classmethods of Interval. Also the function names could
take some improvement; at the very least they should not be 2-letter
abbreviations.

First, as far as having the factory functions create Interval instances
(Range instances in your examples), I was hoping to avoid Intervals
being publically exposed.  I think it's cleaner if the end-programmer
only has to deal with one object interface.
 
 First off, the python convention for names you intend to be 'private'
 is to prefix them with a single underscore, i.e. _Interval, so it was
 not obvious at all by reading the documentation that this was your
 intention. Assuming that Interval is to be exposed, I  found
 Interval.lowerThan(5) a bit more intuitive than
 IntervalSet.lowerThan(5). The only slight problem is the 'ne'/
 allExcept factory which doesn't return a continuous interval and
 therefore cannot be a classmethod in Interval.

If the factories resided in separate classes, it seems like they might 
be less convenient to use.  I wanted these things to be easily 
constructed.  Maybe a good compromise is to implement lessThan and 
greaterThan in both Interval and IntervalSet.

 On whether Interval should be exposed or not: I believe that interval
 is a useful abstraction by itself and has the important property of
 being continuous, which IntervalSet doesn't. 

Perhaps I should add a boolean function for IntervalSet called 
continuous (isContinuous?).

Having a simple
 single-class interface is a valid argument, but it may turn out to be
 restricted later. For example, I was thinking that a useful method of
 IntervalSet would be an iterator over its Intervals:
 for interval in myIntervalSet:
 print interval.min, interval.max

I like the idea of allowing iteration over the Intervals.

 There are several possible use cases where dealing directly with
 intervals would be appropriate or necessary, so it's good to have them
 supported directly by the module.

I think I will keep Interval exposed.  It sort of raises a bunch of 
hard-to-answer design questions having two class interfaces, though. 
For example, would Interval.between(2, 3) + Interval.between(5, 7) raise 
an error (as it currently does) because the intervals are disjoint or 
yield an IntervalSet, or should it not even be implemented?  How about 
subtraction, xoring, and anding?  An exposed class should have a more 
complete interface.

I think that IntervalSet.between(5, 7) | IntervalSet.between(2, 3) is 
more intuitive than IntervalSet(Interval.between(5, 7), 
Interval.between(2, 3)), but I can understand the reverse.  I think I'll 
just support both.

I like the idea of lengthening the factory function names, but I'm not
sure that the factory methods would be better as class methods.
 
 One of the main reason for introducing classmethods was to allow
 alternate constructors that play well with subclasses. So if Interval
 is subclassed, say for a FrozenInterval class,
 FrozenInterval.lowerThan() returns FrozenInterval instances
 automatically.

You've convinced me :).  I sure hate to require the programmer to have 
to type the class name when using a factory 
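George's point about classmethod factories can be sketched like this (the class bodies are illustrative stand-ins for the module under discussion, not its actual code):

```python
class Interval(object):
    def __init__(self, min_, max_):
        self.min, self.max = min_, max_

    @classmethod
    def lowerThan(cls, bound):
        # 'cls' is whichever class the factory was called on, so
        # subclasses get instances of themselves for free.
        return cls(float('-inf'), bound)

class FrozenInterval(Interval):
    pass

print(type(FrozenInterval.lowerThan(5)).__name__)  # -> FrozenInterval
```

With a plain function factory hard-wired to `Interval(...)`, the subclass would have to override or re-wrap every factory instead.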

Re: About undisclosed recipient

2005-07-09 Thread Jeff Epler
You provided far too little information for us to be able to help.

If you are using smtplib, it doesn't even look at message's headers to
find the recipient list; you must use the rcpt() method to specify each
one.  If you are using the sendmail method, the to_addrs list has no
relationship to the headers of the actual message---it simply calls
rcpt() once for each address in to_addrs.  The example in the docstring
doesn't even *have* a To: header in the message!
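The header-versus-envelope distinction can be sketched in today's stdlib spelling (the addresses are made up, and the actual SMTP call is left commented out since it needs a live server):

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg['From'] = 'sender@example.com'
msg['To'] = 'shown@example.com'   # header: only what recipients *see*
msg.set_content('hello')

# The envelope recipients are a separate list; sendmail() calls rcpt()
# once per address here and never consults the To: header.
envelope = ['shown@example.com', 'hidden@example.com']

# with smtplib.SMTP('localhost') as s:
#     s.sendmail('sender@example.com', envelope, msg.as_string())

# hidden@example.com would receive a copy without appearing in the message:
print('hidden@example.com' in msg.as_string())  # -> False
```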

Jeff


-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Should I use if or try (as a matter of speed)?

2005-07-09 Thread ncf
Honestly, I'm rather new to python, but my best bet would be to create
some test code and time it.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python Module Exposure

2005-07-09 Thread George Sakkis
Jacob Page [EMAIL PROTECTED] wrote:

 George Sakkis wrote:
  There are several possible use cases where dealing directly with
  intervals would be appropriate or necessary, so it's good to have them
  supported directly by the module.

 I think I will keep Interval exposed.  It sort of raises a bunch of
 hard-to-answer design questions having two class interfaces, though.
 For example, would Interval.between(2, 3) + Interval.between(5, 7) raise
 an error (as it currently does) because the intervals are disjoint or
 yield an IntervalSet, or should it not even be implemented?  How about
 subtraction, xoring, and anding?  An exposed class should have a more
 complete interface.

 I think that IntervalSet.between(5, 7) | IntervalSet.between(2, 3) is
 more intuitive than IntervalSet(Interval.between(5, 7),
 Interval.between(2, 3)), but I can understand the reverse.  I think I'll
 just support both.

As I see it, there are two main options you have:

1. Keep Intervals immutable and pass all the responsibility of
combining them to IntervalSet. In this case Interval.__add__ would have
to go. This is simple to implement, but it's probably not the most
convenient to the user.

2. Give Interval the same interface with IntervalSet, at least as far
as interval combinations are concerned, so that Interval.between(2,3) |
Interval.greaterThan(7) returns an IntervalSet. Apart from being user
friendlier, an extra benefit is that you don't have to support
factories for IntervalSets, so I am more in favor of this option.
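Option 2 can be sketched like so (both class bodies are hypothetical stand-ins for the module under discussion):

```python
class Interval(object):
    def __init__(self, min_, max_):
        self.min, self.max = min_, max_

    def __or__(self, other):
        # Combining two continuous Intervals steps up to an IntervalSet,
        # which can represent the (possibly disjoint) union.
        return IntervalSet(self, other)

class IntervalSet(object):
    def __init__(self, *intervals):
        self.intervals = list(intervals)

s = Interval(2, 3) | Interval(7, 9)
print(type(s).__name__)  # -> IntervalSet
```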

Another hard design problem is how to combine intervals when
inheritance comes to play. Say that you have FrozenInterval and
FrozenIntervalSet subclasses. What should Interval.between(2,3) |
FrozenInterval.greaterThan(7) return ? I ran into this problem for
another datastructure that I wanted it to support both mutable and
immutable versions. I was surprised to find out that:
>>> s1 = set([1])
>>> s2 = frozenset([2])
>>> s1|s2 == s2|s1
True
>>> type(s1|s2) == type(s2|s1)
False

It make sense why this happens if you think about the implementation,
but I guess most users expect the result of a commutative operation to
be independent of the operands' order. In the worst case, this may lead
to some hard-to-find bugs.

George

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pyo contains absolute paths

2005-07-09 Thread ncf
Python is compiling the files with absolute paths because it is much
faster to load a file when you know where it is, than to have to find
it and then load it.

I'm guessing you're wondering this so you can distribute it compiled or
such? If so, I wouldn't do that in the first place. Python's compiled
files might be version/architecture dependent.

-NcF

-- 
http://mail.python.org/mailman/listinfo/python-list


How long is a piece of string? How big's a unit?

2005-07-09 Thread esther
I'm working on my monthly column for Software Test & Performance
magazine, and I'd like your input. The topic, this time around, is unit
testing (using python or anything else). Care to share some of your
hard-won knowledge with your peers?

In particular, what I'm looking for are experiences and advice about
_developing_ the unit tests. (Managing and running them is something
else again.) How big is a unit? How granular do you get? Do you have a
particular process or checklist that you follow: ensuring that your
tests look at interface, UI, etc.? Do you make a particular effort to
create unit tests for the boundary conditions?

Anecdotes are, of course, enthusiastically encouraged. Tell me what you
learned the hard way about unit testing. What mistakes have you seen
others make? What would you teach someone about the process that might
otherwise come only with experience?

On quoting you: My editors hate it when I quote "pookie-boy, whom I met
on some newsgroup." Please let me know -- privately if necessary -- how
I may refer to you in my article. (The usual format is "Kim Jones, a
programmer at BigCompany, Inc. in Denver, Colorado.")

I'll be working on this over the next several days, so I look forward
to hearing from you soon.

Esther Schindler
Contributing editor, Software Test & Performance magazine (stpmag.com)

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Should I use if or try (as a matter of speed)?

2005-07-09 Thread [EMAIL PROTECTED]
My shot would be to test it like this on your platform like this:

#!/usr/bin/env python
import datetime, time
t1 = datetime.datetime.now()
for i in [str(x) for x in range(100)]:
    if int(i) == i:
        i + 1
t2 = datetime.datetime.now()
print t2 - t1
for i in [str(x) for x in range(100)]:
    try:
        int(i) +1
    except:
        pass
t3 = datetime.datetime.now()
print t3 - t2

for me (on python 2.4.1 on Linux on a AMD Sempron 2200+) it gives:
0:00:00.000637
0:00:00.000823

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Should I use if or try (as a matter of speed)?

2005-07-09 Thread Roy Smith
Steve Juranich [EMAIL PROTECTED] wrote:
 Without fail, when I start talking with some of the old-timers
 (people who have written code in ADA or Fortran), I hear the same
 arguments that using if is better than using try.

Well, you've now got a failure.  I used to write Fortran on punch cards, so 
I guess that makes me an old-timer, and I don't agree with that argument.

 I think that the argument goes something like, When you set up a 'try' 
 block, you have to set up a lot of extra machinery than is necessary 
 just executing a simple conditional.

That sounds like a very C++ kind of attitude, where efficiency is prized 
above all else, and exception handling is relatively heavy-weight compared 
to a simple conditional.

 What do I mean by cheaper?  I'm basically talking about the number
 of instructions that are necessary to set up and execute a try block
 as opposed to an if block.

Don't worry about crap like that until the whole application is done and 
it's not running fast enough, and you've exhausted all efforts to identify 
algorithmic improvements that could be made, and careful performance 
measurements have shown that the use of try blocks is the problem.

Exceptions are better than returning an error code for several reasons:

1) They cannot be silently ignored by accident.  If you don't catch an 
exception, it bubbles up until something does catch it, or nothing does and 
your program dies with a stack trace.  You can ignore them if you want, but 
you have to explicitly write some code to do that.

2) It separates the normal flow of control from the error processing.  In 
many cases, this makes it easier to understand the program logic.

3) In some cases, they can lead to faster code.  A classic example is 
counting occurrences of items using a dictionary:

    count = {}
    for key in whatever:
        try:
            count[key] += 1
        except KeyError:
            count[key] = 1

compared to

    count = {}
    for key in whatever:
        if count.has_key(key):
            count[key] += 1
        else:
            count[key] = 1

if most keys are going to already be in the dictionary, handling the 
occasional exception will be faster than calling has_key() for every one.
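The claim is easy to spot-check with the stdlib timeit module; a sketch (timings vary by machine, so none are claimed here):

```python
import timeit

# Mostly-repeated keys: the case where the exception path is rare.
setup = "keys = ['a'] * 99 + ['b']"

eafp_stmt = """\
count = {}
for key in keys:
    try:
        count[key] += 1
    except KeyError:
        count[key] = 1
"""

lbyl_stmt = """\
count = {}
for key in keys:
    if key in count:        # Python 2 spelling: count.has_key(key)
        count[key] += 1
    else:
        count[key] = 1
"""

t_eafp = timeit.timeit(eafp_stmt, setup, number=2000)
t_lbyl = timeit.timeit(lbyl_stmt, setup, number=2000)
print("EAFP: %.4fs  LBYL: %.4fs" % (t_eafp, t_lbyl))
```

Flipping the setup so misses dominate (e.g. all-distinct keys) shows the trade-off going the other way.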
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Should I use if or try (as a matter of speed)?

2005-07-09 Thread Wezzy
[EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 My shot would be to test it like this on your platform like this:
 
 #!/usr/bin/env python
 import datetime, time
 t1 = datetime.datetime.now()
  for i in [str(x) for x in range(100)]:
      if int(i) == i:
          i + 1
  t2 = datetime.datetime.now()
  print t2 - t1
  for i in [str(x) for x in range(100)]:
      try:
          int(i) +1
      except:
          pass
  t3 = datetime.datetime.now()
  print t3 - t2
 
 for me (on python 2.4.1 on Linux on a AMD Sempron 2200+) it gives:
 0:00:00.000637
 0:00:00.000823

PowerBook:~/Desktop wezzy$ python test.py 
0:00:00.001206
0:00:00.002092

Python 2.4.1 Pb15 with Tiger
-- 
Ciao
Fabio
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Another newbie question from Nathan.

2005-07-09 Thread Nathan Pinno
  Hi all,

  How do I make Python get a def? Is it the get function, or something 
else? I need to know so that I can get a def for that computer 
MasterMind(tm) game that I'm writing.

  BTW, I took your advice, and wrote some definitions for my Giant 
Calculator program. Might make the code easier to read, but harder to code 
because I have to keep going to the top to read the menu. Not that fun, but 
necessary for a smooth program, I guess.

  Nathan Pinno

  Steven D'Aprano [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
   On Sat, 02 Jul 2005 00:25:00 -0600, Nathan Pinno wrote:
    Hi all.
    How do I make the computer generate 4 random numbers for the guess? I want
    to know because I'm writing a computer program in Python like the game
    MasterMind.
   First you get the computer to generate one random number. Then you do it
   again three more times.
   If you only need to do it once, you could do it this way:

   import random  # you need this at the top of your program
   x0 = random.random()
   x1 = random.random()
   x2 = random.random()
   x3 = random.random()

   But if you need to do it more than once, best to create a function that
   returns four random numbers in one go.

   def four_random():
       "Returns a list of four random numbers."
       L = []  # start with an empty list
       for i in range(4):
           L.append(random.random())
       return L

   and use it this way:

   rand_nums = four_random()
   # rand_nums is a list of four numbers
   print rand_nums[0]  # prints the first random number
   print rand_nums[3]  # prints the last one

   or like this:

   alpha, beta, gamma, delta = four_random()
   # four names for four separate numbers

   Steven.
   http://mail.python.org/mailman/listinfo/python-list




-- 



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PPC floating equality vs. byte compilation

2005-07-09 Thread Terry Reedy

Tim Peters [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 [Donn Cave]
 I ran into a phenomenon that seemed odd to me, while testing a
 build of Python 2.4.1 on BeOS 5.04, on PowerPC 603e.

 test_builtin.py, for example, fails a couple of tests with errors
 claiming that apparently identical floating point values aren't equal.
 But it only does that when imported, and only when the .pyc file
 already exists.  Not if I execute it directly (python test_builtin.py),
 or if I delete the .pyc file before importing it and running 
 test_main().

This is a known problem with marshalling INFs and/or NANs.  *This* has 
supposedly been fixed for 2.5.  We are assuming that the failure you report 
is for real floats.

 It would be most helpful to open a bug report, with the output from
 failing tests.

And assign to Tim.

 In general, this can happen if the platform C string-float routines
 are so poor that

 eval(repr(x)) != x
 ...
 The ultimate cause is most likely in the platform C library's
 string-float routines (sprintf, strtod, that kind of thing).

It would also be helpful if you could do some tests in plain C (no Python) 
testing, for instance, the same values that failed.  Hardly anyone else can 
;-).  If you confirm a problem with the C library, you can close the report 
after opening, leaving it as a note for anyone else working with that 
platform.
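A plain-Python spot check of that round-trip property (a sketch; on a healthy platform every assertion passes):

```python
import random

# eval(repr(x)) == x is exactly the round-trip that broken C
# string/float conversion routines would violate.
random.seed(42)
for _ in range(1000):
    x = random.uniform(-1e15, 1e15)
    assert eval(repr(x)) == x, "round-trip failed for %r" % x
print("repr/eval round-trip OK for 1000 floats")
```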

Terry J. Reedy



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Should I use if or try (as a matter of speed)?

2005-07-09 Thread Bruno Desthuilliers
[EMAIL PROTECTED] wrote:
 My shot would be to test it like this on your platform like this:
  
 #!/usr/bin/env python
 import datetime, time

Why not use the timeit module instead ?

 t1 = datetime.datetime.now()
 for i in [str(x) for x in range(100)]:

A bigger range (at least 10/100x more) would probably be better...

   if int(i) == i:

This will never be true, so next line...

   i + 1

...won't ever be executed.

 t2 = datetime.datetime.now()
 print t2 - t1
 for i in [str(x) for x in range(100)]:
   try:
   int(i) +1
   except:
   pass

This will never raise, so the addition will always be executed (it never 
will be in the previous loop).

 t3 = datetime.datetime.now()
 print t3 - t2

BTW, you end up including the time spent printing t2 - t1 in the 
timing, and IO can be (very) costly.

(snip meaningless results)

The test-before vs try-except strategy is almost a FAQ, and the usual 
answer is that it depends on the hit/miss ratio. If the (expected) 
ratio is high, try-except is better. If it's low, test-before is better.

HTH
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Defending Python

2005-07-09 Thread Bruno Desthuilliers
Grant Edwards wrote:
 On 2005-07-09, Brian [EMAIL PROTECTED] wrote:
 
 
folks as an easily acquired extra skill.

I agree 100% with your statement above.  Python may not be sufficient 
for being the only programming language that one needs to know -- yet, 
it does come in VERY handy for projects that need to perform tasks on 
non-Microsoft operating systems.  :-)
 
 
 It's also darned handy for projects that need to perform tasks
 on Microsft operating systems but you want to do all the
 development work under a real OS.
 

It's also darned handy for projects that need to perform tasks on 
Microsft operating systems.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Should I use if or try (as a matter of speed)?

2005-07-09 Thread Terry Reedy

Steve Juranich [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 Without fail, when I start talking with some of the old-timers
 (people who have written code in ADA or Fortran), I hear the same
 arguments that using if is better than using try.  I think that
 the argument goes something like, When you set up a 'try' block, you
 have to set up a lot of extra machinery than is necessary just
 executing a simple conditional.

I believe 'setting up a try block' is one bytecode (you can confirm this 
with dis.dis).  It is definitely cheaper than making a function call in an 
if condition.  Catching exceptions is the time-expensive part.  For more, 
see my response to 'witte'.

Terry J. Reedy



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Should I use if or try (as a matter of speed)?

2005-07-09 Thread Thorsten Kampe
* Steve Juranich (2005-07-09 19:21 +0100)
 I know that this topic has the potential for blowing up in my face,
 but I can't help asking.  I've been using Python since 1.5.1, so I'm
 not what you'd call a n00b.  I dutifully evangelize on the goodness
 of Python whenever I talk with fellow developers, but I always hit a
 snag when it comes to discussing the finer points of the execution
 model (specifically, exceptions).
 
 Without fail, when I start talking with some of the old-timers
 (people who have written code in ADA or Fortran), I hear the same
 arguments that using if is better than using try.  I think that
 the argument goes something like, When you set up a 'try' block, you
 have to set up a lot of extra machinery than is necessary just
 executing a simple conditional.
 
 I was wondering how true this holds for Python, where exceptions are
 such an integral part of the execution model.  It seems to me, that if
 I'm executing a loop over a bunch of items, and I expect some
 condition to hold for a majority of the cases, then a try block
 would be in order, since I could eliminate a bunch of potentially
 costly comparisons for each item.  But in cases where I'm only trying
 a single getattr (for example), using if might be a cheaper way to
 go.
 
 What do I mean by cheaper?  I'm basically talking about the number
 of instructions that are necessary to set up and execute a try block
 as opposed to an if block.

Catch errors rather than avoiding them to avoid cluttering your code
with special cases. This idiom is called EAFP ('easier to ask
forgiveness than permission'), as opposed to LBYL ('look before you
leap').

http://jaynes.colorado.edu/PythonIdioms.html
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Should I use if or try (as a matter of speed)?

2005-07-09 Thread Terry Reedy

[EMAIL PROTECTED] [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 My shot would be to test it like this on your platform like this:

 #!/usr/bin/env python
 import datetime, time
 t1 = datetime.datetime.now()
 for i in [str(x) for x in range(100)]:
     if int(i) == i:
         i + 1
 t2 = datetime.datetime.now()
 print t2 - t1
 for i in [str(x) for x in range(100)]:
     try:
         int(i) +1
     except:
         pass
 t3 = datetime.datetime.now()
 print t3 - t2

This is not a proper test since the if condition always fails and the 
addition is not done, while the try succeeds and the addition is done.  To be 
equivalent, remove the int call in the try part: try: i+1.  This would 
still not be a proper test, since catching exceptions is known to be expensive 
and try: except is meant for catching *exceptional* conditions, not 
always-bad conditions.  Here is a test that I think more useful:

for n in [1,2,3,4,5,10,20,50,100]:
  # time this
  for i in range(n):
    if i != 0: x = 1/i
    else: pass
  # versus
  for i in range(n):
    try: x = 1/i
    except ZeroDivisionError: pass

I expect this will show "if" faster for small n and "try" for large n.
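A runnable version of that sketch using the stdlib timeit module (no results claimed here; they vary by machine):

```python
import timeit

# With range(n), every iteration except i == 0 succeeds, so the
# exception cost is paid exactly once per inner loop.
for n in (1, 2, 5, 10, 100, 1000):
    t_if = timeit.timeit(
        "for i in range(n):\n"
        "    if i != 0: x = 1 / i\n"
        "    else: pass",
        "n = %d" % n, number=1000)
    t_try = timeit.timeit(
        "for i in range(n):\n"
        "    try: x = 1 / i\n"
        "    except ZeroDivisionError: pass",
        "n = %d" % n, number=1000)
    print("n=%4d  if: %.5fs  try: %.5fs" % (n, t_if, t_try))
```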

Terry J. Reedy



-- 
http://mail.python.org/mailman/listinfo/python-list


Yet Another Python Web Programming Question

2005-07-09 Thread Daniel Bickett
This post started as an incredibly long winded essay, but halfway
through I decided that was a terribly bad idea, so I've trimmed it
down dramatically, and put it in the third person (for humor's sake).

Once upon a time a boy named Hypothetical programmed in PHP and made
many a web application.

It would be a long while before he would find Python, and since that
time he would have no desire to ever touch PHP again.

He would, however, be compelled to write a web application again, but
in Python now, of course.

He would read the documentation of Nevow, Zope, and Quixote, and would
find none of them to his liking because:

* They had a learning curve, and he was not at all interested, being
eager to fulfill his new idea for the web app. It was his opinion that
web programming should feel no different from desktop programming.

* They required installation (as opposed to, simply, the placement of
modules), whereas the only pythonic freedom he had on his hosting was
a folder in his /home/ dir that was in the python system path.

* See the first point, over and over again.

All he really wanted was something that managed input (i.e. get, post)
and output (i.e. headers: Content-type:), and he would be satisfied,
because he wasn't an extravagant programmer even when he used PHP.

Python using CGI, for example, was enough for him until he started
getting 500 errors that he wasn't sure how to fix.

He is also interested in some opinions on the best/most carefree way
of interfacing with MySQL databases.

Thanks for your time,
-- 
Daniel Bickett
dbickett at gmail.com
http://heureusement.org/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Should I use if or try (as a matter of speed)?

2005-07-09 Thread John Roth
Thorsten Kampe [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
* Steve Juranich (2005-07-09 19:21 +0100)
 I know that this topic has the potential for blowing up in my face,
 but I can't help asking.  I've been using Python since 1.5.1, so I'm
 not what you'd call a n00b.  I dutifully evangelize on the goodness
 of Python whenever I talk with fellow developers, but I always hit a
 snag when it comes to discussing the finer points of the execution
 model (specifically, exceptions).

 Without fail, when I start talking with some of the old-timers
 (people who have written code in ADA or Fortran), I hear the same
 arguments that using if is better than using try.  I think that
 the argument goes something like, When you set up a 'try' block, you
 have to set up a lot of extra machinery than is necessary just
 executing a simple conditional.

 I was wondering how true this holds for Python, where exceptions are
 such an integral part of the execution model.  It seems to me, that if
 I'm executing a loop over a bunch of items, and I expect some
 condition to hold for a majority of the cases, then a try block
 would be in order, since I could eliminate a bunch of potentially
 costly comparisons for each item.  But in cases where I'm only trying
 a single getattr (for example), using if might be a cheaper way to
 go.

 What do I mean by cheaper?  I'm basically talking about the number
 of instructions that are necessary to set up and execute a try block
 as opposed to an if block.

 Catch errors rather than avoiding them to avoid cluttering your code
 with special cases. This idiom is called EAFP ('easier to ask
 forgiveness than permission'), as opposed to LBYL ('look before you
 leap').

 http://jaynes.colorado.edu/PythonIdioms.html

It depends on what you're doing, and I don't find a one size fits all
approach to be all that useful.

If execution speed is paramount and exceptions are relatively rare,
then the try block is the better approach.

If you simply want to throw an exception, then the clearest way
of writing it that I've ever found is to encapsulate the raise statement
together with the condition test in a subroutine with a name that
describes what's being tested for. Even a name as poor as
HurlOnFalseCondition(condition, exception, parms, message)
can be very enlightening. It gets rid of the in-line if and raise 
statements,
at the cost of an extra method call.

John Roth



In both approaches, you have some
error handling code that is going to clutter up your program flow.


-- 
http://mail.python.org/mailman/listinfo/python-list


ImportError: No module named numarray

2005-07-09 Thread enas khalil
dear all
could you tell me how can i fix this error appears when i try to import modules from nltk 
as follows

from nltk.probability import ConditionalFreqDist
Traceback (most recent call last):
  File "<pyshell#1>", line 1, in -toplevel-
    from nltk.probability import ConditionalFreqDist
  File "C:\Python24\Lib\site-packages\nltk\probability.py", line 56, in -toplevel-
    import types, math, numarray
ImportError: No module named numarray

thanks
-- 
http://mail.python.org/mailman/listinfo/python-list

decorators as generalized pre-binding hooks

2005-07-09 Thread Bengt Richter
;-)
We have

@deco
def foo(): pass
as sugar (unless there's an uncaught exception in the decorator) for 
def foo(): pass
foo = deco(foo)

The binding of a class name is similar, and class decorators would seem 
natural, i.e.,

@cdeco
class Foo: pass
for
class Foo: pass
Foo = cdeco(Foo)

What is happening is we are intercepting the binding of some object
and letting the decorator do something to the object before the binding occurs.

So why not

@deco
foo = lambda:pass
equivalent to
foo = deco(lambda:pass)

and from there,
@deco
left-hand-side = right-hand-side
being equivalent to
left-hand-side = deco(right-hand-side)

e.g.,
@range_check(1,5)
a = 42
for
a = range_check(1,5)(42)

or
@default_value(42) 
b = c.e['f']('g')
for
b = default_value(42)(c.e['f']('g'))
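Since decorator syntax on assignments doesn't exist, the same effect today is an explicit call; range_check below is a hypothetical validator written only for illustration:

```python
def range_check(lo, hi):
    def check(value):
        # Pass the value through unchanged if it is in range,
        # otherwise refuse to produce a value to bind.
        if not lo <= value <= hi:
            raise ValueError("%r not in [%r, %r]" % (value, lo, hi))
        return value
    return check

a = range_check(1, 5)(4)       # in range: binds 4 to a
print(a)                       # -> 4

try:
    a = range_check(1, 5)(42)  # out of range: raises instead of binding
except ValueError as e:
    print(e)                   # -> 42 not in [1, 5]
```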

Hm, binding-intercept-decoration could be sugar for catching exceptions too,
and passing them to the decorator, e.g., the above could be sugar for

try:
    b = default_value(42)(c.e['f']('g'))
except Exception, e:
    b = default_value(__exception__=e) # so decorator can check
                                       # and either return a value or just re-raise with raise [Note 1]

This might be useful for plain old function decorators too, if you wanted the 
decorator to define the policy for substituting something if e.g. a default 
argument evaluation throws an exception. Thus

@deco
def foo(x=a/b): pass # e.g., what if b==0?
as
try:
    def foo(x=a/b): pass # e.g., what if b==0?
    foo = deco(foo)
except Exception, e:
    if not deco.func_code.co_flags & 0x08: raise # avoid mysterious unexpected keyword TypeError
    foo = deco(__exception__=e)

[Note 1:]
Interestingly raise doesn't seem to have to be in the same frame or down-stack, 
so a decorator
called as above could re-raise:

 >>> def deco(**kw):
 ...     print kw
 ...     raise
 ...
 >>> try: 1/0
 ... except Exception, e: deco(__exception__=e)
 ...
 {'__exception__': <exceptions.ZeroDivisionError instance at 0x02EF190C>}
 Traceback (most recent call last):
   File "<stdin>", line 2, in ?
   File "<stdin>", line 1, in ?
 ZeroDivisionError: integer division or modulo by zero


orthogonal-musing-ly ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Should I use if or try (as a matter of speed)?

2005-07-09 Thread Thomas Lotze
Steve Juranich wrote:

 I was wondering how true this holds for Python, where exceptions are such
 an integral part of the execution model.  It seems to me, that if I'm
 executing a loop over a bunch of items, and I expect some condition to
 hold for a majority of the cases, then a try block would be in order,
 since I could eliminate a bunch of potentially costly comparisons for each
 item.

Exactly.

 But in cases where I'm only trying a single getattr (for example),
 using if might be a cheaper way to go.

Relying on exceptions is faster. In the Python world, this coding style
is called EAFP (easier to ask forgiveness than permission). You can try
it out, just do something 10**n times and measure the time it takes. Do
this twice, once with prior checking and once relying on exceptions.

And JFTR: the very example you chose gives you yet another choice:
getattr can take a default parameter.
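For instance, getattr's third argument sidesteps both the if-test and the try block entirely (a small sketch; the Config class is made up):

```python
class Config(object):
    timeout = 30

cfg = Config()

# The default is returned when the attribute is missing --
# no AttributeError is raised and no pre-check is needed.
print(getattr(cfg, 'timeout', 10))  # -> 30
print(getattr(cfg, 'retries', 3))   # -> 3
```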

 What do I mean by cheaper?  I'm basically talking about the number of
 instructions that are necessary to set up and execute a try block as
 opposed to an if block.

I don't know about the implementation of exceptions but I suspect most
of what try does doesn't happen at run-time at all, and things get
checked and looked for only if an exception did occur. And I suspect that
it's machine code that does that checking and looking, not byte code.
(Please correct me if I'm wrong, anyone with more insight.)

-- 
Thomas

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Yet Another Python Web Programming Question

2005-07-09 Thread Devan L
Take some time to learn one of the web frameworks. If your host doesn't
already have it, ask your host if they would consider adding it.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Should I use if or try (as a matter of speed)?

2005-07-09 Thread Peter Hansen
Thomas Lotze wrote:
 Steve Juranich wrote:
What do I mean by cheaper?  I'm basically talking about the number of
instructions that are necessary to set up and execute a try block as
opposed to an if block.
 
 I don't know about the implementation of exceptions but I suspect most
 of what try does doesn't happen at run-time at all, and things get
 checked and looked for only if an exception did occur. An I suspect that
 it's machine code that does that checking and looking, not byte code.
 (Please correct me if I'm wrong, anyone with more insight.)

Part right, part confusing.  Definitely try is something that happens 
at run-time, not compile time, at least in the sense of the execution of 
the corresponding byte code.  At compile time nothing much happens 
except a determination of where to jump if an exception is actually 
raised in the try block.

Try corresponds to a single bytecode SETUP_EXCEPT, so from the point of 
view of Python code it is extremely fast, especially compared to 
something like a function call (which some if-tests would do).  (There 
are also corresponding POP_BLOCK and JUMP_FORWARD instructions at the 
end of the try block, and they're even faster, though the corresponding 
if-test version would similarly have a jump of some kind involved.)

Exceptions in Python are checked for all the time, so there's little you 
can do to avoid part of the cost of that.  There is a small additional 
cost (in the C code) when the exceptional condition is actually present, 
of course, with some resulting work to create the Exception object and 
raise it.

Some analysis of this can be done trivially by anyone with a working 
interpreter, using the dis module.

def f():
    try:
        func()
    except:
        print 'ni!'

import dis
dis.dis(f)

Each line of the output represents a single bytecode instruction plus 
operands, similar to an assembly code disassembly.

To go further, get the Python source and skim through the ceval.c 
module, or do that via CVS 
http://cvs.sourceforge.net/viewcvs.py/python/python/dist/src/Python/ceval.c?rev=2.424&view=auto
, looking for the string "main loop".

And, in any case, remember that readability is almost always more 
important than optimization, and you should consider first whether one 
or the other approach is clearly more expressive (for future 
programmers, including yourself) in the specific case involved.

-Peter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ImportError: No module named numarray

2005-07-09 Thread Robert Kern
enas khalil wrote:
 dear all
 could you tell me how can i fix this error appears when i try to import 
 modules from nltk
 as follows
  
 from nltk.probability import ConditionalFreqDist
 Traceback (most recent call last):
   File "<pyshell#1>", line 1, in -toplevel-
     from nltk.probability import ConditionalFreqDist
   File "C:\Python24\Lib\site-packages\nltk\probability.py", line 56, in -toplevel-
     import types, math, numarray
 ImportError: No module named numarray

Install numarray.

http://www.stsci.edu/resources/software_hardware/numarray

-- 
Robert Kern
[EMAIL PROTECTED]

In the fields of hell where the grass grows high
  Are the graves of dreams allowed to die.
   -- Richard Harter

-- 
http://mail.python.org/mailman/listinfo/python-list


already written optparse callback for range and list arguments?

2005-07-09 Thread Alex Gittens
I would like my program to accept a list of range values on the
command line, like
-a 1
-a 1-10
-a 4,5,2

In the interest of avoiding reinventing the wheel, is there already
available code for a callback that would enable optparse to parse
these as arguments?

Thanks,
Alex
-- 
ChapterZero: http://tangentspace.net/cz/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Yet Another Python Web Programming Question

2005-07-09 Thread Dave Brueck
Daniel Bickett wrote:
 He would read the documentation of Nevow, Zope, and Quixote, and would
 find none of them to his liking because:
 
 * They had a learning curve, and he was not at all interested, being
 eager to fulfill his new idea for the web app. It was his opinion that
 web programming should feel no different from desktop programming.
 
 * They required installation (as opposed to, simply, the placement of
 modules), whereas the only pythonic freedom he had on his hosting was
 a folder in his /home/ dir that was in the python system path.

I've been playing with CherryPy lately, and it's been a lot of fun (for me, it 
meets both the above requirements - obviously anything will have a learning 
curve, but CherryPy's is very small).

-Dave
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Use cases for del

2005-07-09 Thread Scott David Daniels
Ron Adam wrote:
 George Sakkis wrote:
 
 I get:

 None: 0.54952316
 String: 0.498000144958
 is None: 0.45047684
 
 
 What do you get for "name is 'string'" expressions?

  >>> 'abc' is 'abcd'[:3]
  False

You need to test for equality (==), not identity (is) when
equal things may be distinct.  This is true for floats, strings,
and most things which are not identity-based (None, basic classes).
This is also true for longs and most ints (an optimization that makes
small ints use a single identity can lead you to a mistaken belief
that equal integers are always identical).

  >>> (12345 + 45678) is (12345 + 45678)
  False

'is' tests for identity match.  "a is b" is roughly equivalent to
id(a) == id(b).  In fact an optimization inside string comparisons
is the C equivalent of "return (id(a) == id(b) or (len(a) == len(b)
and elements match))".
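A quick self-contained check (a minimal sketch; which objects happen to
share an identity is a CPython implementation detail, so the code below
only asserts about equality):

```python
x = 12345 + 45678
y = 12345 + 45678
assert x == y             # equal values: always true
# "x is y" is NOT guaranteed: CPython caches small ints (roughly -5..256),
# so those compare identical, while larger ints are usually distinct objects.

a = 'abcd'[:3]            # a freshly built string equal to 'abc'
assert a == 'abc'         # equality holds; "a is 'abc'" may or may not
```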

--Scott David Daniels
[EMAIL PROTECTED]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Yet Another Python Web Programming Question

2005-07-09 Thread Jonathan Ellis
Daniel Bickett wrote:
 He would read the documentation of Nevow, Zope, and Quixote, and would
 find none of them to his liking because:

 * They had a learning curve, and he was not at all interested, being
 eager to fulfill his new idea for the web app. It was his opinion that
 web programming should feel no different from desktop programming.

If you're coming from a PHP background, you'll find Spyce's learning
curve even shallower.

http://spyce.sourceforge.net

-Jonathan

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Yet Another Python Web Programming Question

2005-07-09 Thread Luis M. Gonzalez
Try Karrigell ( http://karrigell.sourceforge.net ).
And let me know what you think...

Cheers,
Luis

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: __autoinit__ (Was: Proposal: reducing self.x=x; self.y=y; self.z=z boilerplate code)

2005-07-09 Thread Ralf W. Grosse-Kunstleve
--- Kay Schluehr [EMAIL PROTECTED] wrote:
 Ralf, if you want to modify the class instantiation behaviour you

I don't. I simply want to give the user a choice:

__init__(self, ...) # same as always (no modification)
or
__autoinit__(self, ...) # self.x=x job done automatically and efficiently

If someone overrides both __init__ and __autoinit__, the latter will
have no effect (unless called from the user's __init__).

 should have a look on metaclasses. That's what they are for. It is not
 a particular good idea to integrate a new method into the object base
 class for each accidental idea and write a PEP for it.
 ^^
 ??

The flurry of suggestions indicates that many people have tried to find
some way to reduce the boilerplate burden, even though everybody seems
keen to point out that it is not that big a deal. It is not that big a
deal indeed. But it is bigger than the problem solved by, e.g.,
enumerate, and I believe it can be solved with a comparable effort.

 I provide you an example

Thanks.

 which is actually your use case.

Close, but not quite.

 It doesn't
 change the class hierarchy : the metaclass semantics is not is a as
 for inheritance but customizes as one would expect also for
 decorators.
 
 class autoattr(type):
 '''
 The autoattr metaclass is used to extract auto_xxx parameters from
 the argument-tuple or the keyword arguments of an object
 constructor __init__
 and create object attributes mit name xxx and the value of auto_xxx
 passed
 into __init__
 '''
 def __init__(cls,name, bases, dct):
 super(autoattr,cls).__init__(name,bases,dct)
 old_init = cls.__init__
 defaults = cls.__init__.im_func.func_defaults
 varnames = cls.__init__.im_func.func_code.co_varnames[1:]
 
 def new_init(self,*args,**kwd):
 for var,default in zip(varnames[-len(defaults):],defaults):

This doesn't work in general. You are missing if (defaults is not None):
as a condition for entering the loop.

 if var.startswith(auto_):
 self.__dict__[var[5:]] = default
 for var,arg in zip(varnames,args):
 if var.startswith(auto_):
 self.__dict__[var[5:]] = arg
 for (key,val) in kwd.items():
 if key.startswith(auto_):
 self.__dict__[key[5:]] = val
 old_init(self,*args,**kwd)
 cls.__init__ = new_init

I couldn't easily time it because your approach changes the user
interface (what are implementation details becomes exposed as auto_),
but the code above is sure to introduce a serious runtime penalty.

I stripped your code down to the essence. See attachment.
For the user your approach then becomes:

  class grouping:
__metaclass__ = autoattr
def __init__(self, x, y, z):
  pass

My __autoinit__ suggestion would result in (assuming object supports
this by default):

  class grouping(object):
def __autoinit__(self, x, y, z):
  pass

I think that's far more intuitive.

The timings are:

  overhead: 0.00
  plain_grouping: 0.28
  update_grouping: 0.45
  plain_adopt_grouping: 0.69
  autoinit_grouping: 1.15
  autoattr_grouping: 1.01

Your approach wins out time-wise, mainly since you don't have to repeat
the expensive *.im_func.func_code.co_varnames[1:] lookup (3 or 4
getattr calls plus a slicing operation) for each object instantiation.
However, I am hopeful that a built-in C implementation could eliminate
most if not all runtime penalties compared to the hand-coded
"plain_grouping" approach.

Cheers,
Ralf





Sell on Yahoo! Auctions – no fees. Bid on great items.  
http://auctions.yahoo.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: __autoinit__ (Was: Proposal: reducing self.x=x; self.y=y; self.z=z boilerplate code)

2005-07-09 Thread Ralf W. Grosse-Kunstleve
Sorry, I forgot the attachment.




import sys, os

class plain_grouping:
  def __init__(self, x, y, z):
self.x = x
self.y = y
self.z = z

class update_grouping:
  def __init__(self, x, y, z):
self.__dict__.update(locals())
del self.self

def plain_adopt():
  frame = sys._getframe(1)
  init_locals = frame.f_locals
  self = init_locals[frame.f_code.co_varnames[0]]
  self.__dict__.update(init_locals)
  del self.self

class plain_adopt_grouping:
  def __init__(self, x, y, z):
plain_adopt()

class autoinit(object):

  def __init__(self, *args, **keyword_args):
self.__dict__.update(
  zip(self.__autoinit__.im_func.func_code.co_varnames[1:], args))
self.__dict__.update(keyword_args)
self.__autoinit__(*args, **keyword_args)

class autoinit_grouping(autoinit):

  def __autoinit__(self, x, y, z):
pass

class autoattr(type):
  def __init__(cls, name, bases, dct):
super(autoattr, cls).__init__(name, bases, dct)
old_init = cls.__init__
varnames = old_init.im_func.func_code.co_varnames[1:]
def new_init(self, *args, **keyword_args):
  self.__dict__.update(zip(varnames, args))
  self.__dict__.update(keyword_args)
  old_init(self, *args, **keyword_args)
cls.__init__ = new_init

class autoattr_grouping:
  __metaclass__ = autoattr
  def __init__(self, x, y, z):
pass

try:
  from namespace import Record
except ImportError:
  Record = None
else:
  class record_grouping(Record):
x = None
y = None
z = None

class timer:
  def __init__(self):
self.t0 = os.times()
  def get(self):
tn = os.times()
return (tn[0]+tn[1]-self.t0[0]-self.t0[1])

def time_overhead(n_repeats):
  t = timer()
  for i in xrange(n_repeats):
pass
  return t.get()

def time(method, n_repeats):
  g = method(x=1,y=2,z=3)
  assert g.x == 1
  assert g.y == 2
  assert g.z == 3
  t = timer()
  for i in xrange(n_repeats):
method(x=1,y=2,z=3)
  return t.get()

def time_all(n_repeats=10):
  print "overhead: %.2f" % time_overhead(n_repeats)
  print "plain_grouping: %.2f" % time(plain_grouping, n_repeats)
  print "update_grouping: %.2f" % time(update_grouping, n_repeats)
  print "plain_adopt_grouping: %.2f" % time(plain_adopt_grouping, n_repeats)
  print "autoinit_grouping: %.2f" % time(autoinit_grouping, n_repeats)
  print "autoattr_grouping: %.2f" % time(autoattr_grouping, n_repeats)
  if (Record is not None):
    print "record_grouping: %.2f" % time(record_grouping, n_repeats)

if (__name__ == "__main__"):
  time_all()
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: __autoinit__ (Was: Proposal: reducing self.x=x; self.y=y; self.z=z boilerplate code)

2005-07-09 Thread Scott David Daniels
Ralf W. Grosse-Kunstleve wrote:
 My initial proposal
 (http://cci.lbl.gov/~rwgk/python/adopt_init_args_2005_07_02.html) didn't
 exactly get a warm welcome...
 
 And Now for Something Completely Different:
 
 class autoinit(object):
 
   def __init__(self, *args, **keyword_args):
 self.__dict__.update(
   zip(self.__autoinit__.im_func.func_code.co_varnames[1:], args))
 self.__dict__.update(keyword_args)
 self.__autoinit__(*args, **keyword_args)
Should be:
 class autoinit(object):
 def __init__(self, *args, **keyword_args):
 for name, value in zip(self.__autoinit__.im_func.func_code.
 co_varnames[1:], args):
 setattr(self, name, value)
 for name, value in keyword_args.items():
 setattr(self, name, value)
 self.__autoinit__(*args, **keyword_args)

Since using setattr will take care of any slots used in other classes.
Not all data is stored in the __dict__.

For example:

 class Example(autoinit):
 __slots__ = 'abc',
 def __autoinit__(self, a=1, abc=1):
 print a, abc

 a = Example(1,2)
 print a.__dict__
 print a.a
 print a.abc
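The point generalizes beyond the autoinit machinery above: updating
self.__dict__ silently bypasses slot descriptors, while setattr goes
through them.  A minimal standalone sketch:

```python
class Slotted(object):
    __slots__ = ('x',)        # no per-instance __dict__ is created

s = Slotted()
setattr(s, 'x', 1)            # works: goes through the slot descriptor
assert s.x == 1
try:
    s.__dict__                # slotted instances have no __dict__ at all
except AttributeError:
    pass
```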


--Scott David Daniels
[EMAIL PROTECTED]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: __autoinit__ (Was: Proposal: reducing self.x=x; self.y=y; self.z=z boilerplate code)

2005-07-09 Thread Ralf W. Grosse-Kunstleve
--- Scott David Daniels [EMAIL PROTECTED] wrote:
 Should be:
  class autoinit(object):
  def __init__(self, *args, **keyword_args):
  for name, value in zip(self.__autoinit__.im_func.func_code.
  co_varnames[1:], args):
  setattr(self, name, value)
  for name, value in keyword_args.items():
  setattr(self, name, value)
  self.__autoinit__(*args, **keyword_args)

Thanks!
I didn't do it like this out of fear it may be too slow. But it turns out to be
faster:

  overhead: 0.00
  plain_grouping: 0.27
  update_grouping: 0.43
  plain_adopt_grouping: 0.68
  autoinit_grouping: 1.15
  autoinit_setattr_grouping: 1.08 # yours
  autoattr_grouping: 1.06

I am amazed. Very good!

Cheers,
Ralf




import sys, os

class plain_grouping:
  def __init__(self, x, y, z):
self.x = x
self.y = y
self.z = z

class update_grouping:
  def __init__(self, x, y, z):
self.__dict__.update(locals())
del self.self

def plain_adopt():
  frame = sys._getframe(1)
  init_locals = frame.f_locals
  self = init_locals[frame.f_code.co_varnames[0]]
  self.__dict__.update(init_locals)
  del self.self

class plain_adopt_grouping:
  def __init__(self, x, y, z):
plain_adopt()

class autoinit(object):

  def __init__(self, *args, **keyword_args):
self.__dict__.update(
  zip(self.__autoinit__.im_func.func_code.co_varnames[1:], args))
self.__dict__.update(keyword_args)
self.__autoinit__(*args, **keyword_args)

class autoinit_grouping(autoinit):

  def __autoinit__(self, x, y, z):
pass

class autoinit_setattr(object):
  def __init__(self, *args, **keyword_args):
for name, value in zip(self.__autoinit__.im_func.func_code.
 co_varnames[1:], args):
  setattr(self, name, value)
for name, value in keyword_args.items():
  setattr(self, name, value)
self.__autoinit__(*args, **keyword_args)

class autoinit_setattr_grouping(autoinit_setattr):

  def __autoinit__(self, x, y, z):
pass

class autoattr(type):
  def __init__(cls, name, bases, dct):
super(autoattr, cls).__init__(name, bases, dct)
old_init = cls.__init__
varnames = old_init.im_func.func_code.co_varnames[1:]
def new_init(self, *args, **keyword_args):
  self.__dict__.update(zip(varnames, args))
  self.__dict__.update(keyword_args)
  old_init(self, *args, **keyword_args)
cls.__init__ = new_init

class autoattr_grouping:
  __metaclass__ = autoattr
  def __init__(self, x, y, z):
pass

try:
  from namespace import Record
except ImportError:
  Record = None
else:
  class record_grouping(Record):
x = None
y = None
z = None

class timer:
  def __init__(self):
self.t0 = os.times()
  def get(self):
tn = os.times()
return (tn[0]+tn[1]-self.t0[0]-self.t0[1])

def time_overhead(n_repeats):
  t = timer()
  for i in xrange(n_repeats):
pass
  return t.get()

def time(method, n_repeats):
  g = method(x=1,y=2,z=3)
  assert g.x == 1
  assert g.y == 2
  assert g.z == 3
  t = timer()
  for i in xrange(n_repeats):
method(x=1,y=2,z=3)
  return t.get()

def time_all(n_repeats=10):
  print "overhead: %.2f" % time_overhead(n_repeats)
  print "plain_grouping: %.2f" % time(plain_grouping, n_repeats)
  print "update_grouping: %.2f" % time(update_grouping, n_repeats)
  print "plain_adopt_grouping: %.2f" % time(plain_adopt_grouping, n_repeats)
  print "autoinit_grouping: %.2f" % time(autoinit_grouping, n_repeats)
  print "autoinit_setattr_grouping: %.2f" % time(autoinit_setattr_grouping,
    n_repeats)
  print "autoattr_grouping: %.2f" % time(autoattr_grouping, n_repeats)
  if (Record is not None):
    print "record_grouping: %.2f" % time(record_grouping, n_repeats)

if (__name__ == "__main__"):
  time_all()
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Defending Python

2005-07-09 Thread Jorey Bump
Bruno Desthuilliers [EMAIL PROTECTED] wrote in 
news:[EMAIL PROTECTED]:

 Grant Edwards a écrit :
 On 2005-07-09, Brian [EMAIL PROTECTED] wrote:
 
 
folks as an easily acquired extra skill.

I agree 100% with your statement above.  Python may not be sufficient 
for being the only programming language that one needs to know -- yet, 
it does come in VERY handy for projects that need to perform tasks on 
non-Microsoft operating systems.  :-)
 
 
 It's also darned handy for projects that need to perform tasks
 on Microsft operating systems but you want to do all the
 development work under a real OS.
 
 
 It's also darned handy for projects that need to perform tasks on 
 Microsft operating systems.

It's also darned handy for projects that need to perform tasks... Or 
don't... And for other things besides projects...

Anyway, it's darned handy!
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Should I use if or try (as a matter of speed)?

2005-07-09 Thread John Machin
Roy Smith wrote:

 Well, you've now got a failure.  I used to write Fortran on punch cards, 

which were then fed into an OCR gadget? That's an efficient approach -- 
where I was, we had to write the FORTRAN [*] on coding sheets; KPOs 
would then produce the punched cards.

[snip]

 
 3) In some cases, they can lead to faster code.  A classic example is 
 counting occurances of items using a dictionary:
 
count = {}
for key in whatever:
   try:
  count[key] += 1
   except KeyError:
  count[key] = 1
 
 compared to
 
count = {}
for key in whatever:
   if count.hasKey(key):

Perhaps you mean has_key [*].
Perhaps you might like to try

if key in count:

It's believed to be faster (no attribute lookup, no function call).
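Spelled out, the membership-test version of the counting loop looks like
this (with dict.get as a further alternative that collapses the test and
the default into one call -- a sketch, not from the original post):

```python
def count_items(items):
    count = {}
    for key in items:
        if key in count:          # plain membership test: no attribute lookup
            count[key] += 1
        else:
            count[key] = 1
    return count

def count_items_get(items):
    count = {}
    for key in items:
        count[key] = count.get(key, 0) + 1   # one call, default of 0
    return count

assert count_items('abracadabra') == count_items_get('abracadabra')
```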

[snip]

[*] 
humanandcomputerlanguagesshouldnotimhousecaseandwordseparatorsascrutchesbuttheydosogetusedtoit
 
:-)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python Module Exposure

2005-07-09 Thread Jacob Page
George Sakkis wrote:
 Jacob Page [EMAIL PROTECTED] wrote:
 
I think I will keep Interval exposed.  It sort of raises a bunch of
hard-to-answer design questions having two class interfaces, though.
For example, would Interval.between(2, 3) + Interval.between(5, 7) raise
an error (as it currently does) because the intervals are disjoint or
yield an IntervalSet, or should it not even be implemented?  How about
subtraction, xoring, and anding?  An exposed class should have a more
complete interface.

I think that IntervalSet.between(5, 7) | IntervalSet.between(2, 3) is
more intuitive than IntervalSet(Interval.between(5, 7),
Interval.between(2, 3)), but I can understand the reverse.  I think I'll
just support both.
 
 As I see it, there are two main options you have:
 
 1. Keep Intervals immutable and pass all the responsibility of
 combining them to IntervalSet. In this case Interval.__add__ would have
 to go. This is simple to implement, but it's probably not the most
 convenient to the user.
 
 2. Give Interval the same interface with IntervalSet, at least as far
 as interval combinations are concerned, so that Interval.between(2,3) |
 Interval.greaterThan(7) returns an IntervalSet. Apart from being user
 friendlier, an extra benefit is that you don't have to support
 factories for IntervalSets, so I am more in favor of this option.

I selected option one; Intervals are immutable.  However, this doesn't 
mean that __add__ has to go, as that function has no side-effects.  The 
reason I chose option one was because it's uncommon for a mathematical 
operation on two objects to return a different type altogether.

 Another hard design problem is how to combine intervals when
 inheritance comes to play. Say that you have FrozenInterval and
 FrozenIntervalSet subclasses. What should Interval.between(2,3) |
 FrozenInterval.greaterThan(7) return ?

For now, operations will return mutable instances.  They can always be 
frozen later if needs be.
-- 
http://mail.python.org/mailman/listinfo/python-list


ANN: interval module 0.2.0 (alpha)

2005-07-09 Thread Jacob Page
After some feedback from this newsgroup, I've updated and renamed the 
iset module to the interval module.  Many of the names within the module 
have also changed, and I've refactored a lot of the code.  The updated 
version can be found at http://members.cox.net/apoco/interval/, as well 
as a change log.

Again, any suggestions would be greatly appreciated.  I especially want 
to sort out big design-level changes first.  Then I can upgrade the 
project status to beta and try to keep interface compatibility intact.

--
Jacob
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: removing list comprehensions in Python 3.0

2005-07-09 Thread Steven Bethard
Devan L wrote:
 >>> import timeit
 >>> t1 = timeit.Timer('list(i for i in xrange(10))')
 >>> t1.timeit()
 27.267753024476576
 >>> t2 = timeit.Timer('[i for i in xrange(10)]')
 >>> t2.timeit()
 15.050426800054197
 >>> t3 = timeit.Timer('list(i for i in xrange(100))')
 >>> t3.timeit()
 117.61078097914682
 >>> t4 = timeit.Timer('[i for i in xrange(100)]')
 >>> t4.timeit()
 83.502424470149151
 
 Hrm, okay, so generators are generally faster for iteration, but not
 for making lists (for small sequences), so list comprehensions stay.

Ahh, thanks. Although, it seems like a list isn't very useful if you 
never iterate over it. ;)

Also worth noting that in Python 3.0 it is quite likely that list 
comprehensions and generator expressions will have the same underlying 
implementation.  So while your tests above satisfy my curiosity 
(thanks!) they're not really an argument for retaining list 
comprehensions in Python 3.0.  And list comprehensions won't go away 
before then because removing them will break loads of existing code.

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: removing list comprehensions in Python 3.0

2005-07-09 Thread Steven Bethard
Raymond Hettinger wrote:
 The root of this discussion has been the observation that a list
 comprehension can be expressed in terms of list() and a generator
 expression.

As George Sakkis already noted, the root of the discussion was actually 
the rejection of the dict comprehensions PEP.

 However, the former is faster when you actually want a list result

I would hope that in Python 3.0 list comprehensions and generator 
expressions would be able to share a large amount of implementation, and 
thus that the speed differences would be much smaller.  But maybe not...

 and many people (including Guido) like the square brackets.
^
|
This --+ of course, is always a valid point. ;)

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Yet Another Python Web Programming Question

2005-07-09 Thread and-google
Daniel Bickett wrote:

 Python using CGI, for example, was enough for him until he started
 getting 500 errors that he wasn't sure how to fix.

Every time you mention web applications on this list, there will
necessarily be a flood of My Favourite Framework Is X posts.

But you* sound like you don't want a framework to take over the
architecture of your app and tell you what to do. And, indeed, you
don't need to do that. There are plenty of standalone modules you can
use - even ones that are masquerading as part of a framework.

I personally use my own input-stage and templating modules, along with
many others, over standard CGI, and only bother moving to a faster
server interface which can support DB connection pooling (such as
mod_python) if it's actually necessary - which is, surprisingly, not
that often. Hopefully if WSGI catches on we will have a better
interface available as standard in the future.

Not quite sure what 500 Errors you're getting, but usually 500s are
caused by unhandled exceptions, which Apache doesn't display the
traceback from (for security reasons). Bang the cgitb module in there
and you should be able to diagnose problems more easily.
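Enabling it is a two-liner at the top of the CGI script (a sketch; the
optional logdir and format arguments let you also write tracebacks to
files or emit plain text instead of HTML):

```python
import cgitb
cgitb.enable()   # replaces sys.excepthook with an HTML traceback formatter
```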

 He is also interested in some opinions on the best/most carefree way
 of interfacing with MySQL databases.

MySQLdb works fine for me:

  http://sourceforge.net/projects/mysql-python/

(* - er, I mean, Hypothetical. But Hypothetical is a girl's name!)

-- 
Andrew Clover
mailto:[EMAIL PROTECTED]
http://www.doxdesk.com/

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: already written optparse callback for range and list arguments?

2005-07-09 Thread Peter Hansen
Alex Gittens wrote:
 I would like my program to accept a list of range values on the
 command line, like
 -a 1
 -a 1-10
 -a 4,5,2
 
 In the interest of avoiding reinventing the wheel, is there already
 available code for a callback that would enable optparse to parse
 these as arguments?

Doesn't that depend on what you plan to do with them?  Do you want them 
as a special Range object, or a series of integers, or a list of tuples, 
or what?
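For what it's worth, if a flat list of ints is what you're after, a
minimal callback sketch (names here are illustrative, not from any
existing module) might be:

```python
from optparse import OptionParser

def cb_ranges(option, opt_str, value, parser):
    # Parse "1", "1-10", or "4,5,2" into a flat list of ints.
    result = []
    for part in value.split(','):
        if '-' in part:
            lo, hi = part.split('-')
            result.extend(range(int(lo), int(hi) + 1))
        else:
            result.append(int(part))
    # Accumulate across repeated -a options on the destination attribute.
    parser.values.ensure_value(option.dest, []).extend(result)

parser = OptionParser()
parser.add_option('-a', type='string', action='callback',
                  callback=cb_ranges, dest='values')
opts, args = parser.parse_args(['-a', '1-3', '-a', '7,9'])
assert opts.values == [1, 2, 3, 7, 9]
```

(Negative numbers and malformed input are left unhandled; it's only a
sketch of the callback mechanics.)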

-Peter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Should I use if or try (as a matter of speed)?

2005-07-09 Thread Steven D'Aprano
On Sat, 09 Jul 2005 23:10:49 +0200, Thomas Lotze wrote:

 Steve Juranich wrote:
 
 I was wondering how true this holds for Python, where exceptions are such
 an integral part of the execution model.  It seems to me, that if I'm
 executing a loop over a bunch of items, and I expect some condition to
 hold for a majority of the cases, then a try block would be in order,
 since I could eliminate a bunch of potentially costly comparisons for each
 item.
 
 Exactly.
 
 But in cases where I'm only trying a single getattr (for example),
 using if might be a cheaper way to go.
 
 Relying on exceptions is faster. In the Python world, this coding style
 is called EAFP (easier to ask forgiveness than permission). You can try
 it out, just do something 10**n times and measure the time it takes. Do
 this twice, once with prior checking and once relying on exceptions.

True, but only sometimes. It is easy to write a test that gives misleading
results.

In general, setting up a try...except block is cheap, but actually calling
the except clause is expensive. So in a test like this:

for i in range(1):
    try:
        x = mydict["missing key"]
    except KeyError:
        print "Failed!"

will be very slow (especially if you time the print, which is slow).

On the other hand, this will be very fast:

for i in range(1):
    try:
        x = mydict["existing key"]
    except KeyError:
        print "Failed!"

since the except is never called.

On the gripping hand, testing for errors before they happen will be slow
if errors are rare:

for i in range(1):
    if i == 0:
        print "Failed!"
    else:
        x = 1.0/i

This only fails on the very first test, and never again.

When doing your test cases, try to avoid timing things unrelated to the
thing you are actually interested in, if you can help it. Especially I/O,
including print. Do lots of loops, if you can, so as to average away
random delays due to the operating system etc. But most importantly, your
test data must reflect the real data you expect. Are most tests
successful or unsuccessful? How do you know?
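A self-contained way to run such a comparison is the timeit module,
which handles the looping and avoids timing unrelated setup (a sketch;
the absolute numbers vary by machine and Python version):

```python
import timeit

setup = "d = {'key': 1}"
stmt_eafp = "try:\n    x = d['key']\nexcept KeyError:\n    x = None"
stmt_lbyl = "if 'key' in d:\n    x = d['key']\nelse:\n    x = None"

# Each Timer runs its statement many times; the setup runs once.
t_eafp = timeit.Timer(stmt_eafp, setup).timeit(number=100000)
t_lbyl = timeit.Timer(stmt_lbyl, setup).timeit(number=100000)
print("EAFP %.4fs  LBYL %.4fs" % (t_eafp, t_lbyl))
```

Swap 'key' for a missing key in the statements to see the cost of the
exception actually being raised, which is the case the text above warns
about.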

However, in general, there are two important points to consider.

- If your code has side effects (eg changing existing objects, writing to
files, etc), then you might want to test for error conditions first.
Otherwise, you can end up with your data in an inconsistent state.

Example:

L = [3, 5, 0, 2, 7, 9]

def invert(L):
    """Changes L in place by inverting each item."""
    try:
        for i in range(len(L)):
            L[i] = 1.0/L[i]
    except ZeroDivisionError:
        pass 

invert(L)
print L

=> [0.333, 0.2, 0, 2, 7, 9]
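A check-first variant avoids leaving the list half-modified -- either
the whole list is inverted or none of it is (a sketch; the function name
is made up for illustration):

```python
def invert_checked(L):
    """Invert each item of L in place, but validate first so that a
    zero anywhere leaves L completely untouched."""
    if 0 in L:
        raise ZeroDivisionError("list contains zero")
    for i in range(len(L)):
        L[i] = 1.0 / L[i]

M = [3, 5, 0, 2, 7, 9]
try:
    invert_checked(M)
except ZeroDivisionError:
    pass
assert M == [3, 5, 0, 2, 7, 9]   # untouched, not partially inverted
```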


- Why are you optimizing your code now anyway? Get it working the simplest
way FIRST, then _time_ how long it runs. Then, if and only if it needs to
be faster, should you worry about optimizing. The simplest way will often
be try...except blocks.


-- 
Steven.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: removing list comprehensions in Python 3.0

2005-07-09 Thread Raymond Hettinger
[Steven Bethard]
 I would hope that in Python 3.0 list comprehensions and generator
 expressions would be able to share a large amount of implementation, and
 thus that the speed differences would be much smaller.  But maybe not...

Looking under the hood, you would see that the implementations are
necessarily as different as night and day.  Only the API is similar.


Raymond

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: removing list comprehensions in Python 3.0

2005-07-09 Thread Raymond Hettinger
[Raymond Hettinger]
  It is darned inconvenient to get an iterator when you really
  need a list, when you want to slice the result, when you want to see a
  few elements through repr(), and when you need to loop over the
  contents more than once.

[George Sakkis]
 Similar arguments can be given for dict comprehensions as well.

You'll find that "lever" arguments carry little weight in Python
language design ("well, you did X in place Y, so now you have to do it
everywhere, even if place Z lacks compelling use cases").

For each variant, the balance is different.  Yes, of course, list
comprehensions have pros and cons similar to set comprehensions, dict
comps, etc.  However, there are marked differences in frequency of use
cases, desirability of having an expanded form, implementation issues,
varying degrees of convenience, etc.

The utility and generality of genexps raises the bar quite high for
these other forms.  They would need to be darned frequent and have a
superb performance advantage.

Take it from the set() and deque() guy, we need set, dict, and deque
comprehensions like we need a hole in the head.  The constructor with a
genexp does the trick just fine.
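For instance, using only the existing constructors and no new syntax
(a sketch):

```python
words = ['spam', 'eggs', 'spam']

# Stands in for a "set comprehension":
lengths = set(len(w) for w in words)
# Stands in for a "dict comprehension":
by_word = dict((w, len(w)) for w in words)

assert lengths == set([4])
assert by_word == {'spam': 4, 'eggs': 4}
```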

Why the balance tips the other way for list comps is both subjective
and subtle.  I don't expect to convince you by a newsgroup post.
Rather, I can communicate how one of the core developers perceives the
issue.  IMHO, the current design strikes an optimal balance.

'nuff said,


Raymond

-- 
http://mail.python.org/mailman/listinfo/python-list


importing files from a directory

2005-07-09 Thread spike grobstein
I'm a total Python newbie, so bear with me here...

I'm writing a program that has a user-configurable, module-based
architecture. it's got a directory where modules are stored (.py files)
which subclass one of several master classes.

My plan is to have the program go into the folder called Modules and
load up each file, run the code, and get the class and append it to an
array so I can work with all of the objects later (when I need to
actually use them and whatnot).

What I'm currently doing is:

import os

print "Loading modules..."
mod_dir = "Modules"

module_list = [] #empty list...

dir_list = os.listdir(mod_dir)

for item in dir_list:
    # strip off the extensions...
    if (item == "__init__.py"):
        continue
    elif (item[-3:] == '.py'):
        mod_name = item[:-3]
    elif (item[-4:] == '.pyc'):
        mod_name = item[:-4]
    else:
        continue

    print "Loading %s..." % mod_name

    module_list.append(__import__("Modules.%s" % mod_name))

print "Done."


it works more or less like I expect, except that...

A. the first time it runs, blah.py then has a blah.pyc counterpart.
When I run the program again, it imports it twice. Not horrible, but
not what I want. is there any way around this?

B. module_list winds up consisting of items called 'Modules.blah' and
I'd like to have just blah. I realize I could say:

my_module = __import__("Modules.%s" % mod_name)
module_list.append(getattr(my_module, mod_name))

but...

is there a better way to accomplish what I'm trying to do?
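(For what it's worth, one way around the double-import problem in A is
to collapse the filenames to unique basenames before importing anything;
a sketch, with the helper name made up for illustration:)

```python
import os

def plugin_names(mod_dir="Modules"):
    # Collect unique basenames so blah.py and blah.pyc import only once.
    names = set()
    for item in os.listdir(mod_dir):
        base, ext = os.path.splitext(item)
        if ext in ('.py', '.pyc') and base != '__init__':
            names.add(base)
    return sorted(names)
```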

tia.



...spike

-- 
http://mail.python.org/mailman/listinfo/python-list


Python Forum

2005-07-09 Thread Throne Software
Throne Software has opened up a Python Forum at:

http://www.thronesoftware.com/forum/

Join us!

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python Forum

2005-07-09 Thread Devan L
I see a total of 12 posts and 8 users.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Should I use if or try (as a matter of speed)?

2005-07-09 Thread Jorey Bump
Steve Juranich [EMAIL PROTECTED] wrote in 
news:[EMAIL PROTECTED]:

 I was wondering how true this holds for Python, where exceptions are
 such an integral part of the execution model.  It seems to me, that if
 I'm executing a loop over a bunch of items, and I expect some
 condition to hold for a majority of the cases, then a try block
 would be in order, since I could eliminate a bunch of potentially
 costly comparisons for each item.  But in cases where I'm only trying
 a single getattr (for example), using if might be a cheaper way to
 go.
 
 What do I mean by cheaper?  I'm basically talking about the number
 of instructions that are necessary to set up and execute a try block
 as opposed to an if block.
 
 Could you please tell me if I'm even remotely close to understanding
 this correctly?

*If* I'm not doing a lot of things once, I *try* to do one thing a lot. 

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: removing list comprehensions in Python 3.0

2005-07-09 Thread Steven Bethard
Raymond Hettinger wrote:
 [Steven Bethard]
 
I would hope that in Python 3.0 list comprehensions and generator
expressions would be able to share a large amount of implementation, and
thus that the speed differences would be much smaller.  But maybe not...
 
 Looking under the hood, you would see that the implementations are
 necessarily as different as night and day.  Only the API is similar.

Necessarily?  It seems like list comprehensions *could* be implemented 
as a generator expression passed to the list constructor.  They're not 
now, and at the moment, changing them to work this way seems like a bad 
idea because list comprehensions would take a performance hit.  But I 
don't understand why the implementations are *necessarily* different. 
Could you explain?

STeVe

P.S. The dis.dis output for list comprehensions makes what they're doing 
pretty clear.  But dis.dis doesn't seem to give me as much information 
when looking at a generator expression:

py> def ge(items):
...     return (item for item in items if item)
...
py> dis.dis(ge)
  2           0 LOAD_CONST               1 (<code object <generator
expression> at 0116FD20, file "<interactive input>", line 2>)
              3 MAKE_FUNCTION            0
              6 LOAD_FAST                0 (items)
              9 GET_ITER
             10 CALL_FUNCTION            1
             13 RETURN_VALUE

I tried to grep through the dist\src directories for what a generator 
expression code object looks like, but without any luck.  Any chance you 
could point me in the right direction?
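
One way to get at the nested code object without grepping the source tree: the generator expression compiles to a code object stored among the enclosing function's constants, so it can be fished out and disassembled directly. A sketch for a current CPython (on 2.4 the attribute is `ge.func_code` rather than `ge.__code__`):

```python
import dis

def ge(items):
    return (item for item in items if item)

# The genexp's code object sits in the enclosing function's constants;
# find the constant that is itself a code object and disassemble it.
inner = next(const for const in ge.__code__.co_consts
             if hasattr(const, "co_code"))
print(inner.co_name)  # the compiler names it '<genexpr>'
dis.dis(inner)
```

The inner bytecode is a loop over an anonymous iterator argument, which is what makes it look so different from the in-line loop a list comprehension compiles to.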


Re: Yet Another Python Web Programming Question

2005-07-09 Thread Daniel Bickett
I neglected to mention an important fact: I am limited to Apache, which
eliminates several suggestions (that are appreciated nonetheless).
-- 
Daniel Bickett
dbickett at gmail.com
http://heureusement.org/


bsddb environment lock failure

2005-07-09 Thread Barry
I have python2.4.1 installed on two machines:
-- one is Fedora core 1, where the bsddb module works fine 
-- one is Redhat ES 3.0, and I installed mysql 4.1 (and
mysql-python2.1) after putting the newer python on the machine.

python2.2, which came with Redhat ES, works fine, so I suppose I
messed up the build.

I much appreciate any insight in how to fix this. 

Barry

Here are some details:

When trying to open (or create) a db, I get this error

  File "/opt/Python-2.4.1/Lib/bsddb/__init__.py", line 285, in hashopen
    e = _openDBEnv()
  File "/opt/Python-2.4.1/Lib/bsddb/__init__.py", line 339, in _openDBEnv
    e.open('.', db.DB_PRIVATE | db.DB_CREATE | db.DB_THREAD | db.DB_INIT_LOCK |
db.DB_INIT_MPOOL)
bsddb._db.DBError: (38, 'Function not implemented -- process-private: unable to
initialize environment lock: Function not implemented')

I tried rebuilding python2.4.1, but 'make test' shows bsddb errors.

 First, there is:

test_bsddb3 skipped -- Use of the `bsddb' resource not enabled

test_whichdb shows the environment lock error

At the end, it reports that anydbm, bsddb, shelve, and whichdb failed.

I don't know which of the differences in setup are important here, but both
python 2.4.1 installations are in /opt

both have libdb-4.1, although slightly different:

The Fedora machine has libdb from db4-4.1.25-14.rpm
The Redhat one, from db4-4.1.25-8.1 -- this one also has libdb_cxx-3.1.so
and libdb_cxx-3.3.so


Re: __autoinit__ (Was: Proposal: reducing self.x=x; self.y=y; self.z=z boilerplate code)

2005-07-09 Thread Kay Schluehr
> I stripped your code down to the essence. See attachment.
> For the user your approach then becomes:
>
>   class grouping:
>       __metaclass__ = autoattr
>       def __init__(self, x, y, z):
>           pass

No. This is clearly NOT what I had in mind. I translated your original
proposal, which introduced a punctuation syntax '.x' for constructor
parameters (forcing the interpreter to create equally named object
attributes), into a naming convention that can be handled by a metaclass
customizer. The grouping.__init__ above does exactly nothing according
to my implementation. I would never accept dropping fine-tuning
capabilities. The auto_ prefix is all the declarative magic.
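
Kay's actual implementation is in the attachment he mentions; purely as an illustration of the naming-convention idea (the details below are guessed, not his code, and use Python 3's metaclass syntax rather than the `__metaclass__` attribute of the era), a metaclass can wrap `__init__` and auto-assign every parameter spelled with an `auto_` prefix:

```python
import inspect

class autoattr(type):
    """Assign any __init__ parameter named 'auto_*' to the instance
    (minus the prefix) before the original __init__ body runs."""
    def __new__(mcs, name, bases, namespace):
        init = namespace.get("__init__")
        if init is not None:
            params = inspect.getfullargspec(init).args[1:]  # skip self
            def wrapped(self, *args, **kwargs):
                # map positional args onto parameter names, merge keywords
                bound = dict(zip(params, args), **kwargs)
                for pname, value in bound.items():
                    if pname.startswith("auto_"):
                        setattr(self, pname[len("auto_"):], value)
                init(self, *args, **kwargs)
            namespace["__init__"] = wrapped
        return super().__new__(mcs, name, bases, namespace)

class grouping(metaclass=autoattr):
    def __init__(self, auto_x, auto_y, auto_z):
        pass  # x, y, z are already set; fine-tuning would go here

g = grouping(1, 2, auto_z=3)
print(g.x, g.y, g.z)  # → 1 2 3
```

The point of the prefix is exactly the fine-tuning Kay insists on: a parameter without `auto_` is passed through untouched, so the body of `__init__` keeps full control over it.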

> My __autoinit__ suggestion would result in (assuming object supports
> this by default):
>
>   class grouping(object):
>       def __autoinit__(self, x, y, z):
>           pass
>
> I think that's far more intuitive.

Being intuitive is relative to someone's intuition.

Kay


