Re: Another newbie question

2005-12-10 Thread Mike Meyer
Steven D'Aprano <[EMAIL PROTECTED]> writes:
> On Sat, 10 Dec 2005 22:56:12 -0500, Mike Meyer wrote:
>>> Really, I don't think this makes a good poster child for your "attribute
>>> mutators make life more difficult" campaign...;-)
>> The claim is that there exist cases where that's true. This case
>> demonstrates the existence of such cases. That the sample is trivial
>> means the difficulty is trivial, so yeah, it's a miserable poster
>> child. But it's a perfectly adequate existence proof.
> Huh?
> As I see it:
> Claim: doing X makes Y hard.

Harder, not hard.

> Here is an example of doing X where Y is easy.

Y is very easy in any case. Making it incrementally harder doesn't
make it hard - it's still very easy.

> Perhaps I've missed some subtle meaning of the terms "demonstrates" and
> "existence proof".

I think you missed the original claim.

  http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Another newbie question

2005-12-10 Thread Mike Meyer
[EMAIL PROTECTED] (Alex Martelli) writes:
> Mike Meyer <[EMAIL PROTECTED]> wrote:
>> > Really, I don't think this makes a good poster child for your "attribute
>> > mutators make life more difficult" campaign...;-)
>> The claim is that there exist cases where that's true. This case
>> demonstrates the existence of such cases. That the sample is trivial
>> means the difficulty is trivial, so yeah, it's a miserable poster
>> child. But it's a perfectly adequate existence proof.
> You appear to be laboring under the serious misapprehension that you
> have demonstrated any DIFFICULTY whatsoever in writing mutators
> (specifically attribute-setters).  Let me break the bad news to you as
> diplomatically as I can: you have not.  All that your cherished example
> demonstrates is: if you're going to write a method, that method will
> need a body of at least one or two statements - in this case, I've shown
> (both in the single concrete example, and in the generalized code) that
> IF a set of attributes is interesting enough to warrant building a new
> instance based on them (if it is totally uninteresting instead, then
> imagining that you have to allow such attributes to be MUTATED on an
> existing instance, while forbidding them to be ORIGINALLY SET to create
> a new instance, borders on the delusional -- what cases would make the
> former but not the latter an important functionality?!), THEN
> implementing mutators (setters for those attributes) is trivially EASY
> (the converse of DIFFICULT) -- the couple of statements in the attribute
> setters' bodies are so trivial that they're obviously correct, assuming
> just correctness of the factory and the state-copying methods.

It's not my cherished example - it actually came from someone
else. That you can change the requirements so that there is no extra
work is immaterial - all you've done is shown that there are examples
that don't require extra work. I never said that such examples
didn't exist. All you've shown - in both the single concrete example
and in a generalized case - is that any requirement can be changed so
that it doesn't require any extra work. This doesn't change the fact
that such cases exist, which is all that I claimed was the case.

   http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Another newbie question

2005-12-10 Thread Mike Meyer
Steven D'Aprano <[EMAIL PROTECTED]> writes:
> On Sat, 10 Dec 2005 13:33:25 -0500, Mike Meyer wrote:
>> Steven D'Aprano <[EMAIL PROTECTED]> writes:
>>>> In particular,
>>>> you can get most of your meaningless methods out of a properly
>>>> designed Coordinate API. For example, add/sub_x/y_ord can all be
>>>> handled with move(delta_x = 0, delta_y = 0).
>>>
>>> Here is my example again:
>>>
>>> [quote]
>>> Then, somewhere in my application, I need twice the 
>>> value of the y ordinate. I would simply say:
>>>
>>> value = 2*pt.y
>>> [end quote]
>>>
>>> I didn't say I wanted a coordinate pair where the y ordinate was double
>>> that of the original coordinate pair. I wanted twice the y ordinate, which
>>> is a single real number, not a coordinate pair.
>>   
>> Here you're not manipulating the attribute to change the class -
>> you're just using the value of the attribute. That's what they're
>> there for.
>
> [bites tongue to avoid saying a rude word]
>
> That's what I've been saying all along!
>
> But according to the "Law" of Demeter, if you take it seriously, I
> mustn't/shouldn't do that, because I'm assuming pt.y will always have a
> __mul__ method, which is "too much coupling". My Coordinate class
> must/should create wrapper functions like this:

I think you've misunderstood the LoD. In particular, 2 * pt.y doesn't
necessarily involve violating the LoD, if it's (2).__mul__(pt.y). If
it's pt.y.__rmul__(2), then it would. But more on that later.
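A quick sketch of that dispatch (the Coordinate class here is purely illustrative): for 2 * pt.y, Python consults the left operand first, so the multiplication itself is resolved between plain numbers after a single attribute read.

```python
class Coordinate(object):
    def __init__(self, x, y):
        # plain float attributes, as in the quoted example
        self.x = float(x)
        self.y = float(y)

pt = Coordinate(3, 4)

# One attribute access, then int * float: Python tries (2).__mul__(pt.y)
# and falls back to pt.y.__rmul__(2).  Either way, only builtin numbers
# take part in the multiplication itself.
value = 2 * pt.y
print(value)   # 8.0
```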

>>> Do you lie awake at nights worrying that in Python 2.6 sys.stdout will
>>> be renamed to sys.standard_output, and that it will no longer have a
>>> write() method? According to the "law" of Demeter, you should, and the
>>> writers of the sys module should have abstracted the fact that stdout
>>> is a file away by providing a sys.write_to_stdout() function. That is
>>> precisely the sort of behaviour which I maintain is unnecessary.
>> 
>> And that's not the kind of behavior I'm talking about here, nor is it
>> the kind of behavior that the LoD is designed to help you with (those
>> are two different things).
>
> How are they different? Because one is a class and the other is a module?
> That's a meaningless distinction: you are still coupled to a particular
> behaviour of something two levels away. If the so-called Law of Demeter
> makes sense for classes, it makes sense for modules too.

And here's where I get to avoid saying a rude word. I'm not going to
chase down my original quote, but it was something along the lines of
"You shouldn't reach through multiple levels of attributes to change
things like that, it's generally considered a bad design". You asked
why, and I responded by pointing to the LoD, because it covers that,
and the underlying reasons are mostly right. I was being lazy, and
took an easy out - and got punished for it by winding up in the
position of defending the LoD.

My problem with the original code wasn't that it violated the LoD; it
was that it was reaching into the implementation in the process, and
manipulating attributes to do things that a well-designed API would do
via methods of the object.

The LoD forces you to uncouple your code from your clients, and
provide interfaces for manipulating your object other than by mucking
around with your attributes. I consider this a good thing. However, it
also prevents perfectly reasonable behavior, and there we part
company.

And of course, it doesn't ensure good design. As you demonstrated, you
can translate the API "manipulate my guts by manipulating my
attributes" into an LoD-compliant API by creating a collection of
meaningless methods. If the API design was bad to begin with, changing
the syntax doesn't make it good. What's a bad idea here is exposing
parts of your implementation to clients so they can control your
state. Whether you do that with a slew of methods for mangling the
implementation, or just grab the attribute and use it is
immaterial. The LoD tries to deal with this by outlawing such
manipulation. People respond by mechanically translating the design
into a form that follows the law.  Mechanically translating a bad
design into compliance with a design law doesn't make it a good
design.

Instead of using a vague, simple example with a design we don't agree
on, let's try taking a solid example that we both (I hope) agree is
good, and changing it to violate encapsulation.

Start with dict. Dict has an argumentless method, keys, which means we
could easily express it as an attribute. I'm going to treat it as an
attribute for this discussion, because the distinction is immaterial to
the point (and it would be an attribute in some languages), though not
to people's perceptions.

Given that, what should mydict.keys.append('foobar') do? Given the
current implementation, it appends 'foobar' to a list that started
life as a list of the keys of mydict. It doesn't do anything to
mydict; in particular, the next time you reference mydict.keys, you
won't get back the list. This is a good design. If
mydict.keys.ap
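The behaviour described can be checked directly; a sketch using the actual method call (since keys really is a method -- in the Python of this thread, dict.keys() returned a plain list; forcing a list keeps the sketch working on later versions too):

```python
mydict = {'spam': 1, 'eggs': 2}

# keys() hands back a snapshot, not a window into the dict's internals
ks = list(mydict.keys())
ks.append('foobar')

# The dict itself is untouched; asking for the keys again won't show
# 'foobar'.
print('foobar' in mydict)   # False
```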

Re: Another newbie question

2005-12-10 Thread Paul Rubin
Steven D'Aprano <[EMAIL PROTECTED]> writes:
> The fact that sys is a module and not a class is a red herring. If the
> "Law" of Demeter makes sense for classes, it makes just as much sense for
> modules as well -- it is about reducing coupling between pieces of code,
> not something specific to classes. 

I don't see that.  If a source line refers to some module you can get
instantly to the module's code.  But you can't tell where any given
class instance comes from.  That's one of the usual criticisms of OOP,
that the flow of control is obscured compared with pure procedural
programming.

> One dot good, two dots bad.

Easy to fix.  Instead of sys.stdout.write(...) say

  from sys import stdout

from then on you can use stdout.write(...) instead of sys.stdout.write.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Using XML w/ Python...

2005-12-10 Thread James
XPath is the least painful way of doing it.

Here are some samples with various libraries for XPath
http://www.oreillynet.com/pub/wlg/6225

Read XPath basics here
http://www.w3schools.com/xpath/default.asp

It is not practical and perhaps not polite to expect people write
tutorials just for you and send by email. There are a lot of tutorials
on the web on this. Just use Google.
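For instance, a minimal sketch with xml.etree.ElementTree, which supports a useful subset of XPath in find()/findall(). (It entered the standard library in Python 2.5; at the time of this thread it was the third-party ElementTree package with the same API. The document here is made up for illustration.)

```python
from xml.etree import ElementTree

doc = ElementTree.fromstring(
    "<library>"
    "<book><title>Dive Into Python</title></book>"
    "<book><title>Learning Python</title></book>"
    "</library>")

# XPath-style path: every <title> directly under a <book>
for title in doc.findall("book/title"):
    print(title.text)
```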

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Another newbie question

2005-12-10 Thread Steven D'Aprano
On Sat, 10 Dec 2005 15:46:35 +, Antoon Pardon wrote:

>> Do you lie awake at nights worrying that in Python 2.6 sys.stdout will be
>> renamed to sys.standard_output, and that it will no longer have a write()
>> method? According to the "law" of Demeter, you should, and the writers of
>> the sys module should have abstracted the fact that stdout is a file away
>> by providing a sys.write_to_stdout() function.
> 
> I find this a strange interpretation.
> 
> sys is a module, not an instance. Sure you can use the same notation
> and there are similarities but I think the differences are more
> important here.

The fact that sys is a module and not a class is a red herring. If the
"Law" of Demeter makes sense for classes, it makes just as much sense for
modules as well -- it is about reducing coupling between pieces of code,
not something specific to classes. 

The point of the "Law" of Demeter is to protect against changes in objects
more than one step away from the caller. You have some code that wants to
write to stdout, which you get from the sys module -- that puts sys one
step away, so you are allowed to rely on the published interface to sys,
but not anything further away than that: according to the so-called "law",
you shouldn't/mustn't rely on things more than one step away from the
calling code.

One dot good, two dots bad.

Assuming that stdout will always have a write() method is "bad" because it
couples your code to a particular implementation of stdout: it assumes
that it will always be a file-like object with a write method. What if the
maintainer of sys decides to change it?

Arguing that "this will never happen, it would break too much code" is
*not* good enough, not for the Law of Demeter zealots -- they will argue
that the only acceptable way to code is to create an interface to the
stdout object one level away from the calling code. Instead of calling
sys.stdout.write() (unsafe, what if the stdout object changes?) you must
use something like sys.write_to_stdout() (only one level away).
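Spelled out, the style being criticized looks like this (write_to_stdout is hypothetical -- no such function exists in the real sys module; the point is only that a helper one level away hides the fact that stdout has a write() method):

```python
import sys

def write_to_stdout(text):
    # Shields callers from knowing that sys.stdout is a file-like
    # object with a write() method.  Purely illustrative.
    sys.stdout.write(text)

write_to_stdout("one dot good\n")
```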

The fact that people can and do break the "Law" of Demeter all the time,
with no harmful effects, shows that it is no law at all. It is a
*guideline*, and as a guideline I've heard worse ideas than "keep your
options open". That's what it really boils down to: if you specify an
interface of helper functions, you can change your implementation, at the
expense of doing a lot of extra work now just in case you will need it later.
But that's not a virtue in and of itself -- it is only good if you
actually intend to change your implementation, or at least think you might
want to some day, and then only if the work needed to write your
boilerplate is less than the work needed to adapt to the changed
implementation.

[snip]

>> But I *want* other classes to poke around inside my implementation.
>> That's a virtue, not a vice. My API says:
>>
>> "In addition to the full set of methods which operate on the coordinate
>> as a whole, you can operate on the individual ordinates via instance.x
>> and instance.y which are floats."
> 
> Yikes. I would never do that. Doing so would tie my code unnecessarily
> close to yours and would make it too difficult to change to another
> class with a different implementation, like one using tuples or lists
> instead of separate x and y attributes.

Do you really think that my class and some other class written by
another person will have the same API? If you change from my class to
another class, the chances are that the interfaces will be different
unless the second class writer deliberately emulated my class interface.

To class users, there is *no* difference in consequences between me
changing my published API by removing named attributes x and y from my
class, and me changing my published API by removing or changing methods.


>> Your API says:
>>
>> "In addition to the full set of methods which operate on the coordinate
>> as a whole, you can operate on the individual ordinates via methods
>> add_x, add_y, mult_x, mult_y, sub_x, sub_y, rsub_x, rsub_y, div_x,
>> div_y, rdiv_x, rdiv_y, exp_x, exp_y, rexp_x, rexp_y...; the APIs of
>> these methods are: ... "
> 
> Who in heavens name would need those? Maybe there is no x or y because
> the implementation uses a list or a tuple, maybe the implementation uses
> polar coordinates because that is more useful for the application it
> was planned for.

And maybe it isn't a Coordinate class at all, hmmm?

An ordinary, Cartesian, real-valued Coordinate is a pair of ordinates, an
X and a Y ordinate. That's what it *is* -- a coordinate class without X and
Y ordinates isn't a coordinate class, regardless of how they are
implemented (via properties, named attributes, or a whole bucketful of
helper functions).

I'm not interested in polar coordinates, lists, dicts, red-black trees,
complex-valued infinite dimensional vectors, byte streams or any other
class. If I wanted one of those, I'd write *that* class and I wouldn't
need to access the X and 

Re: Another newbie question

2005-12-10 Thread Steven D'Aprano
On Sat, 10 Dec 2005 22:56:12 -0500, Mike Meyer wrote:

[snip]

>> Really, I don't think this makes a good poster child for your "attribute
>> mutators make life more difficult" campaign...;-)
> 
> The claim is that there exist cases where that's true. This case
> demonstrates the existence of such cases. That the sample is trivial
> means the difficulty is trivial, so yeah, it's a miserable poster
> child. But it's a perfectly adequate existence proof.

Huh?

As I see it:

Claim: doing X makes Y hard.
Here is an example of doing X where Y is easy.
Therefore that example proves that doing X makes Y hard.

Perhaps I've missed some subtle meaning of the terms "demonstrates" and
"existence proof".



-- 
Steven.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ANN: Dao Language v.0.9.6-beta is release!

2005-12-10 Thread Steven D'Aprano
On Sat, 10 Dec 2005 22:01:07 +, Zeljko Vrba wrote:

> On 2005-12-10, Tom Anderson <[EMAIL PROTECTED]> wrote:
>>
>> ED IS THE STANDARD TEXT EDITOR.
>>
> And:
>   INDENTATION
>   SUCKS
>   BIG
>   TIME.
> 
> Using indentation without block termination markers is the opposite of the way we
> write spoken language, terminating each sentence with . 

That's not true at all.

We terminate sentences with punctuation because sentences can and
frequently do go over multiple lines. Sentences are independent of
indentation. But paragraphs are not. We terminate paragraphs with
whitespace, sometimes including indentation. A very common method of
terminating paragraphs in written English is to indent the first line of a
new paragraph, with no vertical whitespace between them.

We also terminate items in lists, sometimes without punctuation, by a new
line:

item one
item two
item three

and indicate long items by a change in indentation:

item one
item two is an extremely long item 
which goes over two or more 
physical lines
item three

We sometimes indicate a block of text -- which may be one or more
paragraphs -- purely with indentation:

Blocks of quoted text are frequently delimited by a 
blank line at the top and bottom of the block, and 
indentation on the left margin. The indentation is 
necessary because the block of text may include 
multiple paragraphs.
On the other hand, vertical white space between 
paragraphs is optional. It is a common convention to
flag new paragraphs with an indentation, or if already
indented, an extra indentation.
It is even possible to have multiple levels of 
   quoting. According to Professor Joe Expert:

As a general rule, one should never indent 
more than two levels deep, even if this means
avoiding quoting text which quotes a quotation.


We also delimit larger sections of novels with whitespace. A common
convention is to use an indent and no vertical space to delimit
paragraphs, and three blank lines to delimit a block of text (for example,
when changing the point of view character). Only if that end of block
occurs at the end of the page is it replaced with punctuation,
usually a series of three asterisks.


So, in summary, your argument that block markers are necessary in English
is wrong. Only sentences use start/end markers. Words are delimited by
whitespace, paragraphs and larger blocks of text use either whitespace,
indentation or both.



-- 
Steven.

-- 
http://mail.python.org/mailman/listinfo/python-list


Using XML w/ Python...

2005-12-10 Thread Jay
OK, I have this  XML doc, i dont know much about XML, but what i want
to do is take certain parts of the XML doc, such as  blah
 and take just that and put onto a text doc. Then same thing
do the  part. Thats about it, i checked out some of the xml
modules but dont understand how to use them. Dont get parsing, so if
you could please explain working with XML and python to me. Email me at
[EMAIL PROTECTED]

Aim- jayjay08balla
MSN- [EMAIL PROTECTED]
Yahoo- raeraefad72


Thx

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ANN: Dao Language v.0.9.6-beta is release!

2005-12-10 Thread Steven D'Aprano
On Sat, 10 Dec 2005 16:34:13 +, Tom Anderson wrote:

> On Sat, 10 Dec 2005, Sybren Stuvel wrote:
> 
>> Zeljko Vrba enlightened us with:
>>
>>> Find me an editor which has folds like in VIM, regexp search/replace 
>>> within two keystrokes (ESC,:), marks to easily navigate text in 2 
>>> keystrokes (mx, 'x), can handle indentation-level matching as well as 
>>> VIM can handle {}()[], etc.  And, unlike emacs, respects all (not just 
>>> some) settings that are put in its config file. Something that works 
>>> satisfactorily out-of-the box without having to learn a new programming 
>>> language/platform (like emacs).
>>
>> Found it! VIM!
> 
> ED IS THE STANDARD TEXT EDITOR.


Huh! *Real* men edit their text files by changing bits on the hard disk by
hand with a magnetized needle.


-- 
Steven.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Managing import statements

2005-12-10 Thread Kent Johnson
Jean-Paul Calderone wrote:
> On Sat, 10 Dec 2005 13:40:12 -0500, Kent Johnson <[EMAIL PROTECTED]> wrote:
>> Do any of these tools (PyLint, PyChecker, pyflakes) work with Jython? To
>> do so they would have to work with Python 2.1, primarily...
> 
> Pyflakes will *check* Python 2.1, though you will have to run pyflakes 
> itself using Python 2.3 or newer.

In that case will it freak when it sees imports that aren't valid in 
CPython, like

import java
from javax.swing import JLabel

or is it doing syntactic analysis without trying to load the referenced 
modules?

Hmm, maybe I should just try it...
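The distinction can be checked directly: a purely syntactic checker parses source without executing it, and compile() happily accepts Jython-only imports because an ImportError could only happen if the resulting code object were actually run.

```python
# Jython-only imports: invalid to *run* under CPython, fine to *parse*.
source = "import java\nfrom javax.swing import JLabel\n"

# compile() builds a code object without executing the import statements,
# so no ImportError is raised here.
code = compile(source, "<jython-module>", "exec")
print("java" in code.co_names)
```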

Thanks,
Kent
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Another newbie question

2005-12-10 Thread Alex Martelli
Mike Meyer <[EMAIL PROTECTED]> wrote:

> [EMAIL PROTECTED] (Alex Martelli) writes:
> > def setRho(self, rho):
> >     c = self.fromPolar(rho, self.getTheta())
> >     self.x, self.y = c.x, c.y
> > def setTheta(self, theta):
> >     c = self.fromPolar(self.getRho(), theta)
> >     self.x, self.y = c.x, c.y
> >
> > That's the maximum possible "difficulty" (...if THIS was a measure of
> > real "difficulty" in programming, I doubt our jobs would be as well paid
> > as they are...;-) -- it's going to be even less if we need anyway to
> > have a method to copy a CoordinatePair instance from another, such as
> 
> It's a trivial example. Incremental extra work is pretty much
> guaranteed to be trivial as well.

You appear not to see that this triviality generalizes.  Given any set
of related attributes that among them determine non-redundantly and
uniquely the value of an instance (mathematically equivalent to forming
a primary key in a normal-form relational table), if it's at all
interesting to let those attributes be manipulated for a mutable
instance, it must be at least as important to offer an alternative ctor
or factory to create the instance from those attributes (and that
applies at least as strongly to the case of immutable instances).

Given that you have such a factory, *whatever its internal complexity*,
the supplementary amount of work to produce a setter for any subset of
the given attributes, whatever the internal representation of state used
for the instance, is bounded and indeed trivial:
a. create a new instance by calling the factory with the values of the
attributes being those of the existing instance (for attributes which
are not being changed by the current method) and the new value being set
(for attributes which are being set by the current method);
b. copy the internal state (whatever its representation may be) from the
new instance to the existing one (for many cases of classes with mutable
instances, you will already have a 'copyFrom' method doing this, anyway,
because it's useful in so many other usage contexts).

That's it -- you're done.  No *DIFFICULTY* -- indeed, a situation close
enough to boilerplate that if I found myself writing a framework with
multiple such classes I'd seriously consider refactoring it upwards into
a custom metaclass or the like, just because I dislike boilerplate as a
matter of principle.
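Spelled out as runnable code, the recipe reads as follows (a hedged sketch: the method names follow the quoted example, but the implementation details here are illustrative):

```python
import math

class CoordinatePair(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

    @classmethod
    def fromPolar(cls, rho, theta):
        # the factory: builds an instance from the alternative attributes
        return cls(rho * math.cos(theta), rho * math.sin(theta))

    def getRho(self):
        return math.hypot(self.x, self.y)

    def getTheta(self):
        return math.atan2(self.y, self.x)

    def copyFrom(self, other):
        # the state-copying method
        self.x, self.y = other.x, other.y

    def setRho(self, rho):
        # (a) build a new instance via the factory, keeping theta fixed...
        c = self.fromPolar(rho, self.getTheta())
        # (b) ...and copy its state into this one.
        self.copyFrom(c)

p = CoordinatePair(3.0, 4.0)
p.setRho(10.0)
print(p.getRho())   # ~10.0, with theta unchanged
```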

 
> > Really, I don't think this makes a good poster child for your "attribute
> > mutators make life more difficult" campaign...;-)
> 
> The claim is that there exist cases where that's true. This case
> demonstrates the existence of such cases. That the sample is trivial
> means the difficulty is trivial, so yeah, it's a miserable poster
> child. But it's a perfectly adequate existence proof.

You appear to be laboring under the serious misapprehension that you
> have demonstrated any DIFFICULTY whatsoever in writing mutators
(specifically attribute-setters).  Let me break the bad news to you as
diplomatically as I can: you have not.  All that your cherished example
demonstrates is: if you're going to write a method, that method will
need a body of at least one or two statements - in this case, I've shown
(both in the single concrete example, and in the generalized code) that
IF a set of attributes is interesting enough to warrant building a new
instance based on them (if it is totally uninteresting instead, then
imagining that you have to allow such attributes to be MUTATED on an
existing instance, while forbidding them to be ORIGINALLY SET to create
a new instance, borders on the delusional -- what cases would make the
former but not the latter an important functionality?!), THEN
implementing mutators (setters for those attributes) is trivially EASY
(the converse of DIFFICULT) -- the couple of statements in the attribute
setters' bodies are so trivial that they're obviously correct, assuming
just correctness of the factory and the state-copying methods.


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Managing import statements

2005-12-10 Thread Jean-Paul Calderone
On Sat, 10 Dec 2005 11:54:47 -0700, Shane Hathaway <[EMAIL PROTECTED]> wrote:
>Jean-Paul Calderone wrote:
>> On Sat, 10 Dec 2005 02:21:39 -0700, Shane Hathaway <[EMAIL PROTECTED]> wrote:
>>>How about PyLint / PyChecker?  Can I configure one of them to tell me
>>>only about missing / extra imports?  Last time I used one of those
>>>tools, it spewed excessively pedantic warnings.  Should I reconsider?
>>
>>
>> I use pyflakes for this: .  The 
>> *only* things it tells me about are modules that are imported but never used 
>> and names that are used but not defined.  Its false-positive rate is 
>> something like 1 in 10,000.
>
>That's definitely a good lead.  Thanks.
>
>> This is something I've long wanted to add to pyflakes (or as another feature 
>> of pyflakes/emacs integration).
>
>Is there a community around pyflakes?  If I wanted to contribute to it,
>could I?
>

A bit of one.  Things are pretty quiet (since pyflakes does pretty 
much everything it set out to do, and all the bugs seem to have been 
fixed (holy crap I'm going to regret saying that)), but if you mail
[EMAIL PROTECTED] with questions/comments/patches, or open a 
ticket in the tracker for a fix or enhancement, someone will 
definitely pay attention.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


ANN: pygene 0.2 - Genetic Programming & Algorithms Library

2005-12-10 Thread aum
Hi all,

This announcement supersedes an earlier announcement of pygene.

pygene 0.2 now supports genetic programming, in addition to the classical
Mendelian genetic algorithms of the earlier version. I thank the
respondents to the earlier announcement for inspiring me to implement GP
functionality.

http://www.freenet.org.nz/python/pygene

Feedback and suggestions most welcome.

pygene does not claim to be any better or worse than the existing Python
genetic algorithms and genetic programming libraries out there. It does
follow a certain style of OO API design which will appeal to some more
than others. For me, I find it easiest to work with - but that's just the
author being biased.

-- 

Cheers
aum


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Another newbie question

2005-12-10 Thread Steven D'Aprano
On Sat, 10 Dec 2005 13:33:25 -0500, Mike Meyer wrote:

> Steven D'Aprano <[EMAIL PROTECTED]> writes:
>>> In particular,
>>> you can get most of your meaningless methods out of a properly
>>> designed Coordinate API. For example, add/sub_x/y_ord can all be
>>> handled with move(delta_x = 0, delta_y = 0).
>>
>> Here is my example again:
>>
>> [quote]
>> Then, somewhere in my application, I need twice the 
>> value of the y ordinate. I would simply say:
>>
>> value = 2*pt.y
>> [end quote]
>>
>> I didn't say I wanted a coordinate pair where the y ordinate was double
>> that of the original coordinate pair. I wanted twice the y ordinate, which
>> is a single real number, not a coordinate pair.
>   
> Here you're not manipulating the attribute to change the class -
> you're just using the value of the attribute. That's what they're
> there for.

[bites tongue to avoid saying a rude word]

That's what I've been saying all along!

But according to the "Law" of Demeter, if you take it seriously, I
mustn't/shouldn't do that, because I'm assuming pt.y will always have a
__mul__ method, which is "too much coupling". My Coordinate class
must/should create wrapper functions like this:

class Coordinate:
    def __init__(self, x, y):
        self.x = x; self.y = y
    def mult_y(self, other):
        return self.y * other

so I am free to change the implementation (perhaps I stop using named
attributes, and use a tuple of two items instead).

I asked whether people really do this, and was told by you that they not
only do but that they should ("only with a better API design").

So we understand each other, I recognise that abstraction is a valuable
tool, and can be important. What I object to is taking a reasonable
guideline "try to keep coupling to the minimum amount practical" into an
overblown so-called law "you should always write wrapper functions to hide
layers more than one level deep, no matter how much extra boilerplate code
you end up writing".



>> The wise programmer
>> will recognise which classes have implementations likely to change, and
>> code defensively by using sufficient abstraction and encapsulation to
>> avoid later problems.
> 
> Except only the omniscennt programmer can do that perfectly.

I'm not interested in perfection, because it is unattainable. I'm
interested in balancing the needs of many different conflicting
requirements. The ability to change the implementation of my class after
I've released it is only one factor out of many. Others include initial
development time and cost, bugs, ease of maintenance, ease of
documentation, how complex an API do I expect my users to learn,
convenience of use, and others.

[snip]

>> Do you lie awake at nights worrying that in Python 2.6 sys.stdout will
>> be renamed to sys.standard_output, and that it will no longer have a
>> write() method? According to the "law" of Demeter, you should, and the
>> writers of the sys module should have abstracted the fact that stdout
>> is a file away by providing a sys.write_to_stdout() function. That is
>> precisely the sort of behaviour which I maintain is unnecessary.
> 
> And that's not the kind of behavior I'm talking about here, nor is it
> the kind of behavior that the LoD is designed to help you with (those
> are two different things).

How are they different? Because one is a class and the other is a module?
That's a meaningless distinction: you are still coupled to a particular
behaviour of something two levels away. If the so-called Law of Demeter
makes sense for classes, it makes sense for modules too.

[snip]

>> "In addition to the full set of methods which operate on the coordinate
>> as a whole, you can operate on the individual ordinates via instance.x
>> and instance.y which are floats."
> 
> That's an API which makes changing the object more difficult. It may be
> the best API for the case at hand, but you should be aware of the
> downsides.

Of course. We agree there. But it is a trade-off that can (not must, not
always) be a good trade-off, for many (not all) classes. One of the
advantages is that it takes responsibility for specifying every last thing
about ordinates within a coordinate pair out of my hands. They are floats,
that's all I need to say.

If you think I'm arguing that abstraction is always bad, I'm not. But it
is just as possible to have too much abstraction as it is to have too
little.


[snip]

> Again, this is *your* API, not mine. You're forcing an ugly, obvious API
> instead of assuming the designer has some smidgen of ability.

But isn't that the whole question? Should programmers follow slavishly the
so-called Law of Demeter to the extremes it implies, even at the cost of
writing ugly, unnecessary, silly code, or should they treat it as a
guideline, to be obeyed or not as appropriate?

Doesn't Python encourage the LoD to be treated as a guideline, by allowing
class designers to use public attributes instead of forcing them to write
tons of boilerplate code like some other languages do?
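
For what it's worth, that upgrade path is easy to show concretely. A minimal
sketch (class names are mine, not from any real codebase): a class that starts
with plain public attributes can later grow computed ones via property, and
client code reading .x and .y never notices the difference:

```python
import math

class Coordinate(object):
    """Version 1: plain public attributes, zero boilerplate."""
    def __init__(self, x, y):
        self.x = x
        self.y = y

class PolarCoordinate(object):
    """Version 2: same client-facing .x/.y, now computed on the fly."""
    def __init__(self, rho, theta):
        self.rho = rho
        self.theta = theta

    @property
    def x(self):
        return self.rho * math.cos(self.theta)

    @property
    def y(self):
        return self.rho * math.sin(self.theta)

# Client code reads .x and .y identically in both cases.
a = Coordinate(3.0, 4.0)
b = PolarCoordinate(5.0, math.atan2(4.0, 3.0))
```

That is exactly why public attributes are not the trap here that they would be
in languages without properties.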

Re: Another newbie question

2005-12-10 Thread Mike Meyer
[EMAIL PROTECTED] (Alex Martelli) writes:
> def setRho(self, rho):
>     c = self.fromPolar(rho, self.getTheta())
>     self.x, self.y = c.x, c.y
> def setTheta(self, theta):
>     c = self.fromPolar(self.getRho(), theta)
>     self.x, self.y = c.x, c.y
>
> That's the maximum possible "difficulty" (...if THIS was a measure of
> real "difficulty" in programming, I doubt our jobs would be as well paid
> as they are...;-) -- it's going to be even less if we need anyway to
> have a method to copy a CoordinatePair instance from another, such as

It's a trivial example. Incremental extra work is pretty much
guaranteed to be trivial as well.

> Really, I don't think this makes a good poster child for your "attribute
> mutators make life more difficult" campaign...;-)

The claim is that there exist cases where that's true. This case
demonstrates the existence of such cases. That the sample is trivial
means the difficulty is trivial, so yeah, it's a miserable poster
child. But it's a perfectly adequate existence proof.

 http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Managing import statements

2005-12-10 Thread Jean-Paul Calderone
On Sat, 10 Dec 2005 13:40:12 -0500, Kent Johnson <[EMAIL PROTECTED]> wrote:
>Jean-Paul Calderone wrote:
>> On Sat, 10 Dec 2005 02:21:39 -0700, Shane Hathaway
>> <[EMAIL PROTECTED]> wrote:
>>> How about PyLint / PyChecker?  Can I configure one of them to tell me
>>> only about missing / extra imports?  Last time I used one of those
>>> tools, it spewed excessively pedantic warnings.  Should I reconsider?
>>
>>
>> I use pyflakes for this.
>> The *only* things it tells me about are modules that are imported but
>> never used and names that are used but not defined.
>
>Do any of these tools (PyLint, PyChecker, pyflakes) work with Jython? To
>do so they would have to work with Python 2.1, primarily...

Pyflakes will *check* Python 2.1, though you will have to run pyflakes 
itself using Python 2.3 or newer.

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Proposal: Inline Import

2005-12-10 Thread Robert Kern
Bengt Richter wrote:

> Are you willing to type a one-letter prefix to your .re ? E.g.,
> 
>  >>> class I(object):
>  ...     def __getattr__(self, attr):
>  ...         return __import__(attr)

[snip]

> There are special caveats re imports in threads, but otherwise
> I don't know of any significant downsides to importing at various
> points of need in the code. The actual import is only done the first time,
> so it's effectively just a lookup in sys.modules from there on.
> Am I missing something?

Packages.
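
To spell out that one-word objection (this sketch assumes the `I` wrapper from
Bengt's post): `__getattr__` only ever receives the first name component, and a
package's submodules are not attributes of the package until something
actually imports them:

```python
class I(object):
    def __getattr__(self, attr):
        return __import__(attr)

I = I()

pi = I.math.pi  # fine: top-level modules work

# For dotted package paths the trick breaks down: __getattr__ sees
# only 'xml', and xml.sax is not an attribute of the xml package
# until some code imports the submodule explicitly.
try:
    I.xml.sax
    submodule_found = True
except AttributeError:
    submodule_found = False
```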

-- 
Robert Kern
[EMAIL PROTECTED]

"In the fields of hell where the grass grows high
 Are the graves of dreams allowed to die."
  -- Richard Harter

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Another newbie question

2005-12-10 Thread Erik Max Francis
Paul Rubin wrote:

> Right, you could use properties to make point.x get the real part of
> an internal complex number.  But now you're back to point.x being an
> accessor function; you've just set things up so you can call it
> without parentheses, like in Perl.  E.g.
> 
> a = point.x
> b = point.x
> assert (a is b)  # can fail
> 
> for that matter
> 
> assert (point.x is point.x) 
> 
> can fail.  These attributes aren't "member variables" any more.

Which is perfectly fine, since testing identity with `is' in this 
context is not useful.

-- 
Erik Max Francis && [EMAIL PROTECTED] && http://www.alcyone.com/max/
San Jose, CA, USA && 37 20 N 121 53 W && AIM erikmaxfrancis
   Never use two words when one will do best.
   -- Harry S. Truman
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Another newbie question

2005-12-10 Thread Alex Martelli
Paul Rubin  wrote:
   ...
> Right, you could use properties to make point.x get the real part of
> an internal complex number.  But now you're back to point.x being an
> accessor function; you've just set things up so you can call it
> without parentheses, like in Perl.  E.g.
> 
> a = point.x
> b = point.x
> assert (a is b)  # can fail

Sure -- there's no assurance of 'is' (although the straightforward
implementation in today's CPython would happen to satisfy the assert).

But similarly, nowhere in the Python specs is there any guarantee that
for any complex number c, c.real is c.real (although &c same as above).
So what?  'is', for immutables like floats, is pretty useless anyway.


> for that matter
> 
> assert (point.x is point.x) 
> 
> can fail.  These attributes aren't "member variables" any more.

They are *syntactically*, just like c.real for a complex number c: no
more, no less.  I'm not sure why you're so focused on 'is', here.  But
the point is, you could, if you wished, enable "point.x=23" even if
point held its x/y values as a complex -- just, e.g.,
  def setX(self, x):
    self.c = complex(x, self.y)
[[or use self.c.imag as the second argument if you prefer, just a style
choice]].


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Proposal: Inline Import

2005-12-10 Thread Bengt Richter
On Fri, 09 Dec 2005 12:24:59 -0700, Shane Hathaway <[EMAIL PROTECTED]> wrote:

>Here's a heretical idea.
>
>I'd like a way to import modules at the point where I need the 
>functionality, rather than remember to import ahead of time.  This might 
>eliminate a step in my coding process.  Currently, my process is I 
>change code and later scan my changes to make matching changes to the 
>import statements.   The scan step is error prone and time consuming. 
>By importing inline, I'd be able to change code without the extra scan step.
>
>Furthermore, I propose that the syntax for importing inline should be an 
>expression containing a dot followed by an optionally dotted name.  For 
>example:
>
>   name_expr = .re.compile('[a-zA-Z]+')
>
>The expression on the right causes "re.compile" to be imported before 
>calling the compile function.  It is similar to:
>
>   from re import compile as __hidden_re_compile
>   name_expr = __hidden_re_compile('[a-zA-Z]+')
>
>The example expression can be present in any module, regardless of 
>whether the module imports the "re" module or assigns a different 
>meaning to the names "re" or "compile".
>
>I also propose that inline import expressions should have no effect on 
>local or global namespaces, nor should inline import be affected by 
>local or global namespaces.  If users want to affect a namespace, they 
>must do so with additional syntax that explicitly assigns a name, such as:
>
>   compile = .re.compile

Are you willing to type a one-letter prefix to your .re ? E.g.,

 >>> class I(object):
 ...     def __getattr__(self, attr):
 ...         return __import__(attr)
 ...
 >>> I = I()
 >>> name_expr = I.re.compile('[a-zA-Z+]')
 >>> name_expr
 <_sre.SRE_Pattern object at 0x02EF4AC0>
 >>> compile = I.re.compile
 >>> compile
 <function compile at 0x...>
 >>> pi = I.math.pi
 >>> pi
 3.1415926535897931
 >>> I.math.sin(pi/6)
 0.49999999999999994

Of course it does cost you some overhead that you could avoid.

>In the interest of catching errors early, it would be useful for the 
>Python parser to produce byte code that performs the actual import upon 
>loading modules containing inline import expressions.  This would catch 
>misspelled module names early.  If the module also caches the imported 
>names in a dictionary, there would be no speed penalty for importing 
>inline rather than importing at the top of the module.
>
>I believe this could help many aspects of the language:
>
>- The coding workflow will improve, as I mentioned.
>
>- Code will become more self-contained.  Self-contained code is easier 
>to move around or post as a recipe.
>
>- There will be less desire for new builtins, since modules will be just 
>as accessible as builtins.
>
>Thoughts?
>
There are special caveats re imports in threads, but otherwise
I don't know of any significant downsides to importing at various
points of need in the code. The actual import is only done the first time,
so it's effectively just a lookup in sys.modules from there on.
Am I missing something?
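
A quick check of that claim, for anyone who wants to see the caching directly
(nothing controversial here, just sys.modules at work):

```python
import sys
import math

first = sys.modules['math']

# Executing the import statement again does not re-run the module;
# it is effectively a sys.modules lookup plus a name binding.
import math

same = math is first
```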

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Another newbie question

2005-12-10 Thread Paul Rubin
[EMAIL PROTECTED] (Alex Martelli) writes:
> > I could imagine using Python's built-in complex numbers to represent
> > 2D points.  They're immutable, last I checked.  I don't see a big
> > conflict.
> 
> No big conflict at all -- as I recall, last I checked, computation on
> complex numbers was optimized enough to make them an excellent choice
> for 2D points' internal representations.  I suspect you wouldn't want to
> *expose* them as such (e.g. by inheriting) but rather wrap them, because
> referring to the .real and .imag "coordinates" of a point (rather than
> .x and .y) IS rather weird.  Wrapping would also leave you the choice of
> making 2D coordinates a class with mutable instances, if you wish,
> reducing the choice of a complex rather than two reals to a "mere
> implementation detail";-).

Right, you could use properties to make point.x get the real part of
an internal complex number.  But now you're back to point.x being an
accessor function; you've just set things up so you can call it
without parentheses, like in Perl.  E.g.

a = point.x
b = point.x
assert (a is b)  # can fail

for that matter

assert (point.x is point.x) 

can fail.  These attributes aren't "member variables" any more.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Proposal: Inline Import

2005-12-10 Thread Alex Martelli
Erik Max Francis <[EMAIL PROTECTED]> wrote:

> Shane Hathaway wrote:
> 
> > Let me fully elaborate the heresy I'm suggesting: I am talking about
> > inline imports on every other line of code.  The obvious implementation
> > would drop performance by a double digit percentage.
> 
> Module importing is already idempotent.  If you try to import an 
> already-imported module, inline or not, the second (or subsequent) 
> imports are no-operations.

Hmmm, yes, but they're rather SLOW no-operations...:

Helen:~ alex$ python -mtimeit -s'import sys' 'import sys'
100000 loops, best of 3: 3.52 usec per loop

Now this is just a humble ultralight laptop, to be sure, but still, to
put the number in perspective...:

Helen:~ alex$ python -mtimeit -s'import sys' 'sys=23'
10000000 loops, best of 3: 0.119 usec per loop

...we ARE talking about a factor of 30 or so slower than elementary
assignments (I'm wondering whether this may depend on import hooks, or,
what else...).


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Another newbie question

2005-12-10 Thread Alex Martelli
Bernhard Herzog <[EMAIL PROTECTED]> wrote:
   ...
> > and y, obviously.  However, a framework for 2D geometry entirely based
> > on immutable-instance classes would probably be unwieldy
> 
> Skencil's basic objects for 2d geometry, points and transformations, are
> immutable.  It works fine.  Immutable objects have the great advantage of
> making reasoning about the code much easier, as they can't change behind
> your back.

Yes, that's important -- on the flip side, you may, in some cases, wish
you had mutable primitives for performance reasons (I keep daydreaming
about adding mutable-number classes to gmpy...;-)


> More complex objects such as poly bezier curves are mutable in Skencil,
> and I'm not sure anymore that that was a good design decision.  In most
> cases where a bezier curve is modified, the best approach is to simply
> build a new bezier curve anyway.  Sort of like list-comprehensions make
> it easier to "modify" a list by creating a new list based on the old
> one.

True, not for nothing were list comprehensions copied from the
functional language Haskell -- they work wonderfully well with immutable
data, unsurprisingly;-).  However, what if (e.g.) one anchor point
within the spline is being moved interactively?  I have no hard data,
just a suspicion that modifying the spline may be more efficient than
generating and tossing away a lot of immutable splines...


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Another newbie question

2005-12-10 Thread Alex Martelli
Paul Rubin  wrote:
   ...
> I could imagine using Python's built-in complex numbers to represent
> 2D points.  They're immutable, last I checked.  I don't see a big
> conflict.

No big conflict at all -- as I recall, last I checked, computation on
complex numbers was optimized enough to make them an excellent choice
for 2D points' internal representations.  I suspect you wouldn't want to
*expose* them as such (e.g. by inheriting) but rather wrap them, because
referring to the .real and .imag "coordinates" of a point (rather than
.x and .y) IS rather weird.  Wrapping would also leave you the choice of
making 2D coordinates a class with mutable instances, if you wish,
reducing the choice of a complex rather than two reals to a "mere
implementation detail";-).
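
A minimal sketch of such a wrapper (class and attribute names are mine): the
complex number stays a private implementation detail, while clients see .x
and .y:

```python
class Point(object):
    """2D point stored internally as a complex number; the wrapper
    keeps that as a 'mere implementation detail'."""
    def __init__(self, x, y):
        self._c = complex(x, y)

    @property
    def x(self):
        return self._c.real

    @property
    def y(self):
        return self._c.imag

    def __add__(self, other):
        # Vector addition rides for free on complex arithmetic.
        s = self._c + other._c
        return Point(s.real, s.imag)

p = Point(1.0, 2.0) + Point(3.0, 4.0)
```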

The only issue I can think of: I believe (I could be wrong) that a
Python implementation might be built with complex numbers disabled (just
like, e.g., it might be built with unicode disabled).  If that's indeed
the case, I might not want to risk, for the sake of a little
optimization, my 2D geometry framework not working on some little
cellphone or PDA or whatever...;-)


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Another newbie question

2005-12-10 Thread Alex Martelli
Mike Meyer <[EMAIL PROTECTED]> wrote:
   ...
> Take our much-abused coordinate example, and assume you've exposed the
> x and y coordinates as attributes.
> 
> Now we have a changing requirement - we want to get to make the polar
> coordinates available. To keep the API consistent, they should be
> another pair of attributes, r and theta. Thanks to Pythons nice
> properties, we can implement these with a pair of getters, and compute
> them on the fly.
> 
> If x and y can't be manipulated individually, you're done. If they
> can, you have more work to do. If nothing else, you have to decide
> that you're going to provide an incomplete interface, in that users
> will be able to manipulate the object with some attributes but not
> others for no obvious good reason. To avoid that, you'll have to add
> code to run the coordinate transformations in reverse, which wouldn't
> otherwise be needed. Properties make this possible, which is a great
> thing.

Properties make this _easier_ (but you could do it before properties
were added to Python, via __setattr__ -- just less conveniently and
directly) -- just as easy as setX, setY, setRho, and setTheta would (in
fact, we're likely to have some of those methods under our properties,
so the difference is all in ease of USE, for the client code, not ease
of IMPLEMENTATION, compared to setter-methods).

If we keep the internal representation in cartesian coordinates
(attributes x and y), and decide that it would interfere with the
class's usefulness to have rho and theta read-only (i.e., that it IS
useful for the user of the class to be able to manipulate them
directly), we do indeed need to "add code" -- the setter methods setRho
and setTheta.  But let's put that in perspective. If we instead wanted
to make the CoordinatePair class immutable, we'd STILL have to offer an
alternative constructor or factory-function -- if it's at all useful to
manipulate rho and theta in a mutable class, it must be at least as
useful to be able to construct an immutable version from rho and theta,
after all.  So, we ARE going to have, say, a classmethod (note: all the
code in this post is untested)...:

class CoordinatePair(object):
    def fromPolar(cls, rho, theta):
        assert rho >= 0
        return cls(rho*math.cos(theta), rho*math.sin(theta))
    fromPolar = classmethod(fromPolar)
    # etc etc, the rest of this class

well, then, how much more code are we adding, to implement setRho and
setTheta when we decide to make our class mutable?  Here...:

    def setRho(self, rho):
        c = self.fromPolar(rho, self.getTheta())
        self.x, self.y = c.x, c.y
    def setTheta(self, theta):
        c = self.fromPolar(self.getRho(), theta)
        self.x, self.y = c.x, c.y

That's the maximum possible "difficulty" (...if THIS was a measure of
real "difficulty" in programming, I doubt our jobs would be as well paid
as they are...;-) -- it's going to be even less if we need anyway to
have a method to copy a CoordinatePair instance from another, such as

    def copyFrom(self, other):
        self.x, self.y = other.x, other.y

since then the above setters also become no-brainer oneliners a la:

    def setRho(self, rho):
        self.copyFrom(self.fromPolar(rho, self.getTheta()))

and you might choose to further simplify this method's body to

self.copyFrom(self.fromPolar(rho, self.theta))

since self.theta is going to be a property whose accessor half is the
above-used self.getTheta (mostly a matter of style choice here).
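
Pulling the untested snippets above into one runnable sketch (the getRho and
getTheta bodies are my guess, since the post doesn't show them):

```python
import math

class CoordinatePair(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

    def fromPolar(cls, rho, theta):
        assert rho >= 0
        return cls(rho * math.cos(theta), rho * math.sin(theta))
    fromPolar = classmethod(fromPolar)

    def getRho(self):      # assumed implementation, not shown in the post
        return math.hypot(self.x, self.y)

    def getTheta(self):    # assumed implementation, not shown in the post
        return math.atan2(self.y, self.x)

    def copyFrom(self, other):
        self.x, self.y = other.x, other.y

    def setRho(self, rho):
        self.copyFrom(self.fromPolar(rho, self.getTheta()))

    def setTheta(self, theta):
        self.copyFrom(self.fromPolar(self.getRho(), theta))

    rho = property(getRho, setRho)
    theta = property(getTheta, setTheta)

p = CoordinatePair(3.0, 4.0)   # rho is 5.0
p.rho = 10.0                   # stretch to length 10, same direction
```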


Really, I don't think this makes a good poster child for your "attribute
mutators make life more difficult" campaign...;-)


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: slice notation as values?

2005-12-10 Thread Bengt Richter
On 10 Dec 2005 12:07:12 -0800, "Devan L" <[EMAIL PROTECTED]> wrote:

>
>Antoon Pardon wrote:
>> On 2005-12-10, Duncan Booth <[EMAIL PROTECTED]> wrote:
>[snip]
>> >> I also think that other functions could benefit. For instance suppose
>> >> you want to iterate over every second element in a list. Sure you
>> >> can use an extended slice or use some kind of while. But why not
>> >> extend enumerate to include an optional slice parameter, so you could
>> >> do it as follows:
>> >>
>> >>   for el in enumerate(lst,::2)
>> >
>> > 'Why not'? Because it makes for a more complicated interface for something
>> > you can already do quite easily.
>>
>> Do you think so? This IMO should provide (0,lst[0]), (2,lst[2]),
>> (4,lst[4]) ...
>>
>> I haven't found a way to do this easily. Except for something like:
>>
>> start = 0
>> while start < len(lst):
>>   yield start, lst[start]
>>   start += 2
>>
>> But if you accept this, then there was no need for enumerate in the
>> first place. So eager to learn something new, how do you do this
>> quite easily?
>
>>>> lst = ['ham','eggs','bacon','spam','foo','bar','baz']
>>>> list(enumerate(lst))[::2]
>[(0, 'ham'), (2, 'bacon'), (4, 'foo'), (6, 'baz')]
>
>No changes to the language necessary.
>
Or, without creating the full list intermediately,

 >>> lst = ['ham','eggs','bacon','spam','foo','bar','baz']
 >>> import itertools
 >>> list(itertools.islice(enumerate(lst), 0, None, 2))
 [(0, 'ham'), (2, 'bacon'), (4, 'foo'), (6, 'baz')]

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: reddit.com rewritten in Python

2005-12-10 Thread BartlebyScrivener
More

http://reddit.com/blog/2005/12/on-lisp.html

and more

http://www.findinglisp.com/blog/2005/12/reddit-and-lisp-psychosis.html

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Another newbie question

2005-12-10 Thread Bernhard Herzog

[EMAIL PROTECTED] (Alex Martelli) writes:
> You could make a case for a "2D coordinate" class being "sufficiently
> primitive" to have immutable instances, of course (by analogy with
> numbers and strings) -- in that design, you would provide no mutators,
> and therefore neither would you provide setters (with any syntax) for x
> and y, obviously.  However, a framework for 2D geometry entirely based
> on immutable-instance classes would probably be unwieldy

Skencil's basic objects for 2d geometry, points and transformations, are
immutable.  It works fine.  Immutable objects have the great advantage of
making reasoning about the code much easier, as they can't change behind
your back.

More complex objects such as poly bezier curves are mutable in Skencil,
and I'm not sure anymore that that was a good design decision.  In most
cases where a bezier curve is modified, the best approach is to simply
build a new bezier curve anyway.  Sort of like list-comprehensions make
it easier to "modify" a list by creating a new list based on the old
one.
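
The list-comprehension analogy in miniature (toy data, not Skencil code):
"modify" a curve by building a new list of anchor points instead of mutating
the old one:

```python
# Toy "curve": a list of anchor points.  Shift every point right by
# 1.0 by constructing a new list; the original stays untouched.
points = [(0.0, 0.0), (1.0, 2.0), (3.0, 1.0)]
moved = [(x + 1.0, y) for (x, y) in points]
```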

   Bernhard

-- 
Intevation GmbH http://intevation.de/
Skencil   http://skencil.org/
Thuban  http://thuban.intevation.org/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: OO in Python? ^^

2005-12-10 Thread Heiko Wundram
Paul Boddie wrote:
> Heiko Wundram wrote:
>> Matthias Kaeppler wrote:
>> > 
> 
> Well, unless you are (or he is) in with the GNOME crowd, C probably
> isn't really the object-oriented language acting as inspiration here.

Pardon this glitch, I corrected it in a followup-post somewhere along the
line, it's been some time since I've last used C/C++ for more than just
Python module programming, and as such the term C has come to be synonymous
with "everything not Python" for me. ;-)

> [Zen of Python]

I find pointing to the ZoP pretty important, especially for people who start
to use the language. I know the hurdle that you have to overcome when you
grew up with a language which forces static typing on you (I learnt Pascal
as my first language, then used C, C++ and Java extensively for quite some
time before moving on to Perl and finally to Python), and when I started
using Python I had just the same feeling of "now why doesn't Python do this
like C++ does it, I lose all my security?" or something similar.

What got me thinking was reading the ZoP and seeing the design criteria for
the language. That's what actually made me realize why Python is the way it
is, and since that day I am at ease with the design decisions because I can
rationally understand and grip them and use the design productively. Look
at namespaces: I always understood what a namespace was (basically a
dictionary), but it took Tim Peters three lines

"""
Simple is better than complex.
Flat is better than nested.
Namespaces are one honking great idea -- let's do more of those!
"""

to actually get a hint at what the designer thought about when he
implemented namespaces as they are now, with the simplicity that they
actually have. It's always better to follow the designer's thoughts about
something he implemented than to just learn that something is the way it is
in a certain language.
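
The "namespaces are basically dictionaries" point can be checked directly at
the interpreter (a tiny sketch):

```python
import math

class C(object):
    pass

c = C()
c.x = 1

instance_ns = c.__dict__   # an instance's namespace really is a dict
module_ns = vars(math)     # and so is a module's
```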

I still have that awkward feeling for Perl. TIMTOWTDI just doesn't cut it
when it's yelled at me, I still can't see a single coherent vision which
Larry Wall followed when he designed the language. That's why I decided to
drop it. ;-)

Maybe I'm assuming things by thinking that others also follow my line of
thought, but I've actually had very positive responses so far when telling
people that a certain feature is a certain way and then pointing them to
the ZoP, they all pretty much told me after a certain time of thought that
"the decision made sense now."

--- Heiko.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Another newbie question

2005-12-10 Thread Paul Rubin
[EMAIL PROTECTED] (Alex Martelli) writes:
> You could make a case for a "2D coordinate" class being "sufficiently
> primitive" to have immutable instances, of course (by analogy with
> numbers and strings) -- in that design, you would provide no mutators,
> and therefore neither would you provide setters (with any syntax) for x
> and y, obviously.  However, a framework for 2D geometry entirely based
> on immutable-instance classes would probably be unwieldy

I could imagine using Python's built-in complex numbers to represent
2D points.  They're immutable, last I checked.  I don't see a big
conflict.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: OO in Python? ^^

2005-12-10 Thread Leif K-Brooks
Heiko Wundram wrote:
> Fredrik Lundh wrote:
>>Matthias Kaeppler wrote:
>>>polymorphism seems to be missing in Python
>>
>>QOTW!
> 
> Let's have some UQOTW: the un-quote of the week! ;-)

+1
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ANN: Dao Language v.0.9.6-beta is release!

2005-12-10 Thread Rick Wotnaz
Zeljko Vrba <[EMAIL PROTECTED]> wrote in
news:[EMAIL PROTECTED]: 

> On 2005-12-10, Tom Anderson <[EMAIL PROTECTED]> wrote:
>>
>> ED IS THE STANDARD TEXT EDITOR.
>>
> And:
>  INDENTATION
>   SUCKS
>BIG
>  TIME.
> 
> Using indentation without block termination markers is opposite
> of the way we write spoken language, terminating each sentence
> with '.'. Ever wondered why we use such things in written language,
> when people are much better at guessing what the writer wanted
> to say than computers?
> 

I believe I may have seen cases in written "spoken 
language" where paragraphs were indented, or otherwise 
separated with whitespace. It's even possible that I've 
seen some examples of written languages that use no periods 
at all! And what's more, I've seen more than one *computer* 
language that uses no terminating periods! Why, it boggles 
the mind. 

Despite the arguments advanced by those whose previous 
computer languages used braces and semicolons, there 
actually are more ways to separate complete statements than 
with punctuation. 

Make a grocery list. Do you terminate each item with 
punctuation? Write a headline for a newspaper. Is 
punctuation always included? Read a mediaeval manuscript. 
Do you find punctuation? Whitespace? How about Egyptian 
hieroglyphs, Chinese ideograms, Ogham runes? 

Because you're accustomed to one set of conventions, you 
may find Python's set strange at first. Please try it, and 
don't fight it. See if your objections don't fade away. If 
you're like most Python newbies, you'll stop thinking about 
brackets before long, and if you're like a lot of us, 
you'll wonder what those funny squiggles mean when you are 
forced to revert to one of those more primitive languages.

-- 
rzed
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: OO in Python? ^^

2005-12-10 Thread Paul Boddie
Heiko Wundram wrote:
> Matthias Kaeppler wrote:
> > 

Well, unless you are (or he is) in with the GNOME crowd, C probably
isn't really the object-oriented language acting as inspiration here.

[Zen of Python]

Of course the ZoP (Zen of Python) is deep guidance for those
languishing in some design dilemma or other, but not exactly helpful,
concrete advice in this context. (I'm also getting pretty jaded with
the recent trend of the ZoP being quoted almost once per thread on
comp.lang.python, mostly as a substitute for any real justification of
Python's design or any discussion of the motivations behind its
design.) That said, the questioner does appear to be thinking of
object-oriented programming from a statically-typed perspective, and
I'd agree that, ZoP or otherwise, a change in perspective and a
willingness to accept other, equally legitimate approaches to
object-orientation will lead to a deeper understanding and appreciation
of the Python language.

Anyway, it appears that the questioner is confusing declarations with
instantiation, amongst other things:

> And how do I formulate polymorphism in Python? Example:
>
> class D1(Base):
>     def foo(self):
>         print "D1"
>
> class D2(Base):
>     def foo(self):
>         print "D2"
>
> obj = Base() # I want a base class reference which is polymorphic

Well, here one actually gets a reference to a Base object. I know that
in C++ or Java, you'd say, "I don't care exactly what kind of Base-like
object I have right now, but I want to be able to hold a reference to
one." But in Python, this statement is redundant: names/variables
potentially refer to objects of any type; one doesn't need to declare
what type of objects a name will refer to.

> if ():
>     obj = D1()
> else:
>     obj = D2()

Without the above "declaration", this will just work. If one needs an
instance of D1, one will assign a new D1 object to obj; otherwise, one
will assign a new D2 object to obj. Now, when one calls the foo method
on obj, Python will just find whichever implementation of that method
exists on obj and call it. In fact, when one does call the method, some
time later in the program, the object held by obj doesn't even need to
be instantiated from a related class: as long as the foo method exists,
Python will attempt to invoke it, and this will even succeed if the
arguments are compatible.

All this is quite different to various other object-oriented languages
because many of them use other mechanisms to find out whether such a
method exists for any object referred to by the obj variable. With such
languages, defining a base class with the foo method and defining
subclasses with that method all helps the compiler to determine whether
it is possible to find such a method on an object referred to by obj.
Python bypasses most of that by doing a run-time check and actually
looking at what methods are available just at the point in time a
method is being called.
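
A small sketch of that run-time lookup (class names echo the example earlier
in the thread): even an object whose class shares no base with the others
works, as long as it has a foo method:

```python
class Base(object):
    pass

class D1(Base):
    def foo(self):
        return "D1"

class D2(Base):
    def foo(self):
        return "D2"

class Unrelated(object):   # no common base class at all
    def foo(self):
        return "unrelated"

# Method lookup happens at call time; no declarations are needed,
# and the Base relationship is irrelevant to the dispatch.
results = [obj.foo() for obj in (D1(), D2(), Unrelated())]
```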

> I could as well leave the whole inheritance stuff out and the program would 
> still work
> (?).

Correct. Rewinding...

> Does inheritance in Python boil down to a mere code sharing?

In Python, inheritance is arguably most useful for "code sharing", yes.
That said, things like mix-in classes show that this isn't as
uninteresting as one might think.

Paul

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: OO in Python? ^^

2005-12-10 Thread Martin Christensen
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

> "Matthias" == Matthias Kaeppler <[EMAIL PROTECTED]> writes:
Matthias> sorry for my ignorance, but after reading the Python
Matthias> tutorial on python.org, I'm sort of, well surprised about
Matthias> the lack of OOP capabilities in python. Honestly, I don't
Matthias> even see the point at all of how OO actually works in
Matthias> Python.

It's very common for Python newbies, especially those with backgrounds
in languages such as C++, Java etc. to not really 'get' the Python way
of handling types until they've had a fair amount of experience with
Python. If you want to program Pythonically, you must first unlearn a
number of things.

For instance, in e.g. the Java tradition, if a function needs a
triangle object, it'll take a triangle object as an argument. If it
can handle any type of shape, it'll either take a shape base class
instance as an argument or there'll be some kind of shape interface that
it can take. Argument types are strictly controlled. Not so with
Python. A Python solution will typically take any type of object as an
argument so long as it behaves as expected, and if it doesn't, we deal
with the resulting exception (or don't, depending on what we're trying
to accomplish). For instance, if the function from before that wants a
shape really only needs to call an area method, anything with an area
method can be used successfully as an argument.
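
A sketch of that shape example (all names hypothetical), including the "deal
with the resulting exception" part:

```python
class Triangle(object):
    def __init__(self, base, height):
        self.base, self.height = base, height
    def area(self):
        return self.base * self.height / 2.0

class Square(object):      # deliberately shares no base with Triangle
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

def total_area(shapes):
    total = 0.0
    for shape in shapes:
        try:
            total += shape.area()   # anything with an area method will do
        except AttributeError:
            pass                    # "deal with the resulting exception"
    return total

result = total_area([Triangle(3.0, 4.0), Square(2.0), "not a shape"])
```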

Some have dubbed this kind of type check 'duck typing': if it walks
like a duck and quacks like a duck, chances are it'll be a duck. To
those who are used to (more or less) strong, static type checks, this
will seem a reckless approach, but it really works rather well, and
subtle type errors are, in my experience, as rare in Python as in any
other language. In my opinion, the tricks the C*/Java people
occasionally do to get around the type system, such as casting to the
fundamental object type, are worse because they're seldom expected and
resulting errors thus typically more subtle.

In my very first post on this news group a number of years ago, I
asked for an equivalent of Java's interfaces. The only reply I got was
that I didn't need them. While the reason was very obvious, even with
what I knew about Python, it still took a while to sink in. From what
I can tell, you're in somewhat the same situation, and the two of us
are far from unique. As I said in the beginning, Python newbies with a
background in statically typed languages typically have a lot to
unlearn, but in my opinion, it's well worth it.


Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: OO in Python? ^^

2005-12-10 Thread Tony Nelson
In article <[EMAIL PROTECTED]>,
 Matthias Kaeppler <[EMAIL PROTECTED]> wrote:
 ...
> obj = Base() # I want a base class reference which is polymorphic

obj now refers to an instance of Base.

> if ():
> obj =  D1()

obj now refers to an instance of D1().  The Base instance is 
unreferenced.

> else:
> obj = D2()

obj now refers to an instance of D2().  The Base instance is 
unreferenced.

Note that there is no code path that results in obj still referring to 
an instance of Base.  Unless making a Base had side effects, there is no 
use in the first line.


> I could as well leave the whole inheritance stuff out and the program 
> would still work (?).

That program might.


> Please give me hope that Python is still worth learning :-/

Python has inheritance and polymorphism, implemented via dictionaries.  
Python's various types of namespace are implemented with dictionaries.

Type this in to the Python interpreter:

class Base:
    def foo(self):
        print 'in Base.foo'

class D1(Base):
    def foo(self):
        print 'in D1.foo'
        Base.foo(self)

class D2(Base):
    def foo(self):
        print 'in D2.foo'
        Base.foo(self)

def makeObj():
    return needD1 and D1() or D2()

needD1 = True
makeObj().foo()

needD1 = False
makeObj().foo()

TonyN.:'[EMAIL PROTECTED]
  '  
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: OO in Python? ^^

2005-12-10 Thread Heiko Wundram
Brian Beck wrote:
>> 
>> class D1(Base):
>>def foo(self):
>>   print "D1"
>> 
>> class D2(Base):
>>def foo(self):
>>   print "D2"
>> obj = Base() # I want a base class reference which is polymorphic
>> if ():
>>obj =  D1()
>> else:
>>obj = D2()
> 
> I have no idea what you're trying to do here and how it relates to
> polymorphism.
> 

He's translating C++ code directly to Python. obj = Base() creates a
variable of type Base, to which you can assign objects of different
types (D1(), D2()) which implement the Base interface (are derived from
Base). Err... At least I think that's what this code is supposed to mean...

In C++ you'd do:

Base *baseob;

if(  ) {
baseob = (Base*)new D1();
} else {
baseob = (Base*)new D2();
}

baseob->foo();

(should, if foo is declared virtual in Base, produce "d1" for D1, and "d2"
for D2)

At least IIRC, it's been quite some time since I programmed C++... ;-)
*shudder*

--- Heiko.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Dectecting dir changes

2005-12-10 Thread Fredrik Lundh
"chuck" wrote:

> While I do appreciate the suggestions, I have to say that if the
> twisted folks spent half the time writing documentation as they do code,
> twisted would probably get used by a lot more Python folks.  Didn't get
> much encouragement/assistance from the twisted irc channel either.
> Perhaps the fella I chatted with hadn't had his coffee yet ;)

or maybe it's a medication issue.  your experience isn't exactly unique:

http://twistedmatrix.com/pipermail/twisted-python/2005-May/010380.html

"In this e-mail thread, someone wrote that people who say Twisted
has no docs ought to be stabbed in the face.  Obviously, this was
intended as humor, but I believe that this social custom, along with
as other related social customs in Twistedland, deter people from
some other cultures from participating."





-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Dectecting dir changes

2005-12-10 Thread chuck
I do need to stick to FTP though as indicated I could run it on a
different port.  Limit comes more from the client side capabilities.

Did some reading about twisted and I now understand that things in
general are single threaded.

I started working my way through the twisted finger tutorial.  While it
appears that it may be useful for someone writing a new protocol, it
doesn't seem too useful for someone just trying to hook into an existing
one.

While I do appreciate the suggestions, I have to say that if the
twisted folks spent half the time writing documentation as they do code,
twisted would probably get used by a lot more Python folks.  Didn't get
much encouragement/assistance from the twisted irc channel either.
Perhaps the fella I chatted with hadn't had his coffee yet ;)

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Another newbie question

2005-12-10 Thread Mike Meyer
[EMAIL PROTECTED] (Alex Martelli) writes:
> Mike Meyer <[EMAIL PROTECTED]> wrote:
>> > "In addition to the full set of methods which operate on the coordinate as
>> > a whole, you can operate on the individual ordinates via instance.x and
>> > instance.y which are floats."
>> That's an API which makes changing the object more difficult. It may
>> be the best API for the case at hand, but you should be aware of the
>> downsides.
> Since x and y are important abstractions of a "2-D coordinate", I
> disagree that exposing them makes changing the object more difficult, as
> long of course as I can, if and when needed, change them into properties
> (or otherwise obtain similar effects -- before we had properties in
> Python, __setattr__ was still quite usable in such cases, for example,
> although properties are clearly simpler and more direct).

Exposing them doesn't make making changes more difficult. Allowing
them to be used to manipulate the object makes some changes more
difficult. Properties makes the set of such changes smaller, but it
doesn't make them vanish.

Take our much-abused coordinate example, and assume you've exposed the
x and y coordinates as attributes.

Now we have a changing requirement - we want to make the polar
coordinates available. To keep the API consistent, they should be
another pair of attributes, r and theta. Thanks to Python's nice
properties, we can implement these with a pair of getters, and compute
them on the fly.

If x and y can't be manipulated individually, you're done. If they
can, you have more work to do. If nothing else, you have to decide
that you're going to provide an incomplete interface, in that users
will be able to manipulate the object with some attributes but not
others for no obvious good reason. To avoid that, you'll have to add
code to run the coordinate transformations in reverse, which wouldn't
otherwise be needed. Properties make this possible, which is a great
thing.
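The scenario can be sketched as follows (a hypothetical Point class; the names r and theta follow the discussion above). x and y stay plain writable attributes, while r and theta are getter-only properties, so the polar half of the interface ends up read-only unless the reverse transformations get written:

```python
import math

class Point(object):
    def __init__(self, x, y):
        self.x = x  # plain attributes: clients may read and write them
        self.y = y

    @property
    def r(self):  # computed on the fly from x and y; no setter defined
        return math.hypot(self.x, self.y)

    @property
    def theta(self):
        return math.atan2(self.y, self.x)

p = Point(3.0, 4.0)
p.x = 6.0  # mutating x is allowed, and r tracks the change

caught = False
try:
    p.r = 1.0  # assigning to r fails: no reverse transformation exists
except AttributeError:
    caught = True
```

This is exactly the "incomplete interface" being described: some attributes can be manipulated and others cannot, until extra code maps r and theta back onto x and y.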

http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: OO in Python? ^^

2005-12-10 Thread Heiko Wundram
Fredrik Lundh wrote:
> Matthias Kaeppler wrote:
>> polymorphism seems to be missing in Python
> 
> QOTW!

Let's have some UQOTW: the un-quote of the week! ;-)

--- Heiko.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: OO in Python? ^^

2005-12-10 Thread Brian Beck
Matthias Kaeppler wrote:
> class Base:
>def foo(self): # I'd like to say that children must implement foo
>   pass

def foo(self):
    raise NotImplementedError("Subclasses must implement foo")

Now calling foo on a child instance will fail if it hasn't implemented foo.
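A quick check of the suggested behavior (assuming nothing beyond the snippet above):

```python
class Base(object):
    def foo(self):
        raise NotImplementedError("Subclasses must implement foo")

class Child(Base):
    pass  # "forgot" to implement foo

failed = False
try:
    Child().foo()  # fails at call time, not at class-definition time
except NotImplementedError:
    failed = True
```

Note the check only fires when foo is actually called; there's no compile-time enforcement in Python.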

> And how do I formulate polymorphism in Python? Example:
> 
> class D1(Base):
>def foo(self):
>   print "D1"
> 
> class D2(Base):
>def foo(self):
>   print "D2"
> obj = Base() # I want a base class reference which is polymorphic
> if ():
>obj =  D1()
> else:
>obj = D2()

I have no idea what you're trying to do here and how it relates to 
polymorphism.

-- 
Brian Beck
Adventurer of the First Order
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: OO in Python? ^^

2005-12-10 Thread Fredrik Lundh
Matthias Kaeppler wrote:

> polymorphism seems to be missing in Python

QOTW!





-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ANN: pygene - genetic algorithms package

2005-12-10 Thread eXt
Erik Max Francis wrote:
> Unfortunately I can't give a precise date.  If I have the time, a 
> polished working system with at least the basics should only take a week 
> or so to finish up.  Unfortunately, I have a big deadline coming up in 
> my day job, so I'm probably not going to get much time to work on it for 
> the next week or two.  Hopefully I'll have a basic system ready by New 
> Year's, but I can't really make any promises.
Sure, I understand you :). Thanks for the answer.

>  The best way to encourage 
> me to get it done is probably to keep me talking about it :-).
As I did :)


-- 
eXt
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: OO in Python? ^^

2005-12-10 Thread Heiko Wundram
Matthias Kaeppler wrote:
> 

Let this enlighten your way, young padawan:

[EMAIL PROTECTED] ~/gtk-gnutella-downloads $ python
Python 2.4.2 (#1, Oct 31 2005, 17:45:13)
[GCC 3.4.4 (Gentoo 3.4.4-r1, ssp-3.4.4-1.0, pie-8.7.8)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import this
The Zen of Python, by Tim Peters

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
>>>

And, _do_ (don't only read) the tutorial, and you'll understand why the
short example code you posted isn't pythonic, to say the least:
http://www.python.org/doc/2.4.2/tut/tut.html
and why inheritance in Python is necessary, but on a whole different level
from what you're thinking.

Oh, and on a last note, if you're german, you might as well join
de.comp.lang.python.

--- Heiko.
-- 
http://mail.python.org/mailman/listinfo/python-list


OO in Python? ^^

2005-12-10 Thread Matthias Kaeppler
Hi,

sorry for my ignorance, but after reading the Python tutorial on 
python.org, I'm sort of, well surprised about the lack of OOP 
capabilities in python. Honestly, I don't even see the point at all of 
how OO actually works in Python.

For one, is there any good reason why I should ever inherit from a 
class? ^^ There is no functionality to check if a subclass correctly 
implements an inherited interface and polymorphism seems to be missing 
in Python as well. I kind of can't imagine in which circumstances 
inheritance in Python helps. For example:

class Base:
    def foo(self):  # I'd like to say that children must implement foo
        pass

class Child(Base):
    pass  # works

Does inheritance in Python boil down to a mere code sharing?

And how do I formulate polymorphism in Python? Example:

class D1(Base):
    def foo(self):
        print "D1"

class D2(Base):
    def foo(self):
        print "D2"

obj = Base()  # I want a base class reference which is polymorphic
if ():
    obj = D1()
else:
    obj = D2()

I could as well leave the whole inheritance stuff out and the program 
would still work (?).

Please give me hope that Python is still worth learning :-/

Regards,
Matthias
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ANN: Dao Language v.0.9.6-beta is release!

2005-12-10 Thread Fredrik Lundh
Zeljko Vrba wrote:

> Using indentation without block termination markers is the opposite of the
> way we write spoken language, terminating each sentence with '.'. Ever
> wondered why we use such things in written language, when people are much
> better at guessing what the writer wanted to say than computers?

Interesting.  Python's use of indentation comes from ABC, which based the
design partially on extensive testing on human beings.  Humans often use
indentation for grouping, and colons to introduce a new level or group are
at least as common.  In fact, most humans can understand the structure
of a Python program even if they've never programmed before.

I guess writers don't use indentation to group text on your planet.





-- 
http://mail.python.org/mailman/listinfo/python-list


Re: double underscore attributes?

2005-12-10 Thread Xavier Morel
[EMAIL PROTECTED] wrote:
> Could someone explain the use of __add__ (and similar double underscore
> attributes) and what their use is.
> 
> Bob
> 
Methods with double leading and trailing underscores are basically 
"magic methods" with specific meanings for the Python interpreter.

You're not supposed to use them directly (in most cases) as they are 
wrapped by syntactic sugar or part of protocol's implementation.

__add__, for example, is the definition of the "+" operator: using
"(5).__add__(8)" is exactly the same as using "5 + 8".

To get a list of most of these magic methods, check the Python 
documentation on the "operator" module.

These magic methods allow you to emulate numbers, sequences, iterators, 
or even callables (allowing you to use an object as a function). Be 
really careful with them though: one of the things that plagued (and 
still plagues) C++ is the abuse of operator overloading and the 
modification of operators' meanings.
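For instance, a toy class (hypothetical, purely for illustration) can hook into + by defining __add__:

```python
class Money(object):
    def __init__(self, cents):
        self.cents = cents

    def __add__(self, other):
        # Invoked by the + operator: Money(150) + Money(75)
        # is sugar for Money(150).__add__(Money(75)).
        return Money(self.cents + other.cents)

total = Money(150) + Money(75)
```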
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ANN: pygene - genetic algorithms package

2005-12-10 Thread Erik Max Francis
eXt wrote:

> I'm really happy to see that someone is working on Python based GP 
> implementation :) I'm currently trying to get into GP world (I'm the GP 
> newbie you talked about :P) and, as I'm a Python programmer, I look 
> towards Python based solutions. Unfortunately there are no active Python 
> GP projects (maybe except Pyro, but I'm not sure how GP is implemented 
> there) so I'm forced to play with Java based systems, which isn't what I 
> like.
> 
> Are you able to give any approximated date of PSI release?

Unfortunately I can't give a precise date.  If I have the time, a 
polished working system with at least the basics should only take a week 
or so to finish up.  Unfortunately, I have a big deadline coming up in 
my day job, so I'm probably not going to get much time to work on it for 
the next week or two.  Hopefully I'll have a basic system ready by New 
Year's, but I can't really make any promises.  The best way to encourage 
me to get it done is probably to keep me talking about it :-).

-- 
Erik Max Francis && [EMAIL PROTECTED] && http://www.alcyone.com/max/
San Jose, CA, USA && 37 20 N 121 53 W && AIM erikmaxfrancis
   You are my martyr / I'm a vestige of a revolution
   -- Lamya
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: calculate system disk space

2005-12-10 Thread Neil Hodgson
Heiko Wundram:

> Under Windows you don't have sparse files though, so there
> are no fields ...

Does too!
http://msdn.microsoft.com/library/en-us/fileio/fs/fsctl_set_sparse.asp
They're fairly rare though.

Neil
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ANN: Dao Language v.0.9.6-beta is release!

2005-12-10 Thread Zeljko Vrba
On 2005-12-10, Tom Anderson <[EMAIL PROTECTED]> wrote:
>
> ED IS THE STANDARD TEXT EDITOR.
>
And:
INDENTATION
SUCKS
BIG
TIME.

Using indentation without block termination markers is the opposite of the
way we write spoken language, terminating each sentence with '.'. Ever
wondered why we use such things in written language, when people are much
better at guessing what the writer wanted to say than computers?

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: binascii.crc32 results not matching

2005-12-10 Thread Heiko Wundram
Larry Bates wrote:
> 
>
> The algorithm looks very much like the source code for
> binascii.crc32 (but I'm not a C programmer).

Well... As you have access to the code, you might actually just create a
thin Python wrapper around this so that you can get comparable results. In
case you're unable to do so, send me the C file (I'm not so keen on
copy-pasting code which was reformatted due to mail), and I'll send you an
extension module back.

--- Heiko.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Another newbie question

2005-12-10 Thread Mike Meyer
[EMAIL PROTECTED] (Alex Martelli) writes:
> Mike Meyer <[EMAIL PROTECTED]> wrote:
>...
>> Well, the hard-core solution is to note that your class doesn't really
>> deal with the type Bar, but deals with a subtype of Bar for which x >
>> 23 in all cases. Since types are represented by classes, you should
>> subclass Bar so you have a class that represents this subtype. The
>> class is trivial (with Eiffel conventions):
>> class RESTRICTED_BAR
>>inherits BAR
>>invariant x > 23
>> END
> Yes, but then once again you have to "publicize" something (an aspect of
> a class invariant) which should be dealt with internally

Contracts are intended to be public; they are part of the the class's
short form, which is the part that's intended for public consumption.
If your vision of invariants is that they are for internal use only,
and clients don't need to know them, then you probably ought to be
considering another language.

> also, this approach does not at all generalize to "bar1.x>23 OR
> bar2.x>23" and any other nontrivial constraint involving expressions
> on more than attributes of a single instance's attribute and
> compile-time constants.  So, besides "hard-coreness", this is just
> too limited to serve.

I believe it's the best solution for the case at hand. It causes the
violation of the invariant to be caught as early as possible. As I
mentioned elsewhere, it's not suitable for all cases, so you have to
use other, possibly less effective, tools.
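A rough Python rendering of the RESTRICTED_BAR idea (hypothetical classes; __setattr__ stands in for Eiffel's invariant machinery):

```python
class Bar(object):
    def __init__(self, x):
        self.x = x

class RestrictedBar(Bar):
    # Invariant: x > 23, checked on every rebinding of x.
    def __setattr__(self, name, value):
        if name == 'x' and not value > 23:
            raise ValueError("invariant violated: x must be > 23")
        super(RestrictedBar, self).__setattr__(name, value)

b = RestrictedBar(42)  # satisfies the invariant

violated = False
try:
    b.x = 5  # the violation is caught as early as possible
except ValueError:
    violated = True
```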

>> > So, one invariant that had better hold to ensure a certain instance foo
>> > of Foo is not about to crash, may be, depending on how Foo's detailed
>> > structual geometry is, something like:
>> >
>> >   foo.beam1.force_transferred_A <= foo.pier1.max_load_top AND
>> >   foo.beam1.force_transferred_B <= foo.girder1.max_load_A
>> >
>> > The natural place to state this invariant is in class Foo, by expressing
>> > 'foo' as 'self' in Python (or omitting it in languages which imply such
>> > a lookup, of course).
>> 
>> I don't think that's the natural place. It's certainly one place to
>> consider, and may be the best one. However, it might work equally well
>> to use preconditions on the methods that add the beam and pier to Foo
>> to verify that the beam and pier in question are valid. If the
>> attributes of the beam and pier can't change, this would be the right
>> way to do it.
>
> What ever gave you the impression that the loads on beams and piers (and
> therefore the forces they transfer) "can't change"?

Your incomplete specification of the problem. You didn't say whether
or not they could change, so I pointed out what might - key word, that
- be a better solution for a more complete specification.

>> > If I'm not allowed (because you think "it's silly"!) to express a class
>> > invariant in terms of attributes of the attributes of an instance of
>> > that class, I basically have to write tons of boilerplate, violating
>> > encapsulation, to express what are really attributes of attributes of
>> > foo "as if" they were attributes of foo directly, e.g.
>> [...]
>> > (etc).  Changing a lot of dots into underscores -- what a way to waste
>> > programmer time!  And all to NO advantage, please note, since:
>> If you knew it was going to be to no advantage, why did you write the
>> boilerplate? That's also pretty silly. Care to provide reasons for
>> your wanting to do this?
>
> If I had to program under a styleguide which enforces the style
> preferences you have expressed, then the stupid boilerplate would allow
> my program to be accepted by the stylechecker, thus letting my code be
> committed into the source control system; presumably that would be
> necessary for me to keep my job (thus drawing a salary) or getting paid
> for my consultancy billed hours.  Just like, say, if your styleguide
> forbade the use of vowels in identifiers, I might have a tool to convert
> such vowels into consonants before I committed my code.  I'm not saying
> there cannot be monetary advantage for me to obey the deleterious and
> inappropriate rules of any given arbitrary styleguide: it may be a
> necessary condition for substantial monetary gains or other preferments.
> I'm saying there is no advantage whatsoever to the organization as a
> whole in imposing arbitrary constraints such as, "no vowels in
> identifiers", or, "no access to attributes of attributes in invariants".

True. But if you think this is an arbitrary constraint, why did you
impose it in the first place?

>> >> of. Invariants are intended to be used to check the state of the
>> >> class, not the state of arbitrary other objects. Doing the latter
>> >> requires that you have to check the invariants of every object pretty
>> >> much every time anything changes.
>> > ...in the end the invariant DOES have to be checked when anything
>> > relevant changes, anyway, with or without the silly extra indirection.
>> No, it doesn't have to be checked. Even invariants that don't suffer
>> from this don't have

Re: Proposal: Inline Import

2005-12-10 Thread Erik Max Francis
Shane Hathaway wrote:

> Let me fully elaborate the heresy I'm suggesting: I am talking about 
> inline imports on every other line of code.  The obvious implementation 
> would drop performance by a double digit percentage.

Module importing is already idempotent.  If you try to import an 
already-imported module, inline or not, the second (or subsequent) 
imports are no-operations.
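This is easy to observe via sys.modules, the cache that makes re-imports no-ops:

```python
import sys

import json                    # first import: the module body executes
first = sys.modules['json']

import json                    # second import: a dict lookup, no re-execution
assert sys.modules['json'] is first  # same module object both times
```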

-- 
Erik Max Francis && [EMAIL PROTECTED] && http://www.alcyone.com/max/
San Jose, CA, USA && 37 20 N 121 53 W && AIM erikmaxfrancis
   Golf is a good walk spoiled.
   -- Mark Twain
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: double underscore attributes?

2005-12-10 Thread Dan Bishop
[EMAIL PROTECTED] wrote:
...
> Every time I use dir(some module) I get a lot of attributes with double
> underscore, for example __add__. Ok, I thought __add__ must be a method
> which I can apply like this
...
> I tried
> >>> help(5.__add__)
>
> but got
> SyntaxError: invalid syntax

That's because the parser thinks "5." is a float, rather than the
integer 5 with a dot after it.  If you want to refer to an attribute of
an integer literal, you can use "(5).__add__" or "5 .__add__" (with a
space).

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: double underscore attributes?

2005-12-10 Thread Steven Bethard
[EMAIL PROTECTED] wrote:
> Every time I use dir(some module) I get a lot of attributes with double
> underscore, for example __add__. Ok, I thought __add__ must be a method
> which I can apply like this
> 
5.__add(8)
> 
> However Python responded
> SyntaxError: invalid syntax
> 
> I tried
> 
help(5.__add__)
> 
> but got
> SyntaxError: invalid syntax

Note that these SyntaxErrors are due to how Python parses floats:

 >>> 5.
5.0
 >>> 5.__add__(8)
  File "<stdin>", line 1
    5.__add__(8)
            ^
SyntaxError: invalid syntax
 >>> (5).__add__(8)
13

HTH,

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: slice notation as values?

2005-12-10 Thread Steven Bethard
Antoon Pardon wrote:
> On 2005-12-10, Steven Bethard <[EMAIL PROTECTED]> wrote:
> 
>>Antoon Pardon wrote:
>>
>>>So lets agree that tree['a':'b'] would produce a subtree. Then
>>>I still would prefer the possibility to do something like:
>>>
>>>  for key in tree.iterkeys('a':'b')
>>>
>>>Instead of having to write
>>>
>>>  for key in tree['a':'b'].iterkeys()
>>>
>>>Sure I can now do it like this:
>>>
>>>  for key in tree.iterkeys('a','b')
>>>
>>>But the way default arguments work, prevents you from having
>>>this work in an analague way as a slice.
>>
>>How so?  Can't you just pass the *args to the slice contstructor?  E.g.::
>>
>> def iterkeys(self, *args):
>> keyslice = slice(*args)
>> ...
>>
>>Then you can use the slice object just as you would have otherwise.
> 
> This doesn't work for a number of reasons,
> 
> 1)
> >>> slice()
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> TypeError: slice expected at least 1 arguments, got 0

I wasn't sure whether or not the slice argument was optional. 
Apparently it's intended to be, so you have to make one special case:

def iterkeys(self, *args):
 keyslice = args and slice(*args) or slice(None, None, None)

> 2) It doesn't give a clear way to indicate the following
>    kind of slice: tree.iterkeys('a':). Because of the
>    following:
> 
> >>> slice('a')
> slice(None, 'a', None)
> 
>    which would be equivalent to tree.iterkeys(:'a')

Well, it certainly gives a way to indicate it:

 tree.iterkeys(None, 'a')

Whether or not it's a "clear" way is too subjective of a topic for me to 
get into.  That's best left to Guido[1].  My point is that it *does* 
work, and covers (or can be slightly altered to cover) all the 
functionality you want.  That doesn't mean you have to like the API for 
it, of course.

STeVe

[1] By which I mean that you should submit a PEP on the idea, and let 
Guido decide which way is prettier.  Just be sure to give all the 
equivalent examples - i.e. calling the slice constructor with the 
appropriate arguments.
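To make the comparison concrete, here is a minimal sketch of the mapping under discussion (a standalone helper rather than a real tree class; the name is made up):

```python
def keyslice_from_args(*args):
    # Map iterkeys(...)-style positional arguments onto a slice object,
    # special-casing the empty call as discussed above.
    if args:
        return slice(*args)
    return slice(None, None, None)

assert keyslice_from_args('a', 'b') == slice('a', 'b', None)
assert keyslice_from_args(None, 'a') == slice(None, 'a', None)  # i.e. [:'a']
assert keyslice_from_args() == slice(None, None, None)
```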
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: problems binding a dictionary

2005-12-10 Thread Dody Suria Wijaya
[EMAIL PROTECTED] wrote:
> this has to be a very silly thing.
> 
> I have a function foo taking a dictionary as parameters. i.e.: def
> foo(**kwargs): pass
> when I call foo(param1='blah',param2='bleh',param3='blih') everything
> is fine.
> but when I do:
> >>> def foo(**kwargs):
> ...     pass
> ...
> >>> d = dict(param1='blah', param2='bleh', param3='blih')
> >>> foo(d)
> 
> I get:
> 
> Traceback (most recent call last):
>   File "", line 1, in ?
> TypeError: foo() takes exactly 0 arguments (1 given)
> 
> Why? how do I pass the dictionary *d* to foo()?
> Thanks,
> 
> - Josh.
> 

simply because your parameter definition expects keyword parameter 
passing, and you are passing a parameter by position. You should call 
the function like this:

foo(mykey=d)

or, since d is a dict, you could use d's key/value pairs as keyword 
parameters:

foo(**d)

which is equivalent to doing:

foo(param1='blah', param2='bleh', param3='blih')


--
  dsw
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Another newbie question

2005-12-10 Thread Alex Martelli
Mike Meyer <[EMAIL PROTECTED]> wrote:

> > "In addition to the full set of methods which operate on the coordinate as
> > a whole, you can operate on the individual ordinates via instance.x and
> > instance.y which are floats."
> 
> That's an API which makes changing the object more difficult. It may
> be the best API for the case at hand, but you should be aware of the
> downsides.

Since x and y are important abstractions of a "2-D coordinate", I
disagree that exposing them makes changing the object more difficult, as
long of course as I can, if and when needed, change them into properties
(or otherwise obtain similar effects -- before we had properties in
Python, __setattr__ was still quite usable in such cases, for example,
although properties are clearly simpler and more direct).

You could make a case for a "2D coordinate" class being "sufficiently
primitive" to have immutable instances, of course (by analogy with
numbers and strings) -- in that design, you would provide no mutators,
and therefore neither would you provide setters (with any syntax) for x
and y, obviously.  However, a framework for 2D geometry entirely based
on immutable-instance classes would probably be unwieldy (except in a
fully functional language); as long as we have a language whose normal
style allows data mutation, we'll probably fit better into it by
allowing mutable geometrical primitives at some level -- and as soon as
the mutable primitives are reached, "settable attributes" and their
syntax and semantics come to the fore again...


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: double underscore attributes?

2005-12-10 Thread LocaWapp
Hi Bob,
see this my example:

>>> class A:
...     def __add__(a, b):
...         print a, b
...
>>> a = A()
>>> b = A()
>>> a + b
<__main__.A instance at 0x80c5a74> <__main__.A instance at 0x80c6234>
>>>

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: slice notation as values?

2005-12-10 Thread Brian Beck
Antoon Pardon wrote:
> Will it ever be possible to write things like:
> 
>   a = 4:9

I made a silly recipe to do something like this a while ago, not that 
I'd recommend using it. But I also think it wouldn't be too far-fetched
to allow slice creation using a syntax like the above...

http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/415500
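Without new syntax, the closest spelling today is the explicit slice constructor:

```python
# The proposed "a = 4:9" spelled with today's tools:
a = slice(4, 9)

assert list(range(20))[a] == [4, 5, 6, 7, 8]
assert (a.start, a.stop, a.step) == (4, 9, None)
```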

-- 
Brian Beck
Adventurer of the First Order
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: double underscore attributes?

2005-12-10 Thread Wojciech Mula
[EMAIL PROTECTED] wrote:
> Now I went to Python Library Reference and searched for "__add__" but
> got zero hits.

http://python.org/doc/2.4.2/ref/specialnames.html
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to find the type ...

2005-12-10 Thread Lad
Thank you ALL for help and explanation
Regards,
L.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Another newbie question

2005-12-10 Thread Alex Martelli
Mike Meyer <[EMAIL PROTECTED]> wrote:
   ...
> Well, the hard-core solution is to note that your class doesn't really
> deal with the type Bar, but deals with a subtype of Bar for which x >
> 23 in all cases. Since types are represented by classes, you should
> subclass Bar so you have a class that represents this subtype. The
> class is trivial (with Eiffel conventions):
> 
> class RESTRICTED_BAR
>inherits BAR
>invariant x > 23
> END

Yes, but then once again you have to "publicize" something (an aspect of
a class invariant) which should be dealt with internally; also, this
approach does not at all generalize to "bar1.x>23 OR bar2.x>23" and any
other nontrivial constraint involving expressions over the attributes of
more than a single instance and compile-time constants.
So, besides "hard-coreness", this is just too limited to serve.


> > So, one invariant that had better hold to ensure a certain instance foo
> > of Foo is not about to crash, may be, depending on how Foo's detailed
> > structual geometry is, something like:
> >
> >   foo.beam1.force_transferred_A <= foo.pier1.max_load_top AND
> >   foo.beam1.force_transferred_B <= foo.girder1.max_load_A
> >
> > The natural place to state this invariant is in class Foo, by expressing
> > 'foo' as 'self' in Python (or omitting it in languages which imply such
> > a lookup, of course).
> 
> I don't think that's the natural place. It's certainly one place to
> consider, and may be the best one. However, it might work equally well
> to use preconditions on the methods that add the beam and pier to Foo
> to verify that the beam and pier in question are valid. If the
> attributes of the beam and pier can't change, this would be the right
> way to do it.

What ever gave you the impression that the loads on beams and piers (and
therefore the forces they transfer) "can't change"?  That would be a
pretty weird way to design a framework for structural modeling in any
language except a strictly functional (immutable-data) one, and I've
already pointed out that functional languages, thanks to their immutable
data approach, are very different from ones (like Eiffel or Python)
where data routinely does get changed.


> > If I'm not allowed (because you think "it's silly"!) to express a class
> > invariant in terms of attributes of the attributes of an instance of
> > that class, I basically have to write tons of boilerplate, violating
> > encapsulation, to express what are really attributes of attributes of
> > foo "as if" they were attributes of foo directly, e.g.
> [...]
> > (etc).  Changing a lot of dots into underscores -- what a way to waste
> > programmer time!  And all to NO advantage, please note, since:
> 
> If you knew it was going to be to no advantage, why did you write the
> boilerplate? That's also pretty silly. Care to provide reasons for
> your wanting to do this?

If I had to program under a styleguide which enforces the style
preferences you have expressed, then the stupid boilerplate would allow
my program to be accepted by the stylechecker, thus letting my code be
committed into the source control system; presumably that would be
necessary for me to keep my job (thus drawing a salary) or getting paid
for my consultancy billed hours.  Just like, say, if your styleguide
forbade the use of vowels in identifiers, I might have a tool to convert
such vowels into consonants before I committed my code.  I'm not saying
there cannot be monetary advantage for me to obey the deleterious and
inappropriate rules of any given arbitrary styleguide: it may be a
necessary condition for substantial monetary gains or other preferments.
I'm saying there is no advantage whatsoever to the organization as a
whole in imposing arbitrary constraints such as, "no vowels in
identifiers", or, "no access to attributes of attributes in invariants".


> >> of. Invariants are intended to be used to check the state of the
> >> class, not the state of arbitary other objects. Doing the latter
> >> requires that you have to check the invariants of every object pretty
> >> much every time anything changes.
> > ...in the end the invariant DOES have to be checked when anything
> > relevant changes, anyway, with or without the silly extra indirection.
> 
> No, it doesn't have to be checked. Even invariants that don't suffer
> from this don't have to be checked. It would be nice if every
> invariant was checked every time it might be violated, but that's not
> practical. If checking relationships between attributes' attributes is
> the best you can do, you do that, knowing that instead of an invariant
> violation raising an exception after the code that violates it, the
> exception may be raised after the first method of your class that is
> called after the invariant is violated. That's harder to debug than
> the other way, but if it's the best you can get, it's the best you can
> get.

Let's silently gloss over the detail that calling "invariant" something
that is in fact not guaranteed

double underscore attributes?

2005-12-10 Thread bobueland
Entering
>>>dir(5)

I get
['__abs__', '__add__', '__and__', '__class__', '__cmp__', '__coerce__',
'__delattr__', '__div__', '__divmod__', '__doc__', '__float__',
'__floordiv__', '__getattribute__', '__getnewargs__', '__hash__',
'__hex__', '__init__', '__int__', '__invert__', '__long__',
'__lshift__', '__mod__', '__mul__', '__neg__', '__new__',
'__nonzero__', '__oct__', '__or__', '__pos__', '__pow__', '__radd__',
'__rand__', '__rdiv__', '__rdivmod__', '__reduce__', '__reduce_ex__',
'__repr__', '__rfloordiv__', '__rlshift__', '__rmod__', '__rmul__',
'__ror__', '__rpow__', '__rrshift__', '__rshift__', '__rsub__',
'__rtruediv__', '__rxor__', '__setattr__', '__str__', '__sub__',
'__truediv__', '__xor__']

Every time I use dir(some module) I get a lot of attributes with double
underscore, for example __add__. Ok, I thought __add__ must be a method
which I can apply like this
>>> 5.__add__(8)

However Python responded
SyntaxError: invalid syntax

I tried
>>> help(5.__add__)

but got
SyntaxError: invalid syntax

However when I tried with a list
>>> help([5,6].__add__)

I got
Help on method-wrapper object:

__add__ = class method-wrapper(object)
 |  Methods defined here:
 |
 |  __call__(...)
 |  x.__call__(...) <==> x(...)
 |
 |  __getattribute__(...)
 |  x.__getattribute__('name') <==> x.name

Not that I understand much of this but at least I got some response.

Now I went to Python Library Reference and searched for "__add__" but
got zero hits.

Could someone explain the use of __add__ (and similar double underscore
attributes)?

Bob
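For what it's worth, the SyntaxError is purely a tokenizer artifact: in
"5.__add__", the "5." is read as the start of a float literal, so the
parser never gets as far as a method call. Parenthesizing the integer
shows the method itself works fine (a quick sketch):

```python
# "5.__add__(8)" fails because "5." tokenizes as a float literal.
# Wrapping the int in parentheses avoids the ambiguity.
print((5).__add__(8))        # -> 13, which is what 5 + 8 does under the hood
print([5, 6].__add__([7]))   # -> [5, 6, 7], which is what [5, 6] + [7] does
```

These double-underscore methods are the hooks that Python's operators
call; they are documented in the language reference under "special
method names" rather than in the library reference, which is why the
search there came up empty.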



locawapp-001.zip

2005-12-10 Thread LocaWapp
LocaWapp: localhost web applications V.0.0.1 (2005 Dec 10)

Copyright (C) 2005 RDJ
This code is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY.

http://LocaWapp.blogspot.com



- Run with:

python run.py

- or:

python run.py 8081

- and browse with:

http://localhost:8080/locawapp/main.py

- or:

http://localhost:8081/locawapp/main.py

- If this is good for you, then you can help me developing with:

HTML + CSS + JS + PYTHON = LocaWapp
---


- Put your application in root:

[your_application]
[locawapp]
__init__.py
common.py
main.py
[static]
logo.gif
main.css
main.js
README.TXT
run.py

- Your application must have "init" and "main" (by convention):

[your_application]
__init__.py
main.py

- main.py is a web application, then it has "locawapp_main" function:

def locawapp_main(request):
    [...]
    return [string]

- See locawapp.main.py and locawapp.common.py
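A hypothetical minimal application module following that convention
might look like this (a sketch only: the dict-with-'content' return
shape is inferred from the server source shown further down in this
message, and all names are illustrative):

```python
# [your_application]/main.py -- a minimal LocaWapp-style app (illustrative).
def locawapp_main(request):
    # request carries the 'header', 'get', 'post' and 'session' dictionaries
    # that the server builds before dispatching to the application.
    name = request['get'].get('name', 'world')
    return {'content': '<html><body>Hello, %s!</body></html>' % name}
```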

- Send me your comment, thanks :-)
- Bye

http://LocaWapp.blogspot.com



class Server:
    def __init__(self,port):
        self.resp = None
        self.session = {'_stopServer':False}

        s = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
        s.bind(('127.0.0.1',port))
        s.listen(5)
        while 1:
            self.resp = ''
            conn,addr = s.accept()
            data = ''
            while 1:
                tmp = conn.recv(BUFFER)
                if tmp == '':
                    break
                data += tmp
                end = data.find('\r\n\r\n')
                if end != -1:
                    data2 = data[:end].split('\r\n')
                    method = data2[0].split(' ')
                    method[1] = urllib.unquote(method[1])
                    header = {}
                    for r in data2[1:]:
                        rr = r.split(': ')
                        header[rr[0]] = rr[1]
                    if method[0] == 'GET':
                        self.getResp(method,header,None)
                    elif method[0] == 'POST':
                        cnt_len = int(header['Content-Length'])
                        content = data[end+4:]
                        while len(content) < cnt_len:
                            tmp = conn.recv(BUFFER)
                            if tmp == '':
                                break
                            content += tmp
                        content = content[:cnt_len] # for MSIE 5.5 that appends \r\n
                        content = urllib.unquote(content)
                        self.getResp(method,header,content)
                    break
            conn.send(self.resp)
            conn.close()
            if self.session['_stopServer']:
                break
        s.close()

    def getResp(self,method,header,content):
        url = method[1].split('?')
        path = url[0][1:]
        ext = os.path.splitext(path)
        try:
            if ext[1] == '.py':
                get = {}
                if len(url) == 2:
                    get = self.getParams(url[1])
                post = {}
                if content != None:
                    post = self.getParams(content)
                request = {
                    'header':header,
                    'get':get,
                    'post':post,
                    'session':self.session
                }
                exec 'import ' + ext[0].replace('/','.') + ' as app'
                response = app.locawapp_main(request)
                self.resp = self.makeResp(200,'text/html',response['content'])
            else:
                try:
                    f = open(path,'rb')
                    fs = f.read()
                    f.close()
                    self.resp = self.makeResp(200,self.getCntType(ext[1]),fs)
                except IOError:

Re: slice notation as values?

2005-12-10 Thread Devan L

Antoon Pardon wrote:
> On 2005-12-10, Duncan Booth <[EMAIL PROTECTED]> wrote:
[snip]
> >> I also think that other functions could benefit. For instance suppose
> >> you want to iterate over every second element in a list. Sure you
> >> can use an extended slice or use some kind of while. But why not
> >> extend enumerate to include an optional slice parameter, so you could
> >> do it as follows:
> >>
> >>   for el in enumerate(lst,::2)
> >
> > 'Why not'? Because it makes for a more complicated interface for something
> > you can already do quite easily.
>
> Do you think so? This IMO should provide (0,lst[0]), (2,lst[2]),
> (4,lst[4]) ...
>
> I haven't found a way to do this easily. Except for something like:
>
> start = 0
> while start < len(lst):
>   yield start, lst[start]
>   start += 2
>
> But if you accept this, then there was no need for enumerate in the
> first place. So eager to learn something new, how do you do this
> quite easily?

>>> lst = ['ham','eggs','bacon','spam','foo','bar','baz']
>>> list(enumerate(lst))[::2]
[(0, 'ham'), (2, 'bacon'), (4, 'foo'), (6, 'baz')]

No changes to the language necessary.
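For large inputs, the intermediate list can also be skipped entirely;
itertools produces the same pairs lazily (a sketch with the same
semantics as the extended slice above):

```python
from itertools import islice

lst = ['ham', 'eggs', 'bacon', 'spam', 'foo', 'bar', 'baz']
# islice(iterable, start, stop, step) slices any iterator without
# materializing it; stop=None means "to the end".
print(list(islice(enumerate(lst), 0, None, 2)))
# -> [(0, 'ham'), (2, 'bacon'), (4, 'foo'), (6, 'baz')]
```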



Re: binascii.crc32 results not matching

2005-12-10 Thread Raymond L. Buvel
Larry Bates wrote:


Looking over the code, it seems very inefficient and hard to understand.
 You really should check out the following.

http://sourceforge.net/projects/crcmod/

It will allow you to generate efficient CRC functions for use in Python
and in C or C++.  The only thing you need to input is the polynomial,
the bit ordering, and the starting value.  The unit test gives a number
of polynomials including the one you are using which is:

polynomial: 0x104C11DB7, bit reverse algorithm

If you are just looking for a utility on Linux to do nightly checking of
files, I strongly recommend md5sum.  My timing tests indicate that the
MD5 algorithm is comparable or slightly faster than a 32-bit CRC and
certainly faster than the code you are trying to port.  It also has the
advantage of being a standard Linux command so you don't need to code
anything.

Ray


Re: ANN: pygene - genetic algorithms package

2005-12-10 Thread eXt
Erik Max Francis wrote:
> Sure thing.  Obviously I'll post an announcement here when it's ready.
I'm really happy to see that someone is working on Python based GP 
implementation :) I'm currently trying to get into GP world (I'm the GP 
newbie you talked about :P) and, as I'm a Python programmer, I look 
towards Python based solutions. Unfortunately there are no active Python 
GP projects (maybe except Pyro, but I'm not sure how GP is implemented 
there) so I'm forced to play with Java based systems, which isn't what I 
like.

Are you able to give any approximated date of PSI release?

Cheers


-- 
eXt



Re: calculate system disk space

2005-12-10 Thread Heiko Wundram
[EMAIL PROTECTED] wrote:
> A little somehting I rigged up when I found the Python call to be Linux
> specific:

os.stat isn't Linux-specific, isn't even Unix-specific, works just fine
under Windows. Under Windows you don't have sparse files though, so there
are no fields which give you the block-size of the device or the
block-count of a file, just the st_size field which gives you the data size
of the file.
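That difference is easy to see with a short sketch (st_size is portable;
the block-related fields are guarded with getattr since, as noted, they
are absent outside Unix):

```python
import os
import tempfile

# Write a 5-byte file and stat it.
fd, name = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)

st = os.stat(name)
print(st.st_size)                      # -> 5 on every platform
print(getattr(st, 'st_blocks', None))  # block count, or None where absent
os.unlink(name)
```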

--- Heiko.


Re: Proposal: Inline Import

2005-12-10 Thread Shane Hathaway
Benji York wrote:
> Shane Hathaway wrote:
> 
>> Benji York wrote:
>>
>>> OK, good.  You won't have to worry about that. :)
>>
>>
>> You didn't give a reason for disliking it.
> 
> 
> Oh, I don't particularly dislike it.  I hadn't come up with a reason to 
> like or dislike it, other than a predilection for the status quo.

I definitely sympathize.  I don't want to see random changes in Python, 
either.  The inline import idea was a bit intriguing when it surfaced, 
though. :-)

Shane


Re: Managing import statements

2005-12-10 Thread Shane Hathaway
Chris Mellon wrote:
> On 12/10/05, Shane Hathaway <[EMAIL PROTECTED]> wrote:
>>I'm surprised this problem isn't more familiar to the group.  Perhaps
>>some thought I was asking a newbie question.  I'm definitely a newbie in
>>the sum of human knowledge, but at least I've learned some tiny fraction
>>of it that includes Python, DRY, test-first methodology, OOP, design
>>patterns, XP, and other things that are commonly understood by this
>>group.  Let's move beyond that.  I'm looking for ways to gain just a
>>little more productivity, and improving the process of managing imports
>>could be low-hanging fruit.
> 
> It is probably because most people don't regularly switch that much
> code around, or use that many modules. I think the fact that you move
> that much code between modules is probably a code smell in and of
> itself - you're clearly moving and changing responsibilities.
> Refactoring in smaller chunks, less extreme refactoring, correcting
> the imports as you go, and avoiding inline imports without a really
> good reason will probably help you a lot more than a new syntax.

That's an insightful suggestion, but I'm pretty sure I'm doing it right. 
:-)  I fill my mind with the way the code should look and behave when 
it's done, then I go through the code and change everything that doesn't 
match the picture.  As the code moves closer to production and bugs 
become more expensive, I add a step to the process where I perform more 
formal refactoring, but that process takes much longer.  The problems 
caused by the informal process are less expensive than the formal process.

> Yes. Spend an afternoon looking at PyLints options, get rid of the
> stuff you don't like, and use it regularly. PyDev has PyLint
> integration, which is nice.

Ok, thanks.

> I don't think I've ever imported more than a dozen modules into any
> given file. I rarely find it neccesary to move huge chunks of code
> between my modules - the biggest such I did was when I moved from a
> single-file proof of concept to a real modular structure, and that
> only took me an hour or so. If I'd done it modularly to start with it
> would have been fine.

Well, code moves a lot in the immature phases of large applications like 
Zope and distributed systems.

Shane


Re: Proposal: Inline Import

2005-12-10 Thread Benji York
Shane Hathaway wrote:
> Benji York wrote:
> 
>> OK, good.  You won't have to worry about that. :)
> 
> You didn't give a reason for disliking it.

Oh, I don't particularly dislike it.  I hadn't come up with a reason to 
like or dislike it, other than a predilection for the status quo.
--
Benji York


Re: Another newbie question

2005-12-10 Thread Mike Meyer
[EMAIL PROTECTED] (Alex Martelli) writes:
> Mike Meyer <[EMAIL PROTECTED]> wrote:
>...
>> >> it. Nothing you do with zim.foo or zim.foo.bar can change the state of
>> >> zim. The only invariants you need to check are bar's, which you do at
>> >> the exit to it's baz method.
>> > So foo's class is not allowed to have as its invariant any formula
>> > depending on the attributes of its attribute bar, such as "bar.x>23" or
>> > the like?
>> Of course you can do such things. But it's a silly thing to do. That
> I guess this is the crux of our disagreement -- much like, it seems to
> me, your disagreement with Xavier and Steven on the other half of this
> thread, as I'll try to explain in the following.
>> invariant should be written as x > 23 for the class bar is an instance
> Let's, for definiteness, say that bar is an instance of class Bar.  Now,
> my point is that absolutely not all instances of Bar are constrained to
> always have their x attribute >23 -- in general, their x's can vary all
> over the place; rather, the constraint applies very specifically to this
> one instance of Bar -- the one held by foo (an instance of Foo) as foo's
> attribute bar.

Well, the hard-core solution is to note that your class doesn't really
deal with the type Bar, but deals with a subtype of Bar for which x >
23 in all cases. Since types are represented by classes, you should
subclass Bar so you have a class that represents this subtype. The
class is trivial (with Eiffel conventions):

class RESTRICTED_BAR
   inherits BAR
   invariant x > 23
END
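A rough Python rendering of that subtype (a sketch only: Python has no
declared class invariants, so the constraint is checked at assignment
time through a property, written here in present-day syntax):

```python
class Bar(object):
    def __init__(self, x):
        self.x = x

class RestrictedBar(Bar):
    """A Bar whose x must always be > 23; checked on every assignment."""
    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):
        if not value > 23:
            raise ValueError("invariant violated: x > 23")
        self._x = value

b = RestrictedBar(30)    # fine
print(b.x)               # -> 30
# RestrictedBar(10) would raise ValueError
```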

> So, one invariant that had better hold to ensure a certain instance foo
> of Foo is not about to crash, may be, depending on how Foo's detailed
> structural geometry is, something like:
>
>   foo.beam1.force_transferred_A <= foo.pier1.max_load_top AND
>   foo.beam1.force_transferred_B <= foo.girder1.max_load_A
>
> The natural place to state this invariant is in class Foo, by expressing
> 'foo' as 'self' in Python (or omitting it in languages which imply such
> a lookup, of course).

I don't think that's the natural place. It's certainly one place to
consider, and may be the best one. However, it might work equally well
to use preconditions on the methods that add the beam and pier to Foo
to verify that the beam and pier in question are valid. If the
attributes of the beam and pier can't change, this would be the right
way to do it.

> If I'm not allowed (because you think "it's silly"!) to express a class
> invariant in terms of attributes of the attributes of an instance of
> that class, I basically have to write tons of boilerplate, violating
> encapsulation, to express what are really attributes of attributes of
> foo "as if" they were attributes of foo directly, e.g.
[...]
> (etc).  Changing a lot of dots into underscores -- what a way to waste
> programmer time!  And all to NO advantage, please note, since:

If you knew it was going to be to no advantage, why did you write the
boilerplate? That's also pretty silly. Care to provide reasons for
your wanting to do this?

>> of. Invariants are intended to be used to check the state of the
>> class, not the state of arbitary other objects. Doing the latter
>> requires that you have to check the invariants of every object pretty
>> much every time anything changes.
> ...in the end the invariant DOES have to be checked when anything
> relevant changes, anyway, with or without the silly extra indirection.

No, it doesn't have to be checked. Even invariants that don't suffer
from this don't have to be checked. It would be nice if every
invariant was checked every time it might be violated, but that's not
practical. If checking relationships between attributes' attributes is
the best you can do, you do that, knowing that instead of an invariant
violation raising an exception after the code that violates it, the
exception may be raised after the first method of your class that is
called after the invariant is violated. That's harder to debug than
the other way, but if it's the best you can get, it's the best you can
get.

> But besides the wasted work, there is a loss of conceptual integrity: I
> don't WANT Foo to have to expose the internal details that beam1's
> reference point A transfers the force to pier1's top, etc etc, elevating
> all of these internal structural details to the dignity of attributes of
> Foo.  Foo should expose only its externally visible attributes: loads
> and forces on all the relevant points, geometric details of the
> exterior, and structural parameters that are relevant for operating
> safety margins, for example.

You're right. Choosing to do that would be a bad idea. I have no idea
why you would do that in any language I'm familiar with. I'd be
interested in hearing about the language you use that requires you to
do that.

>> Invariants are a tool. Used wisely, they make finding and fixing some
>> logic bugs much easier than it would be otherwise. Used unwisely, they
>> don't do anything but make the code

Re: Proposal: Inline Import

2005-12-10 Thread Thomas Heller
Shane Hathaway <[EMAIL PROTECTED]> writes:

> Xavier Morel wrote:
>> Shane Hathaway wrote:
>>
>>>Thoughts?
>>  >>> import re; name_expr = re.compile('[a-zA-Z]+')
>>  >>> name_expr
>> <_sre.SRE_Pattern object at 0x00F9D338>
>>  >>>
>> the import statement can be called anywhere in the code, why would
>> you add strange syntactic sugar that doesn't actually bring anything?
>
> That syntax is verbose and avoided by most coders because of the speed
> penalty.  It doesn't replace the idiom of importing everything at the
> top of the module.
>
> What's really got me down is the level of effort required to move code
> between modules.  After I cut 100 lines from a 500 line module and
> paste them to a different 500 line module, I have to examine every
> import in both modules as well as examine the code I moved for missing
> imports. And I still miss a lot of cases.  My test suite catches a lot
> of the mistakes, but it can't catch everything.

I understand this use case.

You can use pychecker to find NameErrors without actually running the
code.  Unfortunately, it doesn't (at least not always) find imports
which are not needed.

Thomas


Re: Managing import statements

2005-12-10 Thread Shane Hathaway
Jean-Paul Calderone wrote:
> On Sat, 10 Dec 2005 02:21:39 -0700, Shane Hathaway <[EMAIL PROTECTED]> wrote:
>>How about PyLint / PyChecker?  Can I configure one of them to tell me
>>only about missing / extra imports?  Last time I used one of those
>>tools, it spewed excessively pedantic warnings.  Should I reconsider?
> 
> 
> I use pyflakes for this: .  The 
> *only* things it tells me about are modules that are imported but never used 
and names that are used but not defined.  Its false-positive rate is 
> something like 1 in 10,000.

That's definitely a good lead.  Thanks.

> This is something I've long wanted to add to pyflakes (or as another feature 
> of pyflakes/emacs integration).

Is there a community around pyflakes?  If I wanted to contribute to it, 
could I?

Shane


Re: calculate system disk space

2005-12-10 Thread [EMAIL PROTECTED]
A little somehting I rigged up when I found the Python call to be Linux
specific:

"""
mount_list
Taste the system and return a list of mount points.
On UNIX this will return what a df will return
On DOS based systems run through a list of common drive letters and
test them
to see if a mount point exists. Whether a floppy or CDROM on DOS is
currently active may present challenges.
Curtis W. Rendon 6/27/200 v.01
  6/27/2004 v.1 using df to make portable, and some DOS tricks to get
active
drives. Will try chkdsk on DOS to try to get drive size as statvfs()
doesn't exist on any system I have access to...

"""
import sys,os,string
from stat import *

def mount_list():
  """
  returns a list of mount points
  """

  doslist=['a:\\','b:\\','c:\\','d:\\','e:\\','f:\\','g:\\','h:\\','i:\\',
           'j:\\','k:\\','l:\\','m:\\','n:\\','o:\\','p:\\','q:\\','r:\\',
           's:\\','t:\\','u:\\','v:\\','w:\\','x:\\','y:\\','z:\\']
  mount_list=[]

  """
  see what kind of system
  if UNIX like
     use os.path.ismount(path) from /... use df?
  if DOS like
     os.path.exists(path) for a list of common drive letters
  """
  if sys.platform[:3] == 'win':
    #dos like
    doslistlen=len(doslist)
    for apath in doslist:
      if os.path.exists(apath):
        #maybe stat check first... yeah, it's there...
        if os.path.isdir(apath):
          mode = os.stat(apath)
          try:
            dummy=os.listdir(apath)
            mount_list.append(apath)
          except:
            continue
        else:
          continue
    return (mount_list)

  else:
    #UNIX like
    """
    AIX and SYSV are somewhat different than the GNU/BSD df, try to catch
    them. This is for AIX, at this time I don't have a SYS5 available to
    see what the sys.platform returns... CWR
    """
    if 'aix' in sys.platform.lower():
      df_file=os.popen('df')
      while True:
        df_list=df_file.readline()
        if not df_list:
          break #EOF
        dflistlower = df_list.lower()
        if 'filesystem' in dflistlower:
          continue
        if 'proc' in dflistlower:
          continue
        file_sys,disc_size,disc_avail,disc_cap_pct,inodes,inodes_pct,mount=df_list.split()
        mount_list.append(mount)

    else:
      df_file=os.popen('df')
      while True:
        df_list=df_file.readline()
        if not df_list:
          break #EOF
        dflistlower = df_list.lower()
        if 'filesystem' in dflistlower:
          continue
        if 'proc' in dflistlower:
          continue
        file_sys,disc_size,disc_used,disc_avail,disc_cap_pct,mount=df_list.split()
        mount_list.append(mount)

    return (mount_list)


"""
have another function that returns max,used for each...
   maybe in discmonitor
"""
def size(mount_point):
  """
  """
  if sys.platform[:3] == 'win':
    #dos like
    dos_cmd='dir /s '+ mount_point
    check_file=os.popen(dos_cmd)
    while True:
      check_list=check_file.readline()
      if not check_list:
        break #EOF
      if 'total files listed' in check_list.lower():
        check_list=check_file.readline()
        if 'file' in check_list.lower():
          if 'bytes' in check_list.lower():
            numfile,filtxt,rawnum,junk=check_list.split(None,3)
            total_used=string.replace(rawnum,',','')
            #return (0,int(total_size),int(total_size))
            #break
            check_list=check_file.readline()
            if 'dir' in check_list.lower():
              if 'free' in check_list.lower():
                numdir,dirtxt,rawnum,base,junk=check_list.split(None,4)
                multiplier=1
                if 'mb' in base.lower():
                  multiplier=100
                if 'kb' in base.lower():
                  multiplier=1000
                rawnum=string.replace(rawnum,',','')
                free_space=float(rawnum)*multiplier
                #print (0,int(free_space)+int(total_used),int(total_used))
                return (0,int(free_space)+int(total_used),int(total_used))
      else:
        continue

  else:
    #UNIX like
    """
    AIX and SYSV are somewhat different than the GNU/BSD df, try to catch
    them. This is for AIX, at this time I don't have a SYS5 available to
    see what the sys.platform returns... CWR
    """
    df_cmd = 'df '+ mount_point
    if 'aix' in sys.platform.lower():
      df_file=os.popen(df_cmd)
      while True:
        df_list=df_file.readline()
        if not df_list:
          break #EOF
        dflistlower = df_list.lower()
        if 'filesystem' in dflistlower:
          continue
        if 'proc' in dflistlower:
          continue
        file_sys,disc_size,disc_avail,disc_cap_pct,inodes,inodes_pct,mount=df_list.split()
        return(0,int(disc_size),int(disc_size)-int(disc_avail))

    else:
      df_file=os.popen(df_cmd)
      while True:
        df_list=df_file.readline()
        if not df_list:
          break #EOF
        dflistlower = df_list.lower()

Re: Managing import statements

2005-12-10 Thread Kent Johnson
Jean-Paul Calderone wrote:
> On Sat, 10 Dec 2005 02:21:39 -0700, Shane Hathaway 
> <[EMAIL PROTECTED]> wrote:
>> How about PyLint / PyChecker?  Can I configure one of them to tell me
>> only about missing / extra imports?  Last time I used one of those
>> tools, it spewed excessively pedantic warnings.  Should I reconsider?
> 
> 
> I use pyflakes for this: .  
> The *only* things it tells me about are modules that are imported but 
> never used and names that are used but not defined.

Do any of these tools (PyLint, PyChecker, pyflakes) work with Jython? To 
do so they would have to work with Python 2.1, primarily...

Thanks,
Kent


Re: Another newbie question

2005-12-10 Thread Mike Meyer
Steven D'Aprano <[EMAIL PROTECTED]> writes:
>> In particular,
>> you can get most of your meaningless methods out of a properly
>> designed Coordinate API. For example, add/sub_x/y_ord can all be
>> handled with move(delta_x = 0, delta_y = 0).
>
> Here is my example again:
>
> [quote]
> Then, somewhere in my application, I need twice the 
> value of the y ordinate. I would simply say:
>
> value = 2*pt.y
> [end quote]
>
> I didn't say I wanted a coordinate pair where the y ordinate was double
> that of the original coordinate pair. I wanted twice the y ordinate, which
> is a single real number, not a coordinate pair.
  
Here you're not manipulating the attribute to change the class -
you're just using the value of the attribute. That's what they're
there for.

>> And you've once again missed the point. The reason you don't
>> manipulate the attributes directly is because it violates
>> encapsulation, and tightens the coupling between your class and the
>> classes it uses. It means you see the implementation details of the
>> classes you are using, meaning that if that changes, your class has to
>> be changed to match.
> Yes. And this is a potential problem for some classes. The wise programmer
> will recognise which classes have implementations likely to change, and
> code defensively by using sufficient abstraction and encapsulation to
> avoid later problems.

Except only the omniscient programmer can do that perfectly. The
experienced programmer knows that requirements change over the lifetime
of a project, including things that the customer swears on a stack of
holy books will never change.

> Do you lie awake at nights worrying that in Python 2.6 sys.stdout will be
> renamed to sys.standard_output, and that it will no longer have a write()
> method? According to the "law" of Demeter, you should, and the writers of
> the sys module should have abstracted the fact that stdout is a file away
> by providing a sys.write_to_stdout() function.
> That is precisely the sort of behaviour which I maintain is unnecessary.

And that's not the kind of behavior I'm talking about here, nor is it
the kind of behavior that the LoD is designed to help you with (those
are two different things).

>>> The bad side of the Guideline of Demeter is that following it requires
>>> you to fill your class with trivial, unnecessary getter and setter
>>> methods, plus methods for arithmetic operations, and so on.
>> 
>> No, it requires you to actually *think* about your API, instead of
>> just allowing every class to poke around inside your implementation.
>
> But I *want* other classes to poke around inside my implementation.
> That's a virtue, not a vice. My API says:
>
> "In addition to the full set of methods which operate on the coordinate as
> a whole, you can operate on the individual ordinates via instance.x and
> instance.y which are floats."

That's an API which makes changing the object more difficult. It may
be the best API for the case at hand, but you should be aware of the
downsides.

> Your API says:

Actually, this is *your* API.

> "In addition to the full set of methods which operate on the coordinate as
> a whole, you can operate on the individual ordinates via methods add_x,
> add_y, mult_x, mult_y, sub_x, sub_y, rsub_x, rsub_y, div_x, div_y, rdiv_x,
> rdiv_y, exp_x, exp_y, rexp_x, rexp_y...; the APIs of these methods are: ... "

That would be a piss-poor API design.  Any designer who knows what they
are doing should be able to turn out a better API than that given a
reasonable set of real-world requirements.

> My class is written, tested and complete before you've even decided on
> your API. And you don't even really get the benefit of abstraction: I have
> two public attributes (x and y) that I can't change without breaking other
> people's code, you've got sixteen-plus methods that you can't change
> without breaking other people's code.
> The end result is that your code is *less* abstract than mine: your code
> has to specify everything about ordinates: they can be added, they can be
> subtracted, they can be multiplied, they can be printed, and so on. That's
> far more concrete and far less abstract than mine, which simply says
> ordinates are floats, and leave the implementation of floats up to Python.

Again, this is *your* API, not mine. You're forcing an ugly, obvious
API instead of assuming the designer has some smidgen of ability. I've
already pointed out one trivial way to deal with this, and there are
others.

 http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.


Re: Proposal: Inline Import

2005-12-10 Thread Mike Meyer
Shane Hathaway <[EMAIL PROTECTED]> writes:
> Let me fully elaborate the heresy I'm suggesting: I am talking about
> inline imports on every other line of code.  The obvious
> implementation would drop performance by a double digit percentage.

No, it wouldn't. The semantics of import pretty much require that the
drop in performance would most likely be negligible.

>> And your proposal is doing the import anyway, just under the
>> hood. How will you avoid the same penalty?
> The more complex implementation, which I suggested in the first
> message, is to maintain a per-module dictionary of imported objects
> (distinct from the global namespace.)  This would make inline imports
> have almost exactly the same runtime cost as a global namespace lookup.

If you put an import near every reference to a module, then each
import would "have almost exactly the same runtime cost as a global
namespace lookup." Your per-module dictionary of imported objects
doesn't represent a significant improvement in module lookup time.
The extra cost comes from having to look up the module in the
namespace after you import it. However, the actual import has at most
the same runtime cost as looking up the module name, and may cost
noticeably less. These costs will be swamped by the lookup cost for
non-module symbols in most code. If looking up some symbols is a
noticeable part of your run-time, the standard fix is to bind the
objects you are finding into your local namespace. Import already
allows this, with "from foo import bar". That will make references to
the name run as much faster than your proposed inline import as it
runs faster than doing an import before every line that references a
module.

In summary, the performance hit from doing many imports may be
significant compared to the cost of doing only one import, but that still
represents only a small fraction of the total runtime of most code. In
the cases where that isn't the case, we already have a solution
available with better performance than any of the previously discussed
methods.
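The point about import semantics is easy to demonstrate: a repeated
import is little more than a cache lookup, which is why inline imports
after the first one are cheap.

```python
import sys

import math
first = sys.modules['math']

# A second import of an already-loaded module does not re-execute it;
# it only re-binds the name to the cached module object in sys.modules.
import math

print(math is first is sys.modules['math'])   # True
```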

  http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: binascii.crc32 results not matching

2005-12-10 Thread Fredrik Lundh
Larry Bates wrote:

> def CalculateCrc(buffer, crc):

/snip/

> The algorithm looks very much like the source code for
> binascii.crc32 (but I'm not a C programmer).

does your Python version give the right result ?   if so, the following
might be somewhat helpful:

def test1(text, crc=0):
    # larry's code
    result = CalculateCrc(text, crc)
    return hex(result[0])

def test2(text, crc=0):
    import Image

    # using pil's crc32 api
    a = (crc >> 16) ^ 65535
    b = (crc & 65535) ^ 65535
    a, b = Image.core.crc32(text, (a, b))
    a ^= 65535
    b ^= 65535
    return "0x%04X%04XL" % (a, b)

def test(text):
    print test1(text), test2(text)

test("hello")
test("goodbye")

this prints

0xF032519BL 0xF032519BL
0x90E3070AL 0x90E3070AL

no time to sort out the int/long mess for binascii.crc32, but it
probably needs the same tweaking as PIL (which handles the CRC as
two 16-bit numbers, to avoid sign hell).





-- 
http://mail.python.org/mailman/listinfo/python-list


Re: problems binding a dictionary

2005-12-10 Thread Sam Pointon
You're not 'exploding' the dict into the param1='blah' etc. form -- you're
actually passing it in as a single dict object. To solve this, add a **
to the front of a dict you want to explode in a function call, just as
you'd add a * to explode a sequence.
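A minimal sketch of the difference (the function and dict names are
just illustrations):

```python
def foo(**kwargs):
    return sorted(kwargs)

d = dict(param1='blah', param2='bleh', param3='blih')

# foo(d) raises TypeError: d arrives as one positional argument.
# Exploding it with ** passes each key/value as a keyword argument:
print(foo(**d))   # ['param1', 'param2', 'param3']
```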

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: problems binding a dictionary

2005-12-10 Thread vida00

[EMAIL PROTECTED] wrote:
> this has to be a very silly thing.
>
> I have a function foo taking a dictionary as parameters. i.e.: def
> foo(**kwargs): pass
> when I call foo(param1='blah',param2='bleh',param3='blih') everything
> is fine.
> but when I do:
> >>> def foo(**kwargs):
> ... pass
> ...
> >>> d=dict(param1='blah',param2='bleh',param3='blih')
> >>> foo(d)
>
> I get:
>
> Traceback (most recent call last):
>   File "", line 1, in ?
> TypeError: foo() takes exactly 0 arguments (1 given)
>
> Why? how do I pass the dictionary *d* to foo()?
> Thanks,
> 
> - Josh.

I mean, short of defining as foo(*args), or foo(dict).

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Thoughts on object representation in a dictionary

2005-12-10 Thread Paul Rubin
"py" <[EMAIL PROTECTED]> writes:
> Well the other thing is that I am allowed to store strings in this
> dictionary...so I can't just store the Engine and Body object and later
> use them.  this is just a requirement (which i dont understand
> either)...but its what I have to do.

Probably so that the object store can later be moved offline, with the
shelve module or DB API or something like that.

> So my question is there a good way of storing this information in a
> nested dictionary such that if I receive this dictionary I could easily
> pull it apart and create Engine and Body objects with the information?
> Any suggestions at all?  keep in mind I am limited to using a
> dictionary of strings...thats it.

The spirit of the thing would be to serialize the objects, maybe with
cPickle, as Steven suggests.  A kludgy way to thwart the policy
might be to store the objects in a list instead of a dictionary:
x[0], x[1], etc.  Then have a dictionary mapping keys to subscripts
in the list.  The subscripts would be stringified ints: '0','1',... .
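A sketch of the serialization route (the Engine class here is a
hypothetical stand-in for the poster's objects; cPickle was the
Python 2 name of today's pickle module, and dumps returns a byte
string):

```python
import pickle  # cPickle in Python 2

class Engine:
    def __init__(self, cylinders):
        self.cylinders = cylinders

# The store holds only strings, as the requirement demands:
store = {'engine': pickle.dumps(Engine(8))}

# Whoever receives the dictionary can rebuild the real object:
engine = pickle.loads(store['engine'])
print(engine.cylinders)   # 8
```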
-- 
http://mail.python.org/mailman/listinfo/python-list


problems binding a dictionary

2005-12-10 Thread vida00
this has to be a very silly thing.

I have a function foo taking a dictionary as parameters. i.e.: def
foo(**kwargs): pass
when I call foo(param1='blah',param2='bleh',param3='blih') everything
is fine.
but when I do:
>>> def foo(**kwargs):
... pass
...
>>> d=dict(param1='blah',param2='bleh',param3='blih')
>>> foo(d)

I get:

Traceback (most recent call last):
  File "", line 1, in ?
TypeError: foo() takes exactly 0 arguments (1 given)

Why? how do I pass the dictionary *d* to foo()?
Thanks,

- Josh.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: slice notation as values?

2005-12-10 Thread Antoon Pardon
On 2005-12-10, Duncan Booth <[EMAIL PROTECTED]> wrote:
> Antoon Pardon wrote:
>> In general I use slices over a tree because I only want to iterate
>> over a specific subdomain of the keys. I'm not interested in making
>> a tree over the subdomain. Making such a subtree would be an
>> enormous waste of resources. 
>
> Probably not unless you have really large data structures. If you have 
> something like a dbhash which would be inefficient to copy then both the 
> original view of the data structure and the slices could share the same 
> data. Creating a slice doesn't *have* to copy anything just so long as the 
> semantics are clear.
>
>> With slice notation you could have the following two cases:
>> 
>>   for key in tree.iterkeys('a':)
>> 
>>   for key in tree.iterkeys(:'b')
>
>  x['a':] is short for x['a':None]
>  x[:'b'] is short for x[None:'b']

That is beside the point. The user doesn't have to know that
in order to use slices. In point of fact I think that if
tomorrow they changed the default to something different
not a single program would break.

>> But you can't do
>> 
>>   for key in tree.iterkeys('a',)
>> 
>> or more regrettably, you can't do:
>> 
>>   for key in tree.iterkeys(,'b')
>
> If your datatype defines iterkeys to accept start and end arguments then 
> you can do:
>
>for key in tree.iterkeys('a',None)
>for key in tree.iterkeys(None,'b')
>
> which is directly equivalent to the slices, or you can do:
>
>for key in tree.iterkeys(start='a')
>for key in tree.iterkeys(stop='b')
>
> which is more explicit.

Yes, we could do all that. The question remains why we should burden
the user with all this extra information he has to know in order
to use this method, when there is a clear notation he can use
without all this.

The user doesn't have to type:

  lst[5:None] or lst[None:7]

nor does he have to type something like

  lst[start=5] or lst[stop=7]


There are circumstances when the user needs to provide slice
information to a function. Why not allow him to use the same notation
he can use in subscription? What reason is there to limit a specific
notation for a value to specific circumstances? To me this looks as
arbitrary as disallowing bracket notation as an argument, so that you
would be obliged to use list((a, b, ...)) in calls instead of
[a, b, ...].

It wouldn't prevent you from doing anything you can do now with
Python; it would only make a number of things unnecessarily
cumbersome.

So yes, my proposal will not allow you to do anything you can't
already do now. It would just allow you to do a number of things in a
less cumbersome way.

>> I also think that other functions could benefit. For instance suppose
>> you want to iterate over every second element in a list. Sure you
>> can use an extended slice or use some kind of while. But why not
>> extend enumerate to include an optional slice parameter, so you could
>> do it as follows:
>> 
>>   for el in enumerate(lst,::2)
>
> 'Why not'? Because it makes for a more complicated interface for something 
> you can already do quite easily.

Do you think so? This IMO should provide (0,lst[0]), (2,lst[2]),
(4,lst[4]) ...

I haven't found a way to do this easily, except for something like:

start = 0
while start < len(lst):
  yield start, lst[start]
  start += 2

But if you accept this, then there was no need for enumerate in the
first place. So eager to learn something new, how do you do this
quite easily?
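For what it's worth, one reasonably short way to get (0, lst[0]),
(2, lst[2]), ... with the existing tools is itertools.islice over
enumerate -- though it is arguably less readable than the slice
notation proposed above:

```python
from itertools import islice

lst = ['a', 'b', 'c', 'd', 'e']

# Take every second (index, element) pair from enumerate:
pairs = list(islice(enumerate(lst), 0, None, 2))
print(pairs)   # [(0, 'a'), (2, 'c'), (4, 'e')]
```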

-- 
Antoon Pardon


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: binascii.crc32 results not matching

2005-12-10 Thread Larry Bates
Peter Hansen wrote:
> Larry Bates wrote:
> 
>> I'm trying to get the results of binascii.crc32
>> to match the results of another utility that produces
>> 32 bit unsigned CRCs.  
> 
> 
> What other utility?  As Tim says, there are many CRC32s... the
> background notes on this one happen to stumble out at the top of the
> list in response to googling for "zip file crc32 checksum polynomial",
> though I'm sure there are easier ways.  The binascii docs say its CRC32
> is compatible with the Zip file checksum, but they don't describe it
> further.
> 
> Generally CRCs are described in terms of their "polynomial", though just
> quoting that isn't sufficient to describe their behaviour, but if you
> happen to know the polynomial for your utility, someone else can
> probably point you to a more appropriate routine, or perhaps explain
> what you were doing wrong if the binascii one is actually the right one..
> 
> -Peter
> 
It was a .DLL written by an employee that has long since
left the company.  We want to move the code to Linux for
nightly checking of files.  I don't know what to do but
post some long code.  See below:

/*
   INCLUDES
*/
#include 
#include 
#include 
#include 
#include 
#include "filecrc.h"

/*
   ModuleName: filecrc.c
   Author: Syscon Computers - Modified by Barry Weck
   Project:PowerBuilder External File CRC function
   Date created:   May. 18, 1999
   Last Modified:  Jun. 09, 1999
   Module Owner:   Syscon Computers

   Module description:

  This module implements an algorithm for calculating the CRC (Cyclic
  Redundancy Check) of a binary file.  This function is meant to be
  compiled as a 16 bit Microsoft Windows (Tm) DLL with either the 16 bit C
  compilers of Borland or Microsoft.  Compilation under other
  compilers has not been tested.  The requirement of "16-bit DLL" is
  necessitated by the version of PowerBuilder used by Syscon.

  This module was written by a third party and was then modified by
  subcontractor Barry Weck.  The code is copyrighted and owned by Syscon
  Computers, Inc. of Tuscaloosa Alabama (C) 1999.


*/
/*
   DATA
*/
static unsigned long ccitt_32[256] =
{
   0xUL, 0x77073096UL, 0xee0e612cUL, 0x990951baUL, 0x076dc419UL,
0x706af48fUL, 0xe963a535UL, 0x9e6495a3UL,
   0x0edb8832UL, 0x79dcb8a4UL, 0xe0d5e91eUL, 0x97d2d988UL, 0x09b64c2bUL,
0x7eb17cbdUL, 0xe7b82d07UL, 0x90bf1d91UL,
   0x1db71064UL, 0x6ab020f2UL, 0xf3b97148UL, 0x84be41deUL, 0x1adad47dUL,
0x6ddde4ebUL, 0xf4d4b551UL, 0x83d385c7UL,
   0x136c9856UL, 0x646ba8c0UL, 0xfd62f97aUL, 0x8a65c9ecUL, 0x14015c4fUL,
0x63066cd9UL, 0xfa0f3d63UL, 0x8d080df5UL,
   0x3b6e20c8UL, 0x4c69105eUL, 0xd56041e4UL, 0xa2677172UL, 0x3c03e4d1UL,
0x4b04d447UL, 0xd20d85fdUL, 0xa50ab56bUL,
   0x35b5a8faUL, 0x42b2986cUL, 0xdbbbc9d6UL, 0xacbcf940UL, 0x32d86ce3UL,
0x45df5c75UL, 0xdcd60dcfUL, 0xabd13d59UL,
   0x26d930acUL, 0x51de003aUL, 0xc8d75180UL, 0xbfd06116UL, 0x21b4f4b5UL,
0x56b3c423UL, 0xcfba9599UL, 0xb8bda50fUL,
   0x2802b89eUL, 0x5f058808UL, 0xc60cd9b2UL, 0xb10be924UL, 0x2f6f7c87UL,
0x58684c11UL, 0xc1611dabUL, 0xb6662d3dUL,
   0x76dc4190UL, 0x01db7106UL, 0x98d220bcUL, 0xefd5102aUL, 0x71b18589UL,
0x06b6b51fUL, 0x9fbfe4a5UL, 0xe8b8d433UL,
   0x7807c9a2UL, 0x0f00f934UL, 0x9609a88eUL, 0xe10e9818UL, 0x7f6a0dbbUL,
0x086d3d2dUL, 0x91646c97UL, 0xe6635c01UL,
   0x6b6b51f4UL, 0x1c6c6162UL, 0x856530d8UL, 0xf262004eUL, 0x6c0695edUL,
0x1b01a57bUL, 0x8208f4c1UL, 0xf50fc457UL,
   0x65b0d9c6UL, 0x12b7e950UL, 0x8bbeb8eaUL, 0xfcb9887cUL, 0x62dd1ddfUL,
0x15da2d49UL, 0x8cd37cf3UL, 0xfbd44c65UL,
   0x4db26158UL, 0x3ab551ceUL, 0xa3bc0074UL, 0xd4bb30e2UL, 0x4adfa541UL,
0x3dd895d7UL, 0xa4d1c46dUL, 0xd3d6f4fbUL,
   0x4369e96aUL, 0x346ed9fcUL, 0xad678846UL, 0xda60b8d0UL, 0x44042d73UL,
0x33031de5UL, 0xaa0a4c5fUL, 0xdd0d7cc9UL,
   0x5005713cUL, 0x270241aaUL, 0xbe0b1010UL, 0xc90c2086UL, 0x5768b525UL,
0x206f85b3UL, 0xb966d409UL, 0xce61e49fUL,
   0x5edef90eUL, 0x29d9c998UL, 0xb0d09822UL, 0xc7d7a8b4UL, 0x59b33d17UL,
0x2eb40d81UL, 0xb7bd5c3bUL, 0xc0ba6cadUL,
   0xedb88320UL, 0x9abfb3b6UL, 0x03b6e20cUL, 0x74b1d29aUL, 0xead54739UL,
0x9dd277afUL, 0x04db2615UL, 0x73dc1683UL,
   0xe3630b12UL, 0x94643b84UL, 0x0d6d6a3eUL, 0x7a6a5aa8UL, 0xe40ecf0bUL,
0x9309ff9dUL, 0x0a00ae27UL, 0x7d079eb1UL,
   0xf00f9344UL, 0x8708a3d2UL, 0x1e01f268UL, 0x6906c2feUL, 0xf762575dUL,
0x806567cbUL, 0x196c3671UL, 0x6e6b06e7UL,
   0xfed41b76UL, 0x89d32be0UL, 0x10da7a5aUL, 0x67dd4accUL, 0xf9b9df6fUL,
0x8ebeeff9UL, 0x17b7be43UL, 0x60b08ed5UL,
   0xd6d6a3e8UL, 0xa1d1937eUL, 0x38d8c2c4UL, 0x4fdff252UL, 0x

Re: ANN: Dao Language v.0.9.6-beta is release!

2005-12-10 Thread Tom Anderson
On Sat, 10 Dec 2005, Sybren Stuvel wrote:

> Zeljko Vrba enlightened us with:
>
>> Find me an editor which has folds like in VIM, regexp search/replace 
>> within two keystrokes (ESC,:), marks to easily navigate text in 2 
>> keystrokes (mx, 'x), can handle indentation-level matching as well as 
>> VIM can handle {}()[], etc.  And, unlike emacs, respects all (not just 
>> some) settings that are put in its config file. Something that works 
>> satisfactorily out-of-the box without having to learn a new programming 
>> language/platform (like emacs).
>
> Found it! VIM!

ED IS THE STANDARD TEXT EDITOR.

tom

-- 
Argumentative and pedantic, oh, yes. Although it's properly called
"correct" -- Huge
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Managing import statements

2005-12-10 Thread Jean-Paul Calderone
On Sat, 10 Dec 2005 02:21:39 -0700, Shane Hathaway <[EMAIL PROTECTED]> wrote:
>Let's talk about the problem I really want help with.  I brought up a
>proposal earlier, but it was only half serious.  I realize Python is too
>sacred to accept such a heretical change. ;-)
>
>Here's the real problem: maintaining import statements when moving
>sizable blocks of code between modules is hairy and error prone.
>
>I move major code sections almost every day.  I'm constantly
>restructuring the code to make it clearer and simpler, to minimize
>duplication, and to meet new requirements.  To give you an idea of the
>size I'm talking about, just today I moved around 600 lines between
>about 8 modules, resulting in a 1400 line diff.  It wasn't just
>cut-n-paste, either: nearly every line I moved needed adjustment to work
>in its new context.
>
>While moving and adjusting the code, I also adjusted the import
>statements.  When I finished, I ran the test suite, and sure enough, I
>had missed some imports.  While the test suite helps a lot, it's
>prohibitively difficult to cover all code in the test suite, and I had

I don't know about this :)

>lingering doubts about the correctness of all those import statements.
>So I examined them some more and found a couple more mistakes.
>Altogether I estimate I spent 20% of my time just examining and fixing
>import statements, and who knows what other imports I missed.
>
>I'm surprised this problem isn't more familiar to the group.  Perhaps
>some thought I was asking a newbie question.  I'm definitely a newbie in
>the sum of human knowledge, but at least I've learned some tiny fraction
>of it that includes Python, DRY, test-first methodology, OOP, design
>patterns, XP, and other things that are commonly understood by this
>group.  Let's move beyond that.  I'm looking for ways to gain just a
>little more productivity, and improving the process of managing imports
>could be low-hanging fruit.
>
>So, how about PyDev?  Does it generate import statements for you?  I've
>never succeeded in configuring PyDev to perform autocompletion, but if
>someone can say it's worth the effort, I'd be willing to spend time
>debugging my PyDev configuration.
>
>How about PyLint / PyChecker?  Can I configure one of them to tell me
>only about missing / extra imports?  Last time I used one of those
>tools, it spewed excessively pedantic warnings.  Should I reconsider?

I use pyflakes for this: .  The 
*only* things it tells me about are modules that are imported but never used 
and names that are used but not defined.  Its false-positive rate is something 
like 1 in 10,000.

>
>Is there a tool that simply scans each module and updates the import
>statements, subject to my approval?  Maybe someone has worked on this,
>but I haven't found the right Google incantation to discover it.

This is something I've long wanted to add to pyflakes (or as another feature of 
pyflakes/emacs integration).

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: newbie question

2005-12-10 Thread Dody Suria Wijaya
How about this:

   python your_program.py examle.txt

Bermi wrote:
> 
> how i can link it to read my file examle.txt?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: small inconsistency in ElementTree (1.2.6)

2005-12-10 Thread Damjan
>>> ascii strings and unicode strings are perfectly interchangable, with
>>> some minor exceptions.
>>
>> It's not only translate, it's decode too...
>
> why would you use decode on the strings you get back from ET ?

Long story... some time ago, when computers wouldn't support charsets,
people invented so-called "cyrillic fonts" - i.e. fonts that have
cyrillic glyphs mapped onto the latin positions. Since our cyrillic
alphabet has 31 characters, some characters in said fonts were mapped
to { or ~ etc. Of course this "solution" is awful, but it was the only
one at the time.

So I'm making a python script that takes an OpenDocument file and
translates it to UTF-8...

ps. I use translate now, but I was making a general note that unicode
and string objects are not 100% interchangeable. translate, encode and
decode are especially problematic.

anyway, I wrap the output of ET in unicode() now... I don't see
another, better solution.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Another newbie question

2005-12-10 Thread Alex Martelli
Mike Meyer <[EMAIL PROTECTED]> wrote:
   ...
> >> it. Nothing you do with zim.foo or zim.foo.bar can change the state of
> >> zim. The only invariants you need to check are bar's, which you do at
> >> the exit to it's baz method.
> > So foo's class is not allowed to have as its invariant any formula
> > depending on the attributes of its attribute bar, such as "bar.x>23" or
> > the like?
> 
> Of course you can do such things. But it's a silly thing to do. That

I guess this is the crux of our disagreement -- much like, it seems to
me, your disagreement with Xavier and Steven on the other half of this
thread, as I'll try to explain in the following.

> invariant should be written as x > 23 for the class bar is an instance

Let's, for definiteness, say that bar is an instance of class Bar.  Now,
my point is that absolutely not all instances of Bar are constrained to
always have their x attribute >23 -- in general, their x's can vary all
over the place; rather, the constraint applies very specifically to this
one instance of Bar -- the one held by foo (an instance of Foo) as foo's
attribute bar.

Let's try to see if I can make a trivially simple use case.  Say I'm
using a framework to model statical structures in civil engineering.
I have classes such as Truss, Beam, Pier, Column, Girder, and so forth.

So in a given structure (class Foo) I might have a certain instance of
Beam, attribute beam1 of instances of Foo, which carries a certain load
(dependent on the overall loads borne by each given instance of Foo),
and transfers it to an instance of Pier (attribute pier1 of instances of
Foo) and one of Girder (attribute girder1 ditto).

Each of these structural elements will of course be able to exhibit as
attributes all of its *individual* characteristics -- but the exact
manner of *relationship* between the elements depends on how they're
assembled in a given structure, and so it's properly the business of the
structure, not the elements.

So, one invariant that had better hold to ensure a certain instance foo
of Foo is not about to crash, may be, depending on how Foo's detailed
structual geometry is, something like:

  foo.beam1.force_transferred_A <= foo.pier1.max_load_top AND
  foo.beam1.force_transferred_B <= foo.girder1.max_load_A

The natural place to state this invariant is in class Foo, by expressing
'foo' as 'self' in Python (or omitting it in languages which imply such
a lookup, of course).
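A bare-bones sketch of that point (the class and attribute names
follow the hypothetical example above; the numbers are arbitrary):

```python
# Hypothetical stand-ins for the structural-engineering framework classes.
class Beam:
    def __init__(self, force_transferred_A, force_transferred_B):
        self.force_transferred_A = force_transferred_A
        self.force_transferred_B = force_transferred_B

class Pier:
    def __init__(self, max_load_top):
        self.max_load_top = max_load_top

class Girder:
    def __init__(self, max_load_A):
        self.max_load_A = max_load_A

class Foo:
    def __init__(self, beam1, pier1, girder1):
        self.beam1, self.pier1, self.girder1 = beam1, pier1, girder1

    def invariant(self):
        # The structure's invariant naturally reaches into the
        # attributes of its component objects.
        return (self.beam1.force_transferred_A <= self.pier1.max_load_top and
                self.beam1.force_transferred_B <= self.girder1.max_load_A)

foo = Foo(Beam(10.0, 12.0), Pier(15.0), Girder(20.0))
print(foo.invariant())   # True
```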

If I'm not allowed (because you think "it's silly"!) to express a class
invariant in terms of attributes of the attributes of an instance of
that class, I basically have to write tons of boilerplate, violating
encapsulation, to express what are really attributes of attributes of
foo "as if" they were attributes of foo directly, e.g.

  def beam1_force_transferred_A(self): return self.beam1.force_transferred_A

(or other syntax to the same purpose).  After going through this
pointless (truly silly) exercise I can finally code the invariant as

  self.beam1_force_transferred_A <= self.pier1_max_load_top AND

(etc).  Changing a lot of dots into underscores -- what a way to waste
programmer time!  And all to NO advantage, please note, since:

> of. Invariants are intended to be used to check the state of the
> class, not the state of arbitary other objects. Doing the latter
> requires that you have to check the invariants of every object pretty
> much every time anything changes.

...in the end the invariant DOES have to be checked when anything
relevant changes, anyway, with or without the silly extra indirection.

But besides the wasted work, there is a loss of conceptual integrity: I
don't WANT Foo to have to expose the internal details that beam1's
reference point A transfers the force to pier1's top, etc etc, elevating
all of these internal structural details to the dignity of attributes of
Foo.  Foo should expose only its externally visible attributes: loads
and forces on all the relevant points, geometric details of the
exterior, and structural parameters that are relevant for operating
safety margins, for example.

The point is that the internal state of an object foo which composes
other objects (foo's attributes) beam1, pier1, etc, INCLUDES some
attributes of those other objects -- thus stating that the need to check
those attributes' relationships in Foo's class invariant is SILLY,
strikes me as totally out of place.  If the state is only of internal
relevance, important e.g. in invariants but not to be externally
exposed, what I think of as very silly instead is a style which forces
me to build a lot of "pseudoattributes" of Foo (not to be exposed) by
mindless delegation to attributes of attributes.


> Invariants are a tool. Used wisely, they make finding and fixing some
> logic bugs much easier than it would be otherwise. Used unwisely, they
> don't do anything but make the code bigger.

I disagree, most intensely and deeply, that any reference to an
attribute of an attribute of self in the body of an invariant 

Re: slice notation as values?

2005-12-10 Thread Antoon Pardon
On 2005-12-10, Steven Bethard <[EMAIL PROTECTED]> wrote:
> Antoon Pardon wrote:
>> So lets agree that tree['a':'b'] would produce a subtree. Then
>> I still would prefer the possibility to do something like:
>> 
>>   for key in tree.iterkeys('a':'b')
>> 
>> Instead of having to write
>> 
>>   for key in tree['a':'b'].iterkeys()
>> 
>> Sure I can now do it like this:
>> 
>>   for key in tree.iterkeys('a','b')
>> 
>> But the way default arguments work, prevents you from having
>> this work in an analogous way to a slice.
>
> How so?  Can't you just pass the *args to the slice constructor?  E.g.::
>
>  def iterkeys(self, *args):
>      keyslice = slice(*args)
>      ...
>
> Then you can use the slice object just as you would have otherwise.

This doesn't work for a number of reasons,

1) 

>>> slice()
Traceback (most recent call last):
  File "", line 1, in ?
  TypeError: slice expected at least 1 arguments, got 0


2) It doesn't give a clear way to indicate the following
   kind of slice: tree.iterkeys('a':), because of the
   following:

>>> slice('a')
slice(None, 'a', None)

   which would be equivalent to tree.iterkeys(:'a')
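Both behaviours are easy to check at the interactive prompt:

```python
# A single argument to slice() is taken as the *stop* value...
s = slice('a')
print(s)            # slice(None, 'a', None)

# ...and the constructor refuses to run with no arguments at all:
try:
    slice()
except TypeError as exc:
    print(exc)
```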

-- 
Antoon Pardon
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: slice notation as values?

2005-12-10 Thread Duncan Booth
Antoon Pardon wrote:
> In general I use slices over a tree because I only want to iterate
> over a specific subdomain of the keys. I'm not interested in making
> a tree over the subdomain. Making such a subtree would be an
> enormous waste of resources. 

Probably not unless you have really large data structures. If you have 
something like a dbhash which would be inefficient to copy then both the 
original view of the data structure and the slices could share the same 
data. Creating a slice doesn't *have* to copy anything just so long as the 
semantics are clear.

> With slice notation you could have the following two cases:
> 
>   for key in tree.iterkeys('a':)
> 
>   for key in tree.iterkeys(:'b')

 x['a':] is short for x['a':None]
 x[:'b'] is short for x[None:'b']

> 
> But you can't do
> 
>   for key in tree.iterkeys('a',)
> 
> or more regrettably, you can't do:
> 
>   for key in tree.iterkeys(,'b')

If your datatype defines iterkeys to accept start and end arguments then 
you can do:

   for key in tree.iterkeys('a',None)
   for key in tree.iterkeys(None,'b')

which is directly equivalent to the slices, or you can do:

   for key in tree.iterkeys(start='a')
   for key in tree.iterkeys(stop='b')

which is more explicit.
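A sketch of such an interface on a toy sorted mapping (the Tree class
here is hypothetical; a real tree would walk its nodes instead of
sorting its keys):

```python
class Tree:
    def __init__(self, mapping):
        self._data = dict(mapping)

    def iterkeys(self, start=None, stop=None):
        # None plays the role slice notation gives to a missing bound.
        for key in sorted(self._data):
            if start is not None and key < start:
                continue
            if stop is not None and key >= stop:
                break
            yield key

tree = Tree({'a': 1, 'b': 2, 'c': 3})
print(list(tree.iterkeys(start='b')))   # ['b', 'c']
print(list(tree.iterkeys(stop='b')))    # ['a']
```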

> I also think that other functions could benefit. For instance suppose
> you want to iterate over every second element in a list. Sure you
> can use an extended slice or use some kind of while. But why not
> extend enumerate to include an optional slice parameter, so you could
> do it as follows:
> 
>   for el in enumerate(lst,::2)

'Why not'? Because it makes for a more complicated interface for something 
you can already do quite easily.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Another newbie question

2005-12-10 Thread Antoon Pardon
On 2005-12-10, Steven D'Aprano <[EMAIL PROTECTED]> wrote:
> On Sat, 10 Dec 2005 01:28:52 -0500, Mike Meyer wrote:
>
> The not-so-wise programmer takes abstraction as an end itself, and
> consequently spends more time and effort defending against events which
> almost certainly will never happen than it would have taken to deal with
> it if they did.
>
> Do you lie awake at nights worrying that in Python 2.6 sys.stdout will be
> renamed to sys.standard_output, and that it will no longer have a write()
> method? According to the "law" of Demeter, you should, and the writers of
> the sys module should have abstracted the fact that stdout is a file away
> by providing a sys.write_to_stdout() function.

I find this a strange interpretation.

sys is a module, not an instance. Sure you can use the same notation
and there are similarities but I think the differences are more
important here.

> That is precisely the sort of behaviour which I maintain is unnecessary.
>
>
>
>>> The bad side of the Guideline of Demeter is that following it requires
>>> you to fill your class with trivial, unnecessary getter and setter
>>> methods, plus methods for arithmetic operations, and so on.
>> 
>> No, it requires you to actually *think* about your API, instead of
>> just allowing every class to poke around inside your implementation.
>
> But I *want* other classes to poke around inside my implementation.
> That's a virtue, not a vice. My API says:
>
> "In addition to the full set of methods which operate on the coordinate as
> a whole, you can operate on the individual ordinates via instance.x and
> instance.y which are floats."

Yikes. I would never do that. Doing so would tie my code unnecessarily
close to yours, and would make it too difficult to change to another
class with a different implementation, like one using tuples or lists
instead of separate x and y attributes.

> Your API says:
>
> "In addition to the full set of methods which operate on the coordinate as
> a whole, you can operate on the individual ordinates via methods add_x,
> add_y, mult_x, mult_y, sub_x, sub_y, rsub_x, rsub_y, div_x, div_y, rdiv_x,
> rdiv_y, exp_x, exp_y, rexp_x, rexp_y...; the APIs of these methods are: ... "

Who in heaven's name would need those? Maybe there is no x or y because
the implementation uses a list or a tuple; maybe the implementation
uses polar coordinates because that is more useful for the application
it was planned for.

Sure, a way to unpack your coordinate into a number of individual
ordinate variables could be useful when you want to manipulate
such an individual number.

> My class is written, tested and complete before you've even decided on
> your API. And you don't even really get the benefit of abstraction: I have
> two public attributes (x and y) that I can't change without breaking other
> people's code, you've got sixteen-plus methods that you can't change
> without breaking other people's code.

No, he would have none.

-- 
Antoon Pardon
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: slice notation as values?

2005-12-10 Thread Steven Bethard
Antoon Pardon wrote:
> So lets agree that tree['a':'b'] would produce a subtree. Then
> I still would prefer the possibility to do something like:
> 
>   for key in tree.iterkeys('a':'b')
> 
> Instead of having to write
> 
>   for key in tree['a':'b'].iterkeys()
> 
> Sure I can now do it like this:
> 
>   for key in tree.iterkeys('a','b')
> 
> But the way default arguments work, prevents you from having
> this work in an analogous way to a slice.

How so?  Can't you just pass the *args to the slice constructor?  E.g.::

  def iterkeys(self, *args):
      keyslice = slice(*args)
      ...

Then you can use the slice object just as you would have otherwise.

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Documentation suggestions

2005-12-10 Thread Fredrik Lundh
I wrote:

> if it turns out that this doesn't scale (has anyone put up that library
> reference wiki yet? ;-)

if anyone's interested, here's a seealso file for the python library
reference:

http://effbot.org/zone/seealso-python-library.xml

this was generated from the global module index by this script:

http://effbot.org/zone/seealso-python-library.htm

still waiting for that wiki ;-)





-- 
http://mail.python.org/mailman/listinfo/python-list


newbie question

2005-12-10 Thread Bermi
i have this program
===
from sys import *
import math
import math, Numeric
from code import *
from string import *
from math import *
from dataSet import *
from string import *

def drawAsciiFile():
    _fileName = str(argv[1])
    __localdataSet = DataSet(_fileName)

    #_PlotCols = string.split(str(argv[2]), ' ')
    #_PlotColsInt = []
    '''for s in _PlotCols:
        _PlotColsInt.append(int(s))
    CountourPlots(__localdataSet, _PlotColsInt)
    '''
    print
    __data = __localdataSet.GetData()
    print max(__data[:, 11])

if __name__ == "__main__":
    drawAsciiFile()


how i can link it to read my file examle.txt?

thanks 
michael

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: binascii.crc32 results not matching

2005-12-10 Thread Raymond L. Buvel
Tim Peters wrote:
> [Raymond L. Buvel]
> 
>>Check out the unit test in the following.
>>
>>http://sourceforge.net/projects/crcmod/
> 
> 
> Cool!
> 
> 
>>I went to a lot of trouble to get the results to match the results of
>>binascii.crc32.  As you will see, there are a couple of extra operations
>>even after you get the polynomial and bit ordering correct.
> 
> 
> Nevertheless, the purpose of binascii.crc32 is to compute exactly the
> same result as most zip programs give.  All the details (including
> what look to you like "extra operations" ;-)) were specified by RFC
> 1952 (the GZIP file format specification).  As a result,
> binascii.crc32 matches, e.g., the CRCs reported by WinZip on Windows,
> and gives the same results as zlib.crc32 (as implemented by the zlib
> developers).

Since there are probably others following this thread, it should be
pointed out that the specification of those "extra operations" is to
avoid some pathological conditions that you can get with a simplistic
CRC operation.  For example, using 0 as the starting value will give a
value of zero for an arbitrary string of zeros.  Consequently, a file
starting with a string of zeros will have the same CRC as one with the
zeros stripped off.  While starting with 0x will give a non-zero
value.
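The effect is easy to demonstrate with a naive, uninverted CRC-32 next
to binascii's (the loop below uses the same reflected polynomial,
0xEDB88320; the init and final-xor constants are the standard ones
from RFC 1952):

```python
import binascii

def naive_crc32(data, crc=0):
    # Plain table-less CRC-32, with no initial or final inversion.
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
    return crc

# With a zero start value, a run of zero bytes is invisible:
print(naive_crc32(b'\x00' * 16))           # 0

# The inversions specified by RFC 1952 avoid that:
print(binascii.crc32(b'\x00' * 16) != 0)   # True

# binascii.crc32 is just the naive loop plus those inversions:
check = naive_crc32(b'hello', 0xFFFFFFFF) ^ 0xFFFFFFFF
print(check == binascii.crc32(b'hello'))   # True
```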
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: binascii.crc32 results not matching

2005-12-10 Thread Tim Peters
[Raymond L. Buvel]
> Check out the unit test in the following.
>
> http://sourceforge.net/projects/crcmod/

Cool!

> I went to a lot of trouble to get the results to match the results of
> binascii.crc32.  As you will see, there are a couple of extra operations
> even after you get the polynomial and bit ordering correct.

Nevertheless, the purpose of binascii.crc32 is to compute exactly the
same result as most zip programs give.  All the details (including
what look to you like "extra operations" ;-)) were specified by RFC
1952 (the GZIP file format specification).  As a result,
binascii.crc32 matches, e.g., the CRCs reported by WinZip on Windows,
and gives the same results as zlib.crc32 (as implemented by the zlib
developers).
-- 
http://mail.python.org/mailman/listinfo/python-list

