Re: Question About Logic In Python

2005-09-22 Thread Bengt Richter
On Thu, 22 Sep 2005 14:12:52 -0400, "Terry Reedy" <[EMAIL PROTECTED]> wrote:

>
>"Steve Holden" <[EMAIL PROTECTED]> wrote in message 
>news:[EMAIL PROTECTED]
>> Which is yet another reason why it makes absolutely no sense to apply
>> arithmetic operations to Boolean values.
>
>Except for counting the number of true values.  This and other legitimate 
>uses of False/True as 0/1 (indexing, for instance) were explicitly 
>considered as *features* of the current design when it was entered.  The 
>design was not merely based on backwards compatibility, but also on actual 
>use cases which Guido did not want to disable.  There was lots of 
>discussion on c.l.p.
>
OTOH ISTM choosing to define bool as a subclass of int was a case of
practicality beats purity, but I'm not sure it wasn't an overeager coercion
in disguise.

I.e., IMO a boolean is really not an integer, but it does belong to an
ordered set, or enumeration, that has a very useful correspondence to the
enumeration of binary digits, which in turn corresponds to the beginning
subset of the ordered set of the non-negative integers. IIRC Pascal actually
does use an enumeration to define its bools, and you get the integer values
via an ord function.

BTW, for counting you could always use sum(1 for x in boolables if x) ;-)
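A quick sketch of the two counting spellings side by side (modern Python 3 syntax; the `flags` data is just made up for illustration):

```python
flags = [True, False, True, True, False]

# Because bool subclasses int (True == 1, False == 0),
# sum() counts the True values directly...
count_direct = sum(flags)

# ...which matches the explicit generator form above.
count_explicit = sum(1 for x in flags if x)

print(count_direct, count_explicit)  # 3 3
```

The direct `sum(flags)` form is exactly the "legitimate use of False/True as 0/1" Terry mentions; the generator form works even if bool were a pure enumeration.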

And giving the bool type an __int__ method of its own might have covered a
lot of coercions of convenience, especially if __getitem__ of list and tuple
would do coercion (you could argue about coercing floats, etc. in that
context, as was probably done ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Anyone else getting posts back as email undeliverable bounces?

2005-09-22 Thread Bengt Richter
It seems lately all my posts have been coming back to me as bounced emails,
and I haven't emailed them ;-(

I've been getting  bounce messages like (excerpt):
...
___

This is the Postfix program at host deimos.liage.net.

I'm sorry to have to inform you that your message could not
be delivered to one or more recipients. It's attached below.

For further assistance, please send mail to 

If you do so, please include this problem report. You can
delete your own text from the attached returned message.

The Postfix program

<[EMAIL PROTECTED]>: Host or domain name not found. Name service error for
name=lindensys.net type=A: Host not found
Reporting-MTA: dns; deimos.liage.net
X-Postfix-Queue-ID: DC5264161
X-Postfix-Sender: rfc822; [EMAIL PROTECTED]
Arrival-Date: Thu, 22 Sep 2005 19:50:13 -0400 (EDT)

Final-Recipient: rfc822; [EMAIL PROTECTED]
Action: failed
Status: 5.0.0
Diagnostic-Code: X-Postfix; Host or domain name not found. Name service error
for name=lindensys.net type=A: Host not found
Received: by deimos.liage.net (Postfix, from userid 126)
id DC5264161; Thu, 22 Sep 2005 19:50:13 -0400 (EDT)
Received: from smtp-vbr5.xs4all.nl (smtp-vbr5.xs4all.nl [194.109.24.25])
by deimos.liage.net (Postfix) with ESMTP id 79D8340AA
for <[EMAIL PROTECTED]>; Thu, 22 Sep 2005 19:50:13 -0400 (EDT)
Received: from bag.python.org (bag.python.org [194.109.207.14])
by smtp-vbr5.xs4all.nl (8.13.3/8.13.3) with ESMTP id j8MNoClb072177
for <[EMAIL PROTECTED]>; Fri, 23 Sep 2005 01:50:12 +0200 (CEST)
(envelope-from [EMAIL PROTECTED])
Received: from bag.python.org (bag [127.0.0.1])
by bag.python.org (Postfix) with ESMTP id 8C4871E4013
for <[EMAIL PROTECTED]>; Fri, 23 Sep 2005 01:50:10 +0200 (CEST)
_________
...


Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: time challenge

2005-09-22 Thread Bengt Richter
On Thu, 22 Sep 2005 08:11:26 -0500, nephish <[EMAIL PROTECTED]> wrote:

>
>Hey there,
>i am doing a plotting application.
>i am using mxRelativeDateTimeDiff to get how much time is between
>date x and date y
>
>now what i need to do is divide that time by 20 to get 20 even time
>slots
>for plotting on a graph.
>
>For example, if the difference between them is 20 hours, i need 20
>plots, each an hour apart. if its 40 minutes, i need 20 plots that are
>2 minutes apart.
>
>what would be a way i could pull this off?
>
>thanks

This post of yours:

Date: Thu, 22 Sep 2005 08:11:26 -0500
From: nephish <[EMAIL PROTECTED]>
User-Agent: Debian Thunderbird 1.0.2 (X11/20050331)
X-Accept-Language: en-us, en
MIME-Version: 1.0
To: python-list@python.org
Subject: time challenge

Four minutes prior:

From: [EMAIL PROTECTED]
Newsgroups: comp.lang.python
Subject: need to divide a date
Date: 22 Sep 2005 06:07:50 -0700

Same post body, except an extra blank line at the top of this one.

Please don't do that unless, after waiting a decent amount of time
with no response, you think the lack of response was due to the
bad wording of the title. Your first post got a response, the second
had a worse title ("time challenge").

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Anyone else getting posts back as email undeliverable bounces?

2005-09-23 Thread Bengt Richter
On Fri, 23 Sep 2005 11:49:51 +0100, Steve Holden <[EMAIL PROTECTED]> wrote:

>I've even tried communicating with postmaster at all relevant domains, 
>but such messages are either bounced or ignored, so I've just filtered 
>that domain out.
>
Yeah, but I thought maybe there could be a way to detect this kind of
situation from the python.org or xs4all.nl side and not feed a thing
that vomits indiscriminately, so to speak. Sorry about the metaphor ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [RFC] Parametric Polymorphism

2005-09-26 Thread Bengt Richter
On Sun, 25 Sep 2005 09:30:30 +0100, Catalin Marinas <[EMAIL PROTECTED]> wrote:

>Hi,
>
>Sorry if this was previously discussed but it's something I miss in
>Python. I get around this using isinstance() but it would be cleaner
>to have separate functions with the same name but different argument
>types. I think the idea gets quite close to the Lisp/CLOS
>implementation of methods.
>
>Below is just simple implementation example (and class functions are
>not supported) but it can be further extended/optimised/modified for
>better type detection like issubclass() etc. The idea is similar to
>the @accepts decorator:
>
>
>methods = dict()
>
>def method(*types):
>    def build_method(f):
>        assert len(types) == f.func_code.co_argcount
>
>        if not f.func_name in methods:
>            methods[f.func_name] = dict()
>        methods[f.func_name][str(types)] = f
>
>        def new_f(*args, **kwds):
>            type_str = str(tuple([type(arg) for arg in args]))
>            assert type_str in methods[f.func_name]
>            return methods[f.func_name][type_str](*args, **kwds)
>        new_f.func_name = f.func_name
>
>        return new_f
>
>    return build_method
>
>
>And its utilisation:
>
>@method(int)
>def test(arg):
>    print 'int', arg
>
>@method(float)
>def test(arg):
>    print 'float', arg
>
>test(1) # succeeds
>test(1.5)   # succeeds
>test(1, 2)  # assert fails
>test('aaa') # assert fails
>
>
>Let me know what you think. Thanks.
>
I am reminded of reinventing multimethods, but an interesting twist, so I'll
add on ;-) The following could be made more robust, but it avoids a
separately named dictionary and lets the function name select a
dedicated-to-the-function-name dictionary (subclass) directly, instead of
looking it up centrally with two levels involved.

Just thought of this variant of your idea, so not tested beyond what you see ;-)

 >>> def method(*types):
 ...     def mkdisp(f):
 ...         try: disp = eval(f.func_name)
 ...         except NameError:
 ...             class disp(dict):
 ...                 def __call__(self, *args):
 ...                     return self[tuple((type(arg) for arg in args))](*args)
 ...             disp = disp()
 ...         disp[types] = f
 ...         return disp
 ...     return mkdisp
 ...
 >>> @method(int)
 ... def test(arg):
 ...     print 'int', arg
 ...
 >>> test
 {(<type 'int'>,): <function test at 0x...>}
 >>> @method(float)
 ... def test(arg):
 ...     print 'float', arg
 ...
 >>> test(1)
 int 1
 >>> test(1.5)
 float 1.5
 >>> test(1, 2)
 Traceback (most recent call last):
   File "<stdin>", line 1, in ?
   File "<stdin>", line 7, in __call__
 KeyError: (<type 'int'>, <type 'int'>)
 >>> test('aaa')
 Traceback (most recent call last):
   File "<stdin>", line 1, in ?
   File "<stdin>", line 7, in __call__
 KeyError: (<type 'str'>,)
 >>> test
 {(<type 'int'>,): <function test at 0x...>, (<type 'float'>,): <function test at 0x...>}

You could give it a nice __repr__ ...

Hm, I'll just cheat right here instead of putting it in the decorator's
class where it belongs:

 >>> def __repr__(self):
 ...     fname = self.values()[0].func_name
 ...     types = [tuple((t.__name__ for t in sig)) for sig in self.keys()]
 ...     return '<%s-disp for args %s>' % (fname, repr(types)[1:-1])
 ...
 >>> type(test).__repr__ = __repr__
 >>> test
 <test-disp for args ('int',), ('float',)>
 >>> @method(str, str)
 ... def test(s1, s2): return s1, s2
 ...
 >>> test
 <test-disp for args ('int',), ('float',), ('str', 'str')>
 >>> test('ah', 'ha')
 ('ah', 'ha')
 >>> test(123)
 int 123

That __repr__ could definitely be improved some more ;-)
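For anyone running this today, here is a rough Python 3 translation of the callable-dict idea; the `TypeDispatch` class name and the `globals()` lookup (standing in for the `eval(f.func_name)` trick above) are my own glosses, and the dispatched function is called `describe` rather than `test`:

```python
class TypeDispatch(dict):
    """A dict of (type, ...) -> function that is itself callable and
    dispatches on the types of its positional arguments."""
    def __call__(self, *args):
        return self[tuple(type(arg) for arg in args)](*args)

def method(*types):
    def mkdisp(f):
        # Reuse an existing dispatcher already bound to this name, if
        # any (Python 3 spelling: __name__ instead of func_name).
        disp = globals().get(f.__name__)
        if not isinstance(disp, TypeDispatch):
            disp = TypeDispatch()
        disp[types] = f
        return disp
    return mkdisp

@method(int)
def describe(arg):
    return 'int', arg

@method(float)
def describe(arg):
    return 'float', arg

print(describe(1))    # ('int', 1)
print(describe(1.5))  # ('float', 1.5)
```

As in the session above, an unregistered signature raises KeyError, which a fuller version would translate into a TypeError with a readable message.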

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 350: Codetags

2005-09-27 Thread Bengt Richter
On Mon, 26 Sep 2005 15:35:21 -0700, Micah Elliott <[EMAIL PROTECTED]> wrote:

>Please read/comment/vote.  This circulated as a pre-PEP proposal
>submitted to c.l.py on August 10, but has changed quite a bit since
>then.  I'm reposting this since it is now "Open (under consideration)"
>at <http://www.python.org/peps/pep-0350.html>.
>
>Thanks!
Generally, I like this (I've even rambled on the subject myself before ;-)
But I don't think DONE is a "synonym" for "TBD" or "FIXME" etc.

Some quick reactions: (in case I don't get to detailed comments ;-)

1) IMO DONE (or a well-chosen alternative) should be reserved as a tag that
you insert *after* a TODO-type code tag, and should NOT replace the TODO.
Cleaning TODO/DONE pairs out of a source is a good job for a tool, which can
be given an optional name for a project log file or DONE file etc., or take
it from an environment variable or other config mechanism. This could run as
a cron job, to clean and log, and notify etc. IOW, no error-prone manual
DONE file stuff.

2) In general, I think it might be good to meet Paul Rubin half way re
convention vs syntax, but I don't think code tagging should be part of the
language syntax per se. (-*- cookies -*- really are de facto source syntax
that snuck in by disguise IMO.) So perhaps a python command line option
could invoke an "official" tool, with some more options passed to it to do
various checking or extracting etc.


3) Since a source containing code tags is usually the source for a module,
a python expression with implicit scope of this module is a precise way of
referring to some elements, e.g.,
 
# TODO: clean up uses of magic $MyClass.MAGIC  tool can know MyClass.MAGIC is valid expr

or  ?



4) Why can't a codetag be on the same line as code? What's wrong with
 assert something, message  # ???: Really always so? <>

Is it just to make tag lines python-parser-independent? To be purist, you
still have to deal with

s = """
# FIXME: I'm embedded in a string that needs the python parser to exclude <>
"""

or make a conventional rule against it.
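As a sketch of how a tool could sidestep the string-literal problem without a conventional rule: Python's tokenize module already distinguishes comment tokens from string tokens. (The mnemonic set and the regex here are illustrative assumptions, not the PEP's grammar.)

```python
import io
import re
import tokenize

# Hypothetical, simplified codetag pattern: mnemonic, colon, free text.
CODETAG = re.compile(r'#\s*(TODO|FIXME|XXX|\?\?\?)\s*:\s*(.*)')

def find_codetags(source):
    """Yield (lineno, mnemonic, text) for codetags found in real COMMENT
    tokens, so tags embedded inside string literals are ignored."""
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    for tok in tokens:
        if tok.type == tokenize.COMMENT:
            m = CODETAG.match(tok.string)
            if m:
                yield tok.start[0], m.group(1), m.group(2).strip()

src = '''x = 1  # FIXME: magic number
s = """
# FIXME: inside a string, not a real codetag
"""
'''
print(list(find_codetags(src)))  # only the line-1 tag is reported
```

The string-embedded tag never shows up because the whole triple-quoted literal is a single STRING token.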

5) Sometimes time of day can be handy, so maybe <2005-09-26 12:34:56> could
be recognized?

6) Maybe some way of bracketing a section of code explicitly, e.g.,

# FIXME: rework everything in this section 
  def foo(): pass
  class Bar:
 """and so forth"""
# ...: 

7) A way of appending an incremental progress line to an existing code tag 
line, e.g.,

# FIXME: This will take a while: rework foo and bar 
# ...: test_foo for new foo works! 
# ...: vacation 

Later a tool can strip this out to the devlog.txt or DONE file, when the tool
sees an added progress line like
# ---: woohoo, completed ;-) 

My preliminary .02USD for now ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 350: Codetags

2005-09-27 Thread Bengt Richter
On Tue, 27 Sep 2005 18:53:03 +0100, Tom Anderson <[EMAIL PROTECTED]> wrote:

>On Tue, 27 Sep 2005, Bengt Richter wrote:
>
>> 5) Sometimes time of day can be handy, so maybe <2005-09-26 12:34:56> 
>> could be recognized?
>
>ISO 8601 suggests writing date-and-times like 2005-09-26T12:34:56 - using 
>a T as the separator between date and time. I don't really like the look 
>of it, but it is a standard, so i'd suggest using it.
>
I knew of the ISO standard, but I don't really like the "T" connector
either. Why couldn't they have used underscore or +? Oh well. I guess you
are right though.

>Bear in mind that if you don't, a black-helicopter-load of blue-helmeted 
>goons will come and apply the rubber hose argument to you.
Nah, ISO 8601 is not a DRM standard.

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 350: Codetags

2005-09-30 Thread Bengt Richter
On Fri, 30 Sep 2005 13:27:57 -0400, François Pinard <[EMAIL PROTECTED]> wrote:

>[Tom Anderson]
>
>> ISO 8601 suggests writing date-and-times like 2005-09-26T12:34:56 -
>> using a T as the separator between date and time. I don't really like
>> the look of it, but it is a standard, so i'd suggest using it.
>
>ISO 8601 suggests a few alternate writings, and the ``T`` you mention is
>for only one of them, meant for those cases where embedded spaces are
>not acceptable.  Other ISO 8601 writings accept embedded spaces, and
>this is how ISO 8601 is generally used by people, so far that I can see.
>
The most detailed discussion I could find was
http://hydracen.com/dx/iso8601.htm
(BTW, IMO the practice of charging for standards documents (as the ISO and
IEEE etc. do) in hard-copy form is understandable, but charging for .pdf
versions is perversely contrary to the purpose of wide dissemination
necessary for wide adoption. IOW, IMO they ought to think of another way to
get funded.)

Anyway, the 'T' looks to me to be optional by mutual agreement between
particular information exchangers, but otherwise required?
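For what it's worth, the datetime module in modern Python (3.7+) reflects both readings of the standard; a small sketch:

```python
from datetime import datetime

dt = datetime(2005, 9, 26, 12, 34, 56)

# Strict ISO 8601 interchange form: 'T' between date and time.
print(dt.isoformat())         # 2005-09-26T12:34:56

# isoformat() lets you pick the separator for human-friendly output...
print(dt.isoformat(sep=' '))  # 2005-09-26 12:34:56

# ...and fromisoformat() accepts any single separator character.
print(datetime.fromisoformat('2005-09-26 12:34:56') == dt)  # True
```

So the 'T' is the default for interchange, but the space-separated form round-trips just as well between consenting parties.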

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Info] PEP 308 accepted - new conditional expressions

2005-09-30 Thread Bengt Richter
On Fri, 30 Sep 2005 20:25:35 +0200, Reinhold Birkenfeld <[EMAIL PROTECTED]> wrote:

>Fredrik Lundh wrote:
>> Reinhold Birkenfeld wrote:
>> 
>>> after Guido's pronouncement yesterday, in one of the next versions of Python
>>> there will be a conditional expression with the following syntax:
>>>
>>> X if C else Y
>>>
>>> which is the same as today's
>>>
>>> (Y, X)[bool(C)]
>> 
>> hopefully, only one of Y or X is actually evaluated ?
>
>(cough) Yes, sorry, it's not the same. The same would be
>
>(C and lambda:X or lambda:Y)()
>
>if I'm not much mistaken.

I think you need to parenthesize, but also note that using lambda does not
always grab the currently-executing-scope X or Y, so I think the
list-container idiom may be better to show the semantics of the new
expression:


 >>> X='this is global X'
 >>> Y='this is global Y'
 >>> C=False
 >>> def foo():
 ...     C = True
 ...     class K(object):
 ...         X='this is K.X'
 ...         Y='this is K.Y'
 ...         cv = (C and (lambda:X) or (lambda:Y))()
 ...     return K
 ...
 >>> def bar():
 ...     C = True
 ...     class K(object):
 ...         X='this is K.X'
 ...         Y='this is K.Y'
 ...         cv = (C and [X] or [Y])[0]
 ...     return K
 ...
 >>> foo().cv
 'this is global X'
 >>> bar().cv
 'this is K.X'
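A minimal sketch of the laziness point, written in the `X if C else Y` syntax that eventually shipped, with a made-up `trace` helper to record which arms actually get evaluated:

```python
def trace(label, value, log):
    # Record that this arm was evaluated, then return its value.
    log.append(label)
    return value

# The tuple-indexing idiom evaluates *both* arms before indexing...
log_tuple = []
r1 = (trace('Y', 0, log_tuple), trace('X', 1, log_tuple))[bool(True)]
print(r1, log_tuple)  # 1 ['Y', 'X']

# ...while the conditional expression evaluates only the chosen arm.
log_cond = []
r2 = trace('X', 1, log_cond) if True else trace('Y', 0, log_cond)
print(r2, log_cond)  # 1 ['X']
```

So `(Y, X)[bool(C)]` matches the new expression's result but not its evaluation behavior, which matters whenever an arm has side effects or can raise.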


>
>>> C and X or Y (only if X is True)
>> 
>> hopefully, "only if X is True" isn't in fact a limitation of "X if C else Y" 
>> ?
>> 
>> /... snip comment that the natural order is C, X, Y and that programmers that
>> care about readable code will probably want to be extremely careful with this
>> new feature .../
>
>Yes, that was my comment too, but I'll not demonize it before I have used it.
>
Me too ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Will python never intend to support private, protected and public?

2005-10-01 Thread Bengt Richter
0 (self)
LOAD_ATTR            1 (pv2)


i.e., _SUBSCR ops instead of _ATTR ops, and analogously for STORE_ATTR and
STORE_SUBSCR (obviously just done to the method variable accesses specified
in the privatize factory call).

Alternatively, the metaclass could create a central private value dict as a
hidden cell closure variable, and the methods could be munged to have a
reference constant to that hidden dict, and then self.priv_var code could be
munged to [id(self), 'priv_var'].

This would still leave the access door open via something like

type(instance).__dict__['method_foo'].func_code.co_consts[1][id(instance), 'priv_var']

so, again, where do you want to draw the line? Starting and communicating
with a slave debugger in another (privileged) process?

Alternatively, it might be possible to define (in the metaclass call)
__getattribute__ for the class so that it notices when method_foo or
method_bar are being accessed to create bound methods, and then bind in a
proxy "self" instead that would have self.private_var on its own "self" and
delegate public attribute accesses to the normal self. Maybe this could get
around byte-code munging at the cost of easier break-in than via inspect
etc., and less run-time efficiency. Just musing ...

But the bottom line question is, would you actually use this privacy feature?

Or maybe, what are your real requirements?
;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Will python never intend to support private, protected and public?

2005-10-02 Thread Bengt Richter
On Sun, 02 Oct 2005 16:42:49 -0400, Mike Meyer <[EMAIL PROTECTED]> wrote:

>Paul Rubin <http://[EMAIL PROTECTED]> writes:
>> Well, it's a discussion of why a certain feature might be useful, not
>> that it's required.  Mike Meyer points out some reasons it might be
>> hard to do smoothly without changing Python semantics in a deep way
>> (i.e. Python 3.0 or later).  
>
>Actually, I think that the semantic changes required to make private
>do what you want are deep enough that the resulting language wouldn't
>be Python any longer. It has deep implications from the interpreter
>implementation all the way out to the design of the standard library,
>all of which would have to be reworked to make private do "the right
>thing."
>
>Not that I think that private is a bad idea. If I'm not writing
>python, then I'm probably writing Eiffel. Eiffel has facilities for
>protecting features, though the design is more consistent than the
>mishmash one gets in C++/Java. Nuts - in Eiffel, you can't write
>"instance.attribute = value"; assignment to an attribute has to be
>done in a method of the owning instance.
Actually, ISTM that's true of python too, considering that

instance.attribute = value

is sort of syntactic sugar for

instance.__setattr__('attribute', value)

so it's a (normally inherited) "method of the owning instance"
(taking that to mean a method of its class) that does the assignment.
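A tiny sketch of that sugar, using __setattr__ to observe every assignment (the `Watched` class and its attribute names are mine, purely for illustration):

```python
class Watched(object):
    def __init__(self):
        self.log = []  # this assignment also goes through __setattr__

    def __setattr__(self, name, value):
        # Every "instance.attribute = value" on a Watched instance
        # arrives here as __setattr__('attribute', value).
        if name != 'log':
            self.log.append((name, value))
        object.__setattr__(self, name, value)

w = Watched()
w.x = 3
print(w.log)  # [('x', 3)]
```

So the assignment really is performed by a method of the owning instance's class, Eiffel-style, just with a permissive default implementation.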

>
>Which brings me to my point. Rather than trying to bandage Python to
>do what you want - and what, based on this thread, a lot of other
>people *don't* want - you should be building a system from the ground
>up to support the kind of B&D environment you want.
I agree, but I wouldn't agree if I thought you were saying it's useless to
define exactly what it is we're talking about before deciding on what needs
hw/os/gil/interpreter/convention-level support, and how python would make
use of such a capability, however implemented.

Paul summarised three levels, of which at least fixing unintentional
name-mangling collisions could be solved, IWT.
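The accidental-collision part is arguably what double-underscore name mangling already covers; a quick sketch (class names are illustrative):

```python
class Base(object):
    def __init__(self):
        self.__x = 3            # mangled to _Base__x

    def base_x(self):
        return self.__x          # reads _Base__x

class Sub(Base):
    def __init__(self):
        super().__init__()
        self.__x = 99            # mangled to _Sub__x: no accidental collision

s = Sub()
print(s.base_x())                # 3 -- the subclass didn't clobber it
print(sorted(vars(s)))           # ['_Base__x', '_Sub__x']
```

Deliberate access via the mangled name remains trivially possible, of course, which is the part that would need deeper support.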

>
>Of course, you do realize that in practice you can *never* get what
>you want. It assumes that the infrastructure is all bug-free, which
>may not be the case.
Yeah, but IRL how close to *never* do you in practice demand
to bet your life on it? ;-)

>
>For example, I once had a system that took a kernel panic trying to
>boot an OS upgrade. It had been running the previous version of the
>OS for most of a year with no problems. Other basically identical
>systems ran the upgraded OS just fine. I finally wound up stepping
>through the code one instruction at a time, to find that the
>subroutine invocation instruction on this machine was setting a bit in
>a register that it wasn't supposed to touch, but only in kernel
>mode. An internal OS API change meant it only showed up in the
>upgraded OS.
>
>The infamous Pentium floating point bug shows that this case isn't
>restricted to failing hardware.
>
Was that software? I've forgotten the details and am too lazy to google ;-/

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Will python never intend to support private, protected and public?

2005-10-02 Thread Bengt Richter
On 2 Oct 2005 10:31:07 -0700, "El Pitonero" <[EMAIL PROTECTED]> wrote:

>Bengt Richter wrote:
>>
>> I decided to read this thread today, and I still don't know exactly
>> what your requirements are for "private" whatevers.
>
>No name collision in subclassing. Notice that even if you use
>
>self._x = 3
>
>in a parent class, it can be overriden in a sub-sub-class accidentally.
I noticed that, and suggested a GUID-based possible approach ;-)

>
>> Or maybe, what are your real requirements?
>> ;-)
>
>No need for this type of joke.
I'm not sure what you mean, but I meant no offence, just a nudge to
refine the requirements statement, and it seems (and I hope)
Paul took it that way.
[...]
>Would any Python programmer trade the benefits of a highly dynamic
>language with an unessential feature like Java-style "private" data
>members? My guess is not.
ISTM it's not a case of either-or here ;-)
>
>---
>
>Why do I say Java-style "private" is unessential?
>
>If your Python class/object needs a real Java-style private working
>namespace, you have to ask yourself: do the private variables REALLY
>belong to the class?
Depends on what you mean by "belong" I think ;-)

>
>In my opinion, the answer is: NO. Whenever you have Java-style private
>variables (i.e, non-temporary variables that need to persist from one
>method call to the next time the class node is accessed), those
>variables/features may be better described as another object, separate
>from your main class hierarchy. Why not move them into a foreign worker
>class/object instead, and let that foreign worker object hold those
>private names, and separate them from the main class hierarchy? (In
>Microsoft's jargon: why not use "containment" instead of
>"aggregation"?)
>
>That is, the moment you need Java-style private variables, I think you
>might as well create another class to hold those names and
>functionalities, since they do not belong to the core functionality of
>the main class hierarchy. Whatever inside the core functionality of the
>main class, should perhaps be inheritable, sharable and modifiable.
Such "another class" is what I was suggesting the automatic hidden creation
of when I said
"""
Alternatively, it might be possible to define (in the metaclass call)
__getattribute__ for the class so that it notices when method_foo or
method_bar are being accessed to create bound methods, and then bind in a
proxy "self" instead that would have self.private_var on its own "self" and
delegate public attribute accesses to the normal self. Maybe this could get
around byte-code munging at the cost of easier break-in than via inspect
etc., and less run-time efficiency. Just musing ...
"""
What kind of implementation were you thinking of?
What kind of implementation were you thinking of?

>
>If you use containment instead of aggregation, the chance for name
>collision reduces dramatically. And in my opinion, it's the Pythonic
>way of dealing with the "private" problem: move things that don't
>belong to this object to some other object, and be happy again.
>
Yes, but how do you propose to implement access to the "private" variables
without changing the "spelling" too much?
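A minimal sketch of the containment idea, to make the spelling question concrete (all names here, `_Private`, `Main`, `_p`, are hypothetical):

```python
class _Private(object):
    """Contained worker object holding state that is not part of the
    main class's inheritable interface."""
    def __init__(self):
        self.x = 3

class Main(object):
    def __init__(self):
        # Containment: private state lives on a contained object, so a
        # subclass assigning self._x etc. cannot collide with it.
        self._p = _Private()

    def bump(self):
        # The "spelling" changes only from self._x to self._p.x.
        self._p.x += 1
        return self._p.x

m = Main()
print(m.bump())  # 4
```

The spelling cost is one extra attribute hop, but the contained object is still reachable as `m._p`, so this reduces accidents rather than enforcing anything.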

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Will python never intend to support private, protected and public?

2005-10-03 Thread Bengt Richter
On 03 Oct 2005 00:22:22 -0700, Paul Rubin <http://[EMAIL PROTECTED]> wrote:

>"El Pitonero" <[EMAIL PROTECTED]> writes:
>> The thing is, there are two sides to every coin. Features surely can
>> be viewed as "goodies", or they can be viewed as "handcuffs".
>
>Let's see, say I'm a bank manager, and I want to close my cash vault
>at 5pm today and set its time lock so it can't be opened until 9am
>tomorrow, including by me.  Is that "handcuffs"?  It's normal
>procedure at any bank, for good reason.  It's not necessarily some
>distrustful policy that the bank CEO set to keep me from robbing the
>bank.  I might have set the policy myself.  Java lets me do something
>similar with instance variables.  Why is it somehow an advantage for
>Python to withhold such a capability?
>
>> Sure, in these very dynamic languages you can ACCIDENTALLY override
>> the default system behavior. How many Python programmers have once
>> upon a time done stupid things like:
>> list = 3
>
>That's just a peculiarity, not any deep aspect of Python.  Try it for
>'None' instead of 'list':
>
>>>> None = 3
>SyntaxError: assignment to None
>
>Why is 'list' somehow different from 'None'?  I'd say there's a case
>to be made for having the compiler protect 'list' and other builtins
>the same way it protects 'None'.  Python won't be any less dynamic
>because of it.
>
I think I can write you a custom import that will prevent the assignment of
a list of names you specify in the code of the imported module. Would that
be useful? Or would it be more useful to put that detection in
py/lint/checker/etc (where it probably already is?)?

Would you want to outlaw 'None' as an attribute name?
Python seems to be straddling the fence at this point:

 >>> class C(object): pass
 ...
 >>> c = C()
 >>> c.None = 'c.None'
 SyntaxError: assignment to None
 >>> vars(c)['None'] = 'c.None'
 >>> c.None
 'c.None'

;-)


>> The upside is exactly the same as the fact that you can override the
>> "list()" function in Python.  Python is dynamic language. 
>
>That's not exactly an upside, and it has nothing to do with Python
>being dynamic.  C is static but you can override 'printf'.  Overriding
>'list' in Python is pretty much the same thing.
>
>> In Python, if you have a private variable:
>> 
>> self._x = 3
>> 
>> and you, for curiosity reasons and DURING runtime (after the program is
>> already up and running) want to know the exact moment the self._x
>> variable is accessed (say, print out the local time), or print out the
>> calling stack frames, you can do it. And I mean the program is running.
>
>So let's see:
>
>  def countdown():
>    n = 3
>    while n > 0:
>       yield n
>       n -= 1
>  g = countdown()
>  print g.next()  # 3
>  print g.next()  # 2
>
>where's the Python feature that lets me modify g's internal value of n
>at this point?  How is that different from modifying a private
>instance variable?  "Python feature" means something in the language
>definition, not an artifact of some particular implementation.  Is
>Python somehow deficient because it doesn't give a way to do that?  Do
>you want to write a PEP to add a way?  Do you think anyone will take
>it seriously?
I could see it as part of a debugging interface that might let you mess
more with frames in general. I wouldn't be surprised if a lot of the
under-the-hood access we enjoy as it is was a byproduct of scratching
debugging-tool-need itches.
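Some of that under-the-hood access already exists for generators: the suspended frame is reachable via gi_frame, which answers at least the reading half of the question (whether writes to a suspended frame's locals stick is another, implementation-dependent matter). A sketch in Python 3 spelling:

```python
def countdown():
    n = 3
    while n > 0:
        yield n
        n -= 1

g = countdown()
first = next(g)                    # 3; generator now suspended at yield

# Peek at the generator's "private" state through its frame object.
peek1 = g.gi_frame.f_locals['n']   # 3
next(g)                            # resumes: n -= 1, yields 2
peek2 = g.gi_frame.f_locals['n']   # 2

print(first, peek1, peek2)  # 3 3 2
```

So the generator's internals are no more hidden from a determined inspector than a "private" instance variable is.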

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Will python never intend to support private, protected and public?

2005-10-03 Thread Bengt Richter
On 03 Oct 2005 04:47:26 -0700, Paul Rubin <http://[EMAIL PROTECTED]> wrote:

>[EMAIL PROTECTED] (Bengt Richter) writes:
>> Would you want to outlaw 'None' as an attribute name?
>> Python seems to be straddling the fence at this point:
>>  >>> c.None = 'c.None'
>>  SyntaxError: assignment to None
>
>Heehee, I think that's just a compiler artifact, the lexer is treating
>None as a keyword instead of a normal lexical symbol that the compiler
>treats separately.  That's also why it raises SyntaxError instead of
>some other type of error.  Yes, None should be ok as an attribute name.
>
Not sure which compiler. This one seems to differ from the C version.

 >>> import compiler
 >>> compiler.parse("c.None = 'c.None'")
 Module(None, Stmt([Assign([AssAttr(Name('c'), 'None', 'OP_ASSIGN')], Const('c.None'))]))

 >>> compiler.compile("c.None = 'c.None'", '', 'exec')
 <code object <module> at 02EE7FA0, file "", line 1>
 >>> import dis
 >>> dis.dis(compiler.compile("c.None = 'c.None'", '', 'exec'))
   1   0 LOAD_CONST   1 ('c.None')
   3 LOAD_NAME0 (c)
   6 STORE_ATTR   1 (None)
   9 LOAD_CONST   0 (None)
  12 RETURN_VALUE
 >>> c = type('',(),{})()
 >>> exec   (compiler.compile("c.None = 'c.None'", '', 'exec'))
 >>> c.None
 'c.None'

So the compiler module is happy to generate code that you can execute,
but the builtin compiler seems not to be:

 >>> c.None = 'c.None'
 SyntaxError: assignment to None

and definitely not run-time:
 >>> def foo():
 ...     c.None = 'c.None'
 ...
   File "<stdin>", line 2
 SyntaxError: assignment to None

Seems like a bug wrt the intent of making compiler.compile work exactly
like the builtin C version. But maybe it has been fixed -- I am still
running 2.4 from the beta I built with mingw (because the new microsoft
msi loader won't run on my version of NT4 without upgrading, and I've got
too much dll hell to do on this box. I should also upgrade mingw/msys and
recompile, but I spend time here instead ;-/ )

Python 2.4b1 (#56, Nov  3 2004, 01:47:27)
[GCC 3.2.3 (mingw special 20030504-1)] on win32
Type "help", "copyright", "credits" or "license" for more information.

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Will python never intend to support private, protected and public?

2005-10-03 Thread Bengt Richter
On 03 Oct 2005 14:45:43 -0700, Paul Rubin <http://[EMAIL PROTECTED]> wrote:

>Mike Meyer <[EMAIL PROTECTED]> writes:
>> > If you have a java class instance with a private member that's (say) a
>> > network socket to a special port, access to the port is controlled
>> > entirely by that class.
>> 
>> Are you sure? My understanding was that Java's introspection mechanism
>> could be used to access private variables.
>
>Right, I should have been more specific, if I understand correctly,
>there are some JVM settings that turn that on and off (I'm not any
>kind of expert on this).  For sandbox applets, it's turned off, for
>example.  This came up in a huge discussion thread on sci.crypt a few
>months ago.  Apparently the default is for it to be turned on, to the
>surprise and disappointment of some:
>
>  http://groups.google.com/group/sci.crypt/msg/23edaf95e9978a8d
>
>> A couple of other things to think about:
>> Are you sure you want to use the C++ model for privilege separation?
>
>I'm not sure what you mean by the C++ model.  If you mean the Java
>model, as I keep saying, applet sandbox security relies on it, so it
>may not be perfect but it's not THAT bad.  Using it routinely in
>normal applications would be a big improvement over what we do now.
>High-security applications (financial, military, etc.) would still
>need special measures.
>
>> C++'s design doesn't exactly inspire confidence in me. 
>
>Me neither, "C++ is to C as lung cancer is to lung".  
>
>> Finally, another hole to fix/convention to follow to make this work
>> properly in Python. This one is particularly pernicious, as it allows
>> code that doesn't reference your class at all to violate the private
>> variables. Anyone can dynamically add methods to an instance, the
>> class it belongs to, or to a superclass of that class. 
>
>Yes, Python isn't even type-safe any more:
>
>class A(object): pass
>class B(object): pass
>a = A()
>print type(a)
>a.__class__ = B
>print type(a)# oops
>
>IMHO that "feature" you describe is quite inessential in Python.  The
>correct way to override or extend the operations on a class is to
>subclass it.  I can't think of a single place where I've seen Python
>code legitimately go changing operations by jamming stuff into the
>class object.  I'd consider the stdlib's socket.py to be illegitimate
>and it cost me a couple of hours of debugging one night:
>
> http://groups.google.com/group/comp.lang.python/msg/c9849013e37c995b
>
>and even that is only messing with specific instances, not classes.
>Make sure to have a barf bag handy if you look at the socket.py code.
>I really should open a sf bug about it.
But a class definition is just syntactic sugar for associating functions
with certain automatic method-binding operations on its instances, invoked
by other syntactic sugar. And you can do that without modifying the class
itself.
E.g.,

 >>> add5 = (lambda self, x: self+x).__get__(5, type(5))
 >>> add5
 <bound method int.<lambda> of 5>
 >>> add5(10)
 15

I can't stuff the method on type(5) since it's built in,

 >>> type(5).add5 = (lambda self, x: self+x).__get__(5, type(5))
 Traceback (most recent call last):
   File "<stdin>", line 1, in ?
 TypeError: can't set attributes of built-in/extension type 'int'

but that doesn't stop forming a bound method ;-)

for a better name than <lambda> you can always fix it up ;-)

 >>> add5.im_func.func_name = 'add5'
 >>> add5
 <bound method int.add5 of 5>
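
For comparison, the subclassing route the parent post recommends ("the
correct way to override or extend the operations on a class is to subclass
it") works fine even for builtins — a quick sketch (Int5 is a made-up name):

```python
class Int5(int):
    # subclass int instead of trying to mutate the builtin in place
    def add5(self, x):
        return self + x

n = Int5(5)
assert n.add5(10) == 15
assert isinstance(n, int) and n == 5   # still behaves as an int
```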

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: "no variable or argument declarations are necessary."

2005-10-04 Thread Bengt Richter
On Tue, 04 Oct 2005 10:18:24 -0700, Donn Cave <[EMAIL PROTECTED]> wrote:
[...]
>In the functional language approach I'm familiar with, you
>introduce a variable into a scope with a bind -
>
>   let a = expr in
>   ... do something with a
>
>and initialization is part of the package.  Type is usually
>inferred.  The kicker though is that the variable is never
>reassigned. In the ideal case it's essentially an alias for
>the initializing expression.  That's one possibility we can
>probably not find in Python's universe.
>
how would you compare that with
lambda a=expr: ... do something (limited to expression) with a
?

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Controlling who can run an executable

2005-10-04 Thread Bengt Richter
On 4 Oct 2005 03:49:50 -0700, "Cigar" <[EMAIL PROTECTED]> wrote:

>Paul Rubin wrote:
>> "Cigar" <[EMAIL PROTECTED]> writes:
>> > Now that I'm three months into the development of this program, my
>> > client tells me she would like to protect her investment by preventing
>> > her employees from doing the same to her.  (Going to the competition
>> > and using her program.)
>>
>> Exactly what is the threat here?
>
>I think the BIGGEST threat here is a feeling of vulnerablity.  She now
>realizes that she is in a position that her competition was many years
>ago when she came into possesion of program the 'other side' was using
>and that she is now vulnerable.  She wants to feel safe in the
>knowledge that she didn't reach into her pocket and pay thousands of
>dollars for a program that now could now be used by her competition.
>Nobody wants to pay money to level the playing field for all in a
>business environment.
So the biggest threat would seem to be her competition posting requirements
here and having some showoff post a complete solution ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: "no variable or argument declarations are necessary."

2005-10-05 Thread Bengt Richter
On 5 Oct 2005 09:27:04 GMT, Duncan Booth <[EMAIL PROTECTED]> wrote:

>Antoon Pardon wrote:
>
>> It also is one possibility to implement writable closures.
>> 
>> One could for instace have a 'declare' have the effect that
>> if on a more inner scope such a declared variable is (re)bound it
>> will rebind the declared variable instead of binding a local name.
>
>That is one possibility, but I think that it would be better to use a 
>keyword at the point of the assigment to indicate assignment to an outer 
>scope. This fits with the way 'global' works: you declare at (or near) the 
>assignment that it is going to a global variable, not in some far away part 
>of the code, so the global nature of the assignment is clearly visible. The 
>'global' keyword itself would be much improved if it appeared on the same 
>line as the assignment rather than as a separate declaration.
>
>e.g. something like:
>
>var1 = 0
>
>def f():
>  var2 = 0
>
>  def g():
> outer var2 = 1 # Assign to outer variable
> global var1 = 1 # Assign to global

IMO you don't really need all that cruft most of the time. E.g., what if ':='
meant 'assign to variable wherever it is (and it must exist), searching
according to normal variable resolution order (fresh coinage, vro for short ;-),
starting with local, then lexically enclosing and so forth out to module global
(but not to builtins).'

If you wanted to assign/rebind past a local var shadowing an enclosing
variable var, you'd have to use e.g. vro(1).var = expr instead of var := expr.
Sort of analogous to type(self).mro()[1].method(self, ...)  Hm,
vro(1).__dict__['var'] = expr could conceivably force binding at the vro(1)
scope specifically, and not search outwards. But for that there would be
optimization issues I think, since allowing an arbitrary binding would force
a real dict creation on the fly to hold the new name slot.

BTW, if/when we can push a new namespace on top of the vro stack with a
'with namespace: ...' or such, vro(0) would still be at the top, and vro(1)
will be the local before the with, and := can still be sugar for
find-and-rebind.

Using := and not finding something to rebind would be a NameError. Ditto for
vro(n).nonexistent_name_at_level_n_or_outwards. vro(-1) could refer to global
module scope and vro(-2) go inwards towards local scope at vro(0). So
vro(-1).gvar = expr would give you the effect of globals()['gvar'] = expr
with a pre-existence check.

The pre-existence requirement would effectively be a kind of declaration
requirement for the var := expr usage, and an initialization to a particular
type could enhance inference. Especially if you could have a decorator for
statements in general, not just def's, and you could then have a sticky-types
decoration that would say certain bindings may be inferred to stick to their
initial binding's object's type.
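
The ':=' and vro() bits are of course hypothetical syntax, but the
find-and-rebind search they describe can be sketched with ordinary dicts
standing in for the scope chain (vro_assign is a made-up name for
illustration):

```python
def vro_assign(scopes, name, value):
    # scopes: list of dicts, innermost first, standing in for the
    # local -> enclosing -> global chain.  Mimics the proposed ':=' rule:
    # rebind in the nearest scope that already binds the name,
    # else NameError (no implicit creation of a new binding).
    for scope in scopes:
        if name in scope:
            scope[name] = value
            return
    raise NameError("cannot rebind unbound name %r" % name)

local_ns, enclosing_ns, global_ns = {}, {"var": 1}, {"var": 0}
vro_assign([local_ns, enclosing_ns, global_ns], "var", 42)
# the enclosing binding was rebound; the global one was left alone
assert enclosing_ns["var"] == 42 and global_ns["var"] == 0
```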

Rambling uncontrollably ;-)
My .02USD ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: "no variable or argument declarations are necessary."

2005-10-06 Thread Bengt Richter
e functionality with existing elements
before introducing either new elements or new syntax. E.g., the dictionaries
used for instance attribute names and values already exist, and you can already
build all kinds of restrictions on the use of attribute names via properties
and descriptors of other kinds and via __getattribute__ etc.
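
As one concrete instance of restrictions built from existing elements, a
__setattr__ hook can enforce the "No_New_Names" idea discussed below; a
minimal sketch (NoNewNames is a made-up name):

```python
class NoNewNames(object):
    # reject binding any attribute name the instance wasn't born with
    _sealed = False                      # class default, seen before __init__ ends

    def __init__(self, x, y):
        self.x = x
        self.y = y
        self._sealed = True              # instance flag shadows the class default

    def __setattr__(self, name, value):
        if self._sealed and not hasattr(self, name):
            raise AttributeError("No_New_Names: %r" % name)
        object.__setattr__(self, name, value)

o = NoNewNames(1, 2)
o.x = 99                                 # rebinding an existing name is fine
try:
    o.z = 0                              # a fresh name is refused
except AttributeError:
    pass
else:
    raise AssertionError("expected AttributeError")
```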

>
>These might also be checked for in the compile stage and would probably 
>be better as it wouldn't cause any slow down in the code or need a new 
>dictionary type.
Although note that the nnn decorator above does its checking at run time,
when the decorator is executed just after the _def_ is anonymously _executed_
to create the function nnn gets handed to check or modify before what it
returns is bound to the def function name. ;-)
>
>An external checker could possibly work as well if a suitable marker is 
>used such as a bare string.
>
> ...
> x = y = z = None
> "No_New_Names"# checker looks for this
> ...
> X = y/z   # and reports this as an error
> return x,y
>
>and..
>
> ...
> Author = "Fred"
> "Name_Lock Author"# checker sees this...
> ...
> Author = "John"   # then checker catches this
> ...
>
>So there are a number of ways to possibly add these features.
Yup ;-)

>
>Finding common use cases where these would make a real difference would 
>also help.
>
Yup ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: dictionary interface

2005-10-06 Thread Bengt Richter
On 5 Oct 2005 08:23:53 GMT, Antoon Pardon <[EMAIL PROTECTED]> wrote:

>Op 2005-10-05, Tom Anderson schreef <[EMAIL PROTECTED]>:
>> On Tue, 4 Oct 2005, Robert Kern wrote:
>>
>>> Antoon Pardon wrote:
>>>
>>>>   class Tree:
>>>>
>>>> def __lt__(self, term):
>>>>   return set(self.iteritems()) < set(term.iteritems())
>>>>
>>>> def __eq__(self, term):
>>>>   return set(self.iteritems()) == set(term.iteritems())
>>>>
>>>> Would this be a correct definition of the desired behaviour?
>>>
>>> No.
>>>
>>> In [1]: {1:2} < {3:4}
>>> Out[1]: True
>>>
>>> In [2]: set({1:2}.iteritems()) < set({3:4}.iteritems())
>>> Out[2]: False
>>>
>>>> Anyone a reference?
>>>
>>> The function dict_compare in dictobject.c .
>>
>> Well there's a really helpful answer. I'm intrigued, Robert - since you 
>> know the real answer to this question, why did you choose to tell the 
>> Antoon that he was wrong, not tell him in what way he was wrong, certainly 
>> not tell him how to be right, but just tell him to read the source, rather 
>> than simply telling him what you knew? Still, at least you told him which 
>> file to look in. And if he knows python but not C, or gets lost in the 
>> byzantine workings of the interpreter, well, that's his own fault, i 
>> guess.
>>
>> So, Antoon, firstly, your implementation of __eq__ is, i believe, correct.
>>
>> Your implementation of __lt__ is, sadly, not. While sets take "<" to mean 
>> "is a proper subset of", for dicts, it's a more conventional comparison 
>> operation, which constitutes a total ordering over all dicts (so you can 
>> sort with it, for example). However, since dicts don't really have a 
>> natural total ordering, it is ever so slightly arbitrary.
>>
>> The rules for ordering on dicts are, AFAICT:
>>
>> - If one dict has fewer elements than the other, it's the lesser
>> - If not, find the smallest key for which the two dicts have different 
>> values (counting 'not present' as a value)
>> -- If there is no such key, the dicts are equal
>> -- If the key is present in one dict but not the other, the dict in which 
>> it is present is the lesser
>> -- Otherwise, the dict in which the value is lesser is itself the lesser
>>
>> In code:
>>
>> def dict_cmp(a, b):
>>  diff = cmp(len(a), len(b))
>>  if (diff != 0):
>>  return diff
>>  for key in sorted(set(a.keys() + b.keys())):
>>  if (key not in a):
>>  return 1
>>  if (key not in b):
>>  return -1
>>  diff = cmp(a[key], b[key])
>>  if (diff != 0):
>>  return diff
>>  return 0
>>
>
>Thanks for the explanation, but you somehow give me too much.
>
>I have been searching some more and finally stumbled on this:
>
>http://docs.python.org/ref/comparisons.html
>
>  Mappings (dictionaries) compare equal if and only if their sorted
>  (key, value) lists compare equal. Outcomes other than equality are
>  resolved consistently, but are not otherwise defined.
>
"other outcomes" may not in general mean orderings are defined,
even when  ==  and != are well defined. E.g., below

>This seems to imply that the specific method to sort the dictionaries
>is unimported (as long as it is a total ordering). So I can use whatever
>method I want as long as it is achieves this.
>
>But that is contradicted by the unittest. If you have a unittest for
>comparing dictionaries, that means comparing dictionaries has a
>testable characteristic and thus is further defined.
>
>So I don't need a full implementation of dictionary comparison,
>I need to know in how far such a comparison is defined and
>what I can choose.
>
A couple of data points that may be of interest:

 >>> {'a':0j} < {'a':1j}
 Traceback (most recent call last):
   File "<stdin>", line 1, in ?
 TypeError: cannot compare complex numbers using <, <=, >, >=

and
 >>> cmp(0j, 1j)
 Traceback (most recent call last):
   File "<stdin>", line 1, in ?
 TypeError: cannot compare complex numbers using <, <=, >, >=

but
 >>> {'a':0j} == {'a':1j}
 False
 >>> {'a':1j} == {'a':1j}
 True

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: "no variable or argument declarations are necessary."

2005-10-06 Thread Bengt Richter
On 6 Oct 2005 06:44:41 GMT, Antoon Pardon <[EMAIL PROTECTED]> wrote:

>Op 2005-10-06, Bengt Richter schreef <[EMAIL PROTECTED]>:
>> On 5 Oct 2005 09:27:04 GMT, Duncan Booth <[EMAIL PROTECTED]> wrote:
>>
>>>Antoon Pardon wrote:
>>>
>>>> It also is one possibility to implement writable closures.
>>>> 
>>>> One could for instace have a 'declare' have the effect that
>>>> if on a more inner scope such a declared variable is (re)bound it
>>>> will rebind the declared variable instead of binding a local name.
>>>
>>>That is one possibility, but I think that it would be better to use a 
>>>keyword at the point of the assigment to indicate assignment to an outer 
>>>scope. This fits with the way 'global' works: you declare at (or near) the 
>>>assignment that it is going to a global variable, not in some far away part 
>>>of the code, so the global nature of the assignment is clearly visible. The 
>>>'global' keyword itself would be much improved if it appeared on the same 
>>>line as the assignment rather than as a separate declaration.
>>>
>>>e.g. something like:
>>>
>>>var1 = 0
>>>
>>>def f():
>>>  var2 = 0
>>>
>>>  def g():
>>> outer var2 = 1 # Assign to outer variable
>>> global var1 = 1 # Assign to global
>>
>> IMO you don't really need all that cruft most of the time. E.g., what if ':='
>> meant 'assign to variable wherever it is (and it must exist), searching 
>> according
>> to normal variable resolution order (fresh coinage, vro for short ;-), 
>> starting with
>> local, then lexically enclosing and so forth out to module global (but not 
>> to builtins).'
>
>Just some ideas about this
>
>1) Would it be usefull to make ':=' an expression instead if a
>   statement?
Some people would think so, but some would think that would be tempting the 
weak ;-)

>
>I think the most important reason that the assignment is a statement
>and not an expression would apply less here because '==' is less easy
>to turn into ':=' by mistake than into =
>
>Even if people though that kind of bug was still too easy
>
>2) What if we reversed the operation. Instead of var := expression,
>   we write expression =: var.
>
>IMO this would make it almost impossible to write an assignment
>by mistake in a conditional when you meant to test for equality.
It's an idea. You could also have both, and use it to differentiate
pre- and post-operation augassign variants. E.g.,

alist[i+:=2] # add and assign first, index value is value after adding

alist[i=:+2] # index value is value before adding and assigning

Some people might think that useful too ;-)
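
The hypothetical i+:=2 / i=:+2 pair amounts to pre- vs post-increment; in
real Python you can mimic the distinction with a tiny helper (Cell and its
method names are made up for illustration):

```python
class Cell(object):
    # holds an int counter; pre_add returns the value AFTER adding,
    # post_add returns the value BEFORE adding
    def __init__(self, value):
        self.value = value

    def pre_add(self, n):        # like the proposed  alist[i +:= n]
        self.value += n
        return self.value

    def post_add(self, n):       # like the proposed  alist[i =:+ n]
        old = self.value
        self.value += n
        return old

alist = list(range(10))
i = Cell(0)
assert alist[i.pre_add(2)] == 2    # index is the value after adding  -> 2
assert alist[i.post_add(2)] == 2   # index is the value before adding -> 2
assert i.value == 4                # both calls advanced the counter
```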

Hm, I wonder if any of these variations would combine usefully with the new
short-circuiting expr_true if cond_expr else expr_false ...

Sorry I'll miss the flames, I'll be off line a while ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: updating local()

2005-10-14 Thread Bengt Richter
On Thu, 06 Oct 2005 07:15:12 -0700, Robert Kern <[EMAIL PROTECTED]> wrote:

>Flavio wrote:
>> Ok, its not thousands, but more like dozens of variables...
>> I am reading a large form from the web which returns a lot of values.
>> (I am Using cherrypy)
>> 
>> I know I could pass these variables around as:
>> 
>> def some_function(**variables):
>> ...
>> 
>> some_function(**variables)
>> 
>> but its a pain in the neck to have to refer to them as
>> variables['whatever']...
>> 
>> dont you think? 
>
>Use a Bunch.
>
>class Bunch(dict):
>def __init__(self, *args, **kwds):
>dict.__init__(self, *args, **kwds)
>self.__dict__ = self
>
>--
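
A Bunch like the above gets used along these lines — attribute access and
dict access see the same data (the class is restated so the sketch runs
standalone):

```python
class Bunch(dict):
    # the instance's attribute dict IS the dict itself
    def __init__(self, *args, **kwds):
        dict.__init__(self, *args, **kwds)
        self.__dict__ = self

b = Bunch(x=1, y=2)
assert b.x == 1 and b['y'] == 2
b.z = 3                      # new attributes show up as keys too
assert b['z'] == 3
```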
Or use a version-sensitive byte-code hack to set some preset locals before
executing, either with one-time presets via normal decorator, e.g.,

 >>> from ut.presets import presets
 >>> @presets(x=111, y=222, z='zee')
 ... def foo():
 ... return locals()
 ...
 >>> foo()
 {'y': 222, 'x': 111, 'z': 'zee'}

Or the same, just using a predefined dict instead of the keyword format:

 >>> e = {'a':0, 'b':1}
 >>> @presets(**e)
 ... def foo():
 ... return locals()
 ...
 >>> foo()
 {'a': 0, 'b': 1}

What happened to foo via the decoration:
 >>> import dis
 >>> dis.dis(foo)
   1   0 LOAD_CONST   1 ((0, 1))
   3 UNPACK_SEQUENCE  2
   6 STORE_FAST   0 (a)
   9 STORE_FAST   1 (b)

   3  12 LOAD_GLOBAL  0 (locals)
  15 CALL_FUNCTION0
  18 RETURN_VALUE

To mess with the same base function with different presets more dynamically,
use the explicit way of calling the decorator:

The base function:

 >>> def bar(x, y=123):
 ...return locals()
 ...

decorate and invoke on the fly with particular presets:

 >>> presets(**e)(bar)('exx')
 {'a': 0, 'y': 123, 'b': 1, 'x': 'exx'}

The keyword way:
 >>> presets(hey='there')(bar)('exx')
 {'y': 123, 'x': 'exx', 'hey': 'there'}

BTW, @presets does not change the signature of the function whose selected
locals are being preset from the decoration-time-generated constant, e.g.,

 >>> presets(hey='there')(bar)()
 Traceback (most recent call last):
   File "<stdin>", line 1, in ?
 TypeError: bar() takes at least 1 argument (0 given)
 >>> presets(hey='there')(bar)('exx', 'wye')
 {'y': 'wye', 'x': 'exx', 'hey': 'there'}
 >>> presets(hey='there')(bar)('exx', 'wye', 'zee')
 Traceback (most recent call last):
   File "<stdin>", line 1, in ?
 TypeError: bar() takes at most 2 arguments (3 given)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Class property (was: Class methods)

2005-10-14 Thread Bengt Richter
On Thu, 06 Oct 2005 11:05:10 +0200, Laszlo Zsolt Nagy <[EMAIL PROTECTED]> wrote:

>Hughes, Chad O wrote:
>
>> Is there any way to create a class method?  I can create a class 
>> variable like this:
>>
>Hmm, seeing this post, I have decided to implement a 'classproperty' 
>descriptor.
>But I could not. This is what I imagined:
>
>class A(object):
>_x = 0
>@classmethod
>def get_x(cls):
>print "Getting x..."
>return cls._x
>@classmethod
>def set_x(cls,value):
>print "Setting x..."
>cls._x = value
>x = classproperty(get_x,set_x)
>
>Usage example:
>
> >>>print A.x
>Getting x
>0
> >>>A.x = 8
>Setting x
> >>>print A.x
>Getting x
>8
>
>I was trying for a while, but I could not implement a 'classproperty' 
>function. Is it possible at all?
>Thanks,
>
>   Les
>
Using Peter's advice (not tested beyond what you see):

 >>> class A(object):
 ... _x = 0
 ... class __metaclass__(type):
 ... def get_x(cls):
 ... print "Getting x..."
 ... return cls._x
 ... def set_x(cls,value):
 ... print "Setting x..."
 ... cls._x = value
 ... x = property(get_x, set_x)
 ...
 >>> A.x
 Getting x...
 0
 >>> A.x = 8
 Setting x...
 >>> A.x
 Getting x...
 8
 >>> vars(A).items()
 [('__module__', '__main__'), ('__metaclass__', <class '__main__.__metaclass__'>), ('_x', 8), ('_
 _dict__', <attribute '__dict__' of 'A' objects>), ('__weakref__', <attribute '__weakref__' of 'A' objects>), ('__doc__', None)]
 >>> A._x
 8
 >>> vars(A).keys()
 ['__module__', '__metaclass__', '_x', '__dict__', '__weakref__', '__doc__']

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Class property

2005-10-14 Thread Bengt Richter
On Thu, 06 Oct 2005 16:09:22 +0200, Laszlo Zsolt Nagy <[EMAIL PROTECTED]> wrote:

>Peter Otten wrote:
>
>>Laszlo Zsolt Nagy wrote:
>>
>>  
>>
>>>I was trying for a while, but I could not implement a 'classproperty'
>>>function. Is it possible at all?
>>>
>>>
>>
>>You could define a "normal" property in the metaclass:
>>  
>>
>The only way I could do this is:
>
>class MyXMetaClass(type):
>_x = 0
>def get_x(cls):
>print "Getting x"
>return cls._x
>def set_x(cls,value):
>cls._x = value
>print "Set %s.x to %s" % (cls.__name__,value)
>x = property(get_x,set_x)
>
>class A(object):
>__metaclass__ = MyXMetaClass
>   
>print A.x
>A.x = 8
>
>
>Results in:
>
>Getting x
>0
>Set A.x to 8
>
>But of course this is bad because the class attribute is not stored in 
>the class. I feel it should be.
>Suppose we want to create a class property, and a class attribute; and 
>we would like the property get/set methods to use the values of the 
>class attributes.
>A real example would be a class that keeps track of its direct and 
>subclassed instances:
>
>class A(object):
>cnt = 0
>a_cnt = 0
>def __init__(self):
>A.cnt += 1
>if self.__class__ is A:
>A.a_cnt += 1
>   
>class B(A):
>pass
>   
>print A.cnt,A.a_cnt # 0,0
>b = B()
>print A.cnt,A.a_cnt # 1,0
>a = A()
>print A.cnt,A.a_cnt # 2,1
>
>But then, I may want to create read-only class property that returns the 
>cnt/a_cnt ratio.
>This now cannot be implemented with a metaclass, because the metaclass 
>cannot operate on the class attributes:
But it can install a property that can.
>
>class A(object):
>cnt = 0
>a_cnt = 0
>ratio = a_class_property_that_returns_the_cnt_per_a_cnt_ratio() # 
>def __init__(self):
>A.cnt += 1
>if self.__class__ is A:
>A.a_cnt += 1
>
>Any ideas?
>

 >>> class A(object):
 ... cnt = 0
 ... a_cnt = 0
 ... def __init__(self):
 ... A.cnt += 1
 ... if self.__class__ is A:
 ... A.a_cnt += 1
 ... class __metaclass__(type):
 ... def ratio(cls):
 ... print "Getting ratio..."
 ... return float(cls.a_cnt)/cls.cnt #
 ... ratio = property(ratio)
 ...
I inverted your ratio to lessen the probability of zero division...

 >>> class B(A): pass
 ...
 >>> A.ratio
 Getting ratio...
 Traceback (most recent call last):
   File "<stdin>", line 1, in ?
   File "<stdin>", line 11, in ratio
 ZeroDivisionError: float division

Oops ;-)

 >>> A.cnt, A.a_cnt
 (0, 0)
 >>> b=B()
 >>> A.cnt, A.a_cnt
 (1, 0)
 >>> A.ratio
 Getting ratio...
 0.0
 >>> a=A()
 >>> A.ratio
 Getting ratio...
 0.5

 >>> a=A()
 >>> A.ratio
 Getting ratio...
 0.66666666666666663

The old instance is no longer bound, so should it still be counted as it is?
You might want to check how to use weak references if not...
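
A sketch of the weak-reference variant — counting only instances that are
still alive (Counted and live_count are made-up names, and weakref.WeakSet
is from a later Python than the code above):

```python
import gc
import weakref

class Counted(object):
    _live = weakref.WeakSet()        # entries vanish as instances die

    def __init__(self):
        type(self)._live.add(self)

    @classmethod
    def live_count(cls):
        return len(cls._live)

a, b = Counted(), Counted()
assert Counted.live_count() == 2
del a
gc.collect()                         # make collection deterministic here
assert Counted.live_count() == 1     # the rebound/deleted instance dropped out
```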

 >>> b2=B()
 >>> B.ratio
 Getting ratio...
 0.5
 >>> b3=B()
 >>> B.ratio
 Getting ratio...
 0.40000000000000002

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Info] PEP 308 accepted - new conditional expressions

2005-10-14 Thread Bengt Richter
On Wed, 12 Oct 2005 02:15:40 -0400, Chris Smith <[EMAIL PROTECTED]> wrote:

>>>>>> "Sebastian" == Sebastian Bassi <[EMAIL PROTECTED]> writes:
>
>Sebastian> On 9/30/05, Reinhold Birkenfeld <[EMAIL PROTECTED]> wrote:
>>> after Guido's pronouncement yesterday, in one of the next
>>> versions of Python there will be a conditional expression with
>>> the following syntax: X if C else Y
>
>Sebastian> I don't understand why there is a new expression, if
>Sebastian> this could be accomplished with:
>
>Sebastian> if C: X else: Y
>
>Sebastian> What is the advantage with the new expression?
>
>One very frequent use case is building a string.
But what you show doesn't require choice of which to _evaluate_,
since you are merely choosing between immutable constants, and there is
no side effect or computation overhead to avoid by not evaluating
the unselected expression. So

"the answer is " + ('no', 'yes')[X==0]

would do as well.
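
The difference shows up as soon as a branch has side effects or cost: tuple
indexing builds both elements before selecting, while the conditional
expression evaluates only the chosen branch. A small sketch:

```python
def expensive():
    # stands in for a costly or side-effecting branch
    raise RuntimeError("should not be evaluated")

x = 1

# conditional expression: only the selected branch runs
answer = "the answer is " + (expensive() if x == 0 else "no")
assert answer == "the answer is no"

# tuple indexing: both elements are evaluated first, so this raises
try:
    answer = "the answer is " + ("no", expensive())[x == 0]
except RuntimeError:
    pass
else:
    raise AssertionError("expected RuntimeError")
```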

>Instead of:
>
>if X==0:
>   filler="yes"
>else:
>   filler="no"
>return "the answer is %s" % filler
>
>
>What I really want to do is take four lines of conditional, and put
>them into one, as well as blow off dealing with a 'filler' variable:
>
>return "the answer is " + "yes" if X==0 else "no"
As has been pointed out, though legal there is some doubt as to whether
you meant what you wrote ;-) (the conditional binds more loosely than +,
so that parses as ("the answer is " + "yes") if X==0 else "no")
>
>
>Or whatever the final release syntax is.
>Conditional expressions are a nice way to shim something in place, on
>an acute basis.  Chronic use of them could lead to a full-on plate of
>spaghetti, where you really wanted code.
>They can be impenetrable.  Whenever I'm dealing with them in C/C++, I
>always line the ?, the :, and the ; characters vertically, which may
>seem a bit excessive in terms of whitespace, but provides a nice
>hieroglyph over on the right side of the screen, to make things
>obvious.
>Best,
>Chris

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: "no variable or argument declarations are necessary."

2005-10-15 Thread Bengt Richter
On Fri, 7 Oct 2005 21:56:12 -0700, [EMAIL PROTECTED] (Alex Martelli) wrote:

>Antoon Pardon <[EMAIL PROTECTED]> wrote:
>   ...
>> >>   egold = 0:
>> >>   while egold < 10:
>> >> if test():
>> >>   ego1d = egold + 1
>> >> 
>> >
>> > Oh come on. That is a completely contrived example,
>> 
>> No it is not. You may not have had any use for this
>> kind of code, but unfamiliary with certain types
>> of problems, doesn't make something contrived.
>
>It's so contrived it will raise a SyntaxError due to the spurious extra
>colon on the first line;-).
>
Glad to see a smiley ;-)

>Or, consider, once the stray extra colon is fixed:
>
[... demonstration of effective tool use to diagnose contrived[1] snippet's
problems ...]

[1] the code snippet certainly seems contrived to me too. Not sure whether
the entire class of code that it may have been intended to represent is
contrived, but I'm willing to give the benefit of the doubt if I sense no
ill will ;-)
[...]
>
>> > It would give the 
>> > programmer a false sense of security since they 'know' all their 
>> > misspellings are caught by the compiler. It would not be a substitute for
>> > run-time testing.
>> 
>> I don't think anyone with a little bit of experience will be so naive.
This strikes me as a somewhat provocative and unfortunately ambiguous
statement ;-)

The way I read it was to assume that Antoon was agreeing with the judgement
that the 'sense of security' would be false, and that he was saying that an
experienced programmer would not be so naive as to feel secure about the
correctness of his code merely on the basis of a compiler's static checks
(and thus skip run-time testing).

>
>Heh, right.  After all, _I_, for example, cannot have even "a little bit
>of experience" -- after all, I've been programming for just 30 years
>(starting with my freshman year in university), and anyway all I have to
>show for that is a couple of best-selling books, and a stellar career
>culminating (so far) with my present job as Uber Technical Lead for
>Google, Inc, right here in Silicon Valley... no doubt Google's reaching
>over the Atlantic to come hire me from Italy, and the US government's
>decision to grant me a visa under the O-1 category (for "Aliens with
>Outstanding Skills"), were mere oversights on their part that,
>obviously, I cannot have even "a little bit of experience", given that I
>(like great authors such as Bruce Eckel and Robert Martin) entirely
>agree with the opinion you deem "so naive"... that any automatic
>catching of misspellings can never be a substitute for unit-testing!
I somehow don't think Antoon was really disagreeing with that (maybe because
after 45+ years of programming and debugging I think it would be too absurd ;-)
>
>
>Ah well -- my good old iBook's settings had killfiles for newsreaders,
>with not many entries, but yours, Antoon, quite prominent and permanent;
>unfortunately, that beautiful little iBook was stolen
>(http://www.papd.org/press_releases/8_17_05_fix_macs_211.html), so I got
Ugh, bad luck ... but if it had to happen, better that it wasn't from your home.

>myself a brand new one (I would deem it incorrect to use for personal
>purposes the nice 15" Powerbook that Google assigned me), and it takes
>some time to reconstruct all the settings.  But, I gotta get started
>sometime -- so, welcome, o troll, as the very first entry in my
>brand-new killfile.
I'd urge you to reconsider, and see if you really see trollish _intent_
in Antoon ;-)

>
>In other words: *PLONK*, troll!-)
IMO that's a bit harsh, especially coming from a molto certified heavyweight ;-)

Antoon doesn't strike me as having the desire to provoke for the sake of
provoking, which seems to me to be the sine qua non hallmark of trolls. Of
course anyone with any ego is likely to feel like posting defensive
tit-for-tat to "correct" any inadequate appreciation of worth coming from
the other side, and that can degenerate into something that looks like pure
troll postings, but I think that is normal succumbing to ego temptations,
not a sign of true trollishness. E.g., ISTM Antoon and Diez were managing
rather well, both being frustrated in getting points across, but both
displaying patience and a certain civility. To me, the "winning" posts are
the ones that further the development of the "truth" about the topic at
hand, and avoid straying into ad hominem irrelevancies. OTOH, I think
everyone is entitled at least to ask if a perceived innuendo was real and
intentional (and should be encouraged to do so before launching a
counter-offensive). Sometimes endless niggling and nitpicking gets tiresome,
but I don't think that is necessarily troll scat either. And one can always
tune out ;-)

Anyway, thanks for the pychecker and pylint demos. And I'm glad that we can
enjoy your posts again, even if for a limited time.

-- Martellibot admirer offering his .02USD for peace ... ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Jargons of Info Tech industry

2005-10-15 Thread Bengt Richter
On Tue, 04 Oct 2005 17:14:45 GMT, Roedy Green <[EMAIL PROTECTED]> wrote:

>On Tue, 23 Aug 2005 08:32:09 -0500, l v <[EMAIL PROTECTED]> wrote or quoted :
>
>>I think e-mail should be text only.
I think that is a useful base standard, which allows easy creation of
ad-hoc tools to search and extract data from your archives, etc. 
>
>I disagree.  Your problem  is spam, not HTML. Spam is associated with
>HTML and people have in Pavlovian fashion come to hate HTML.
>
>But HTML is not the problem!
Right, it's what the HTML-interpreting engines might do that is
the problem.
>
>That is like hating all choirs because televangelists use them.
>  
>HTML allows properly aligned table, diagrams, images, use of
>colour/fonts to encode speakers. emphasis, hyperlinks.
All good stuff, but I don't like worrying about side effects when I read
email.
>
>I try to explain Java each day both on my website on the plaintext
>only newsgroups. It is so much easier to get my point across in HTML.
How about pdf?

>
>Program listings are much more readable on my website.
IMO FOSS pdf could provide all the layout benefits while
avoiding (allowing for bugs) all the downsides of X/HTML in emails.

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: When someone from Britain speaks, Americans hear a "British accent"...

2005-10-15 Thread Bengt Richter
On Fri, 7 Oct 2005 15:28:24 -0500, Terry Hancock <[EMAIL PROTECTED]> wrote:

>On Friday 07 October 2005 03:01 am, Steve Holden wrote:
>> OK, so how do you account for the execresence "That will give you a 
>> savings of 20%", which usage is common in America?
>
>In America, anyway, "savings" is a collective abstract noun 
>(like "physics" or "mechanics"), there's no such
>noun as "saving" (that's present participle of "to save"
>only).  How did you expect that sentence to be rendered?
>Why is it an "execresence"?
>
>By the way, dict.org doesn't think "execresence" is a word,
>although I interpret the neologism as meaning something like 
>"execrable utterance":
>
>dict.org said:
>> No definitions found for 'execresence'!
>
Gotta be something to do with .exe ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Force flushing buffers

2005-10-15 Thread Bengt Richter
On Wed, 12 Oct 2005 15:55:10 -0400, Madhusudan Singh <[EMAIL PROTECTED]> wrote:

>Robert Wierschke wrote:
>
>> Madhusudan Singh schrieb:
>>> Hi
>>> 
>>> I have a python application that writes a lot of data to a bunch
>>> of files
>>> from inside a loop. Sometimes, the application has to be interrupted and
>>> I find that a lot of data has not yet been writen (and hence is lost).
>>> How do I flush the buffer and force python to write the buffers to the
>>> files ? I intend to put this inside the loop.
>>> 
>>> Thanks.
>> disable the buffer!
>> 
>> open( filename[, mode[, bufsize]])
>> 
>> open takes an optional 3ed argument set bufsize = 0 means unbuffered.
>> see the documentation of the in build file() mehtod as open is just
>> another name
>
>Though I will not be using this solution (plan to use flush() explicitly)
>for speed reasons, thanks ! I will file this away for future reference :)
I suspect Scott's try/finally approach will get you better speed, since it
avoids unneeded flush calls and the associated buffer management, but
it is best to measure speed when you are concerned about it.
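A minimal sketch of that try/finally idea (the file path, data, and `write_rows` helper are invented for illustration; the fsync call is optional extra paranoia):

```python
import os
import tempfile

def write_rows(path, rows):
    # Keep normal buffering for speed; flush (and close) exactly once,
    # in the finally clause, so data survives an interrupt inside the loop.
    f = open(path, "w")
    try:
        for r in rows:
            f.write("%s\n" % r)
    finally:
        f.flush()             # push Python's buffer to the OS
        os.fsync(f.fileno())  # ask the OS to push its buffer toward disk
        f.close()

path = os.path.join(tempfile.mkdtemp(), "out.txt")
write_rows(path, range(5))
print(open(path).read())
```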

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: When someone from Britain speaks, Americans hear a "British accent"...

2005-10-15 Thread Bengt Richter
On Sat, 08 Oct 2005 07:55:59 GMT, Dennis Lee Bieber <[EMAIL PROTECTED]> wrote:

>On Fri, 07 Oct 2005 21:24:35 +1000, Steven D'Aprano
><[EMAIL PROTECTED]> declaimed the following in
>comp.lang.python:
>
>> I think where the people are getting confused is that it is (arguably)
>> acceptable to use "their" in place of "his or her", as in:
>> 
>> "Should the purchaser lose their warranty card..."
>>
>   It gets even stranger...
>
>   "One should be prompt in mailing their warranty registration"

That comes after parents buy some toys for their children, and the
children have possession of both the toys and the associated warranty cards.

Of course if one is a parent who worries about warranties in a circumstance 
such as this,

    "One should be prompt in mailing their[1] warranty registration"

[1] The childrens' ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: dis.dis question

2005-10-15 Thread Bengt Richter
On Sun, 09 Oct 2005 12:10:46 GMT, Ron Adam <[EMAIL PROTECTED]> wrote:

>Ron Adam wrote:
>> 
>> Can anyone show me an example of of using dis() with a traceback?
>> 
>> Examples of using disassemble_string() and distb() separately if 
>> possible would be nice also.
>
>  [cliped]
>
>> But I still need to rewrite disassemble_string() and need to test it 
>> with tracebacks.
>> 
>> Cheers,
>> Ron
>
>It seems I've found a bug in dis.py, or maybe a expected non feature. 
>When running dis from a program it fails to find the last traceback 
>because sys.last_traceback doesn't get set.  (a bug in sys?)  It works 
>ok from the shell, but not from the program.
>
>Changing it to to get sys.exc_info()[2], fix's it in a program, but then 
>it doesn't work in the shell.  So I replaced it with the following which 
>works in both.
>
> try:
>     if hasattr(sys,'last_traceback'):
>         tb = sys.last_traceback
>     else:
>         tb = sys.exc_info()[2]
> except AttributeError:
>     raise RuntimeError, "no last traceback to disassemble"
>
>I'm still looking for info on how to use disassemble_string().
>

One way to get dis output without modifying dis is to capture stdout:
(ancient thing I cobbled together, no guarantees ;-)

 >>> class SOCapture:
 ...     """class to capture stdout between calls to start & end methods, q.v."""
 ...     import sys
 ...     def __init__(self):
 ...         self.so = self.sys.stdout
 ...         self.text = []
 ...     def start(self, starttext=None):
 ...         """Overrides sys.stdout to capture writes.
 ...         Optional starttext is immediately appended as if written to stdout."""
 ...         self.sys.stdout = self
 ...         if starttext is None: return
 ...         self.text.append(starttext)
 ...     def end(self, endtext=None):
 ...         """Restores stdout to value seen at construction time.
 ...         Optional endtext is appended as if written to stdout before that."""
 ...         self.sys.stdout = self.so
 ...         if endtext is None: return
 ...         self.text.append(endtext)
 ...     def gettext(self):
 ...         """Returns captured text as single string."""
 ...         return ''.join(self.text)
 ...     def clear(self):
 ...         """Clears captured text list."""
 ...         self.text = []
 ...     def write(self, s):
 ...         """Appends written string to captured text list.
 ...         This method is what allows an instance to stand in for stdout."""
 ...         self.text.append(s)
 ...
 >>> def foo(x): return (x+1)**2
 ...
 >>> so = SOCapture()
 >>> import dis
 >>> so.start()
 >>> dis.dis(foo)
 >>> so.end()
 >>> print so.gettext()
   1           0 LOAD_FAST                0 (x)
               3 LOAD_CONST               1 (1)
               6 BINARY_ADD
               7 LOAD_CONST               2 (2)
              10 BINARY_POWER
              11 RETURN_VALUE

Or safer:

 >>> def diss(code):
 ...     so = SOCapture()
 ...     so.start()
 ...     try:
 ...         dis.dis(code)
 ...     finally:
 ...         so.end()
 ...     return so.gettext()
 ...
 >>> diss(foo)
 '  1           0 LOAD_FAST                0 (x)\n              3 LOAD_CONST               1 (1)\n              6 BINARY_ADD\n              7 LOAD_CONST               2 (2)\n             10 BINARY_POWER\n             11 RETURN_VALUE\n'
 >>> print diss(foo)
   1           0 LOAD_FAST                0 (x)
               3 LOAD_CONST               1 (1)
               6 BINARY_ADD
               7 LOAD_CONST               2 (2)
              10 BINARY_POWER
              11 RETURN_VALUE
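For comparison, the same capture can be written more compactly by swapping sys.stdout for an in-memory buffer directly (a sketch, not part of the original post; `diss` and `foo` here are re-implemented from scratch):

```python
import dis
import sys
from io import StringIO  # cStringIO.StringIO in old Pythons

def diss(code):
    # Temporarily point sys.stdout at a StringIO so dis.dis()'s prints
    # land in the buffer; restore the real stdout no matter what.
    old, sys.stdout = sys.stdout, StringIO()
    try:
        dis.dis(code)
        captured = sys.stdout.getvalue()
    finally:
        sys.stdout = old
    return captured

def foo(x):
    return (x + 1) ** 2

text = diss(foo)
print('RETURN_VALUE' in text)  # prints True
```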


Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why asci-only symbols?

2005-10-15 Thread Bengt Richter
On Wed, 12 Oct 2005 10:56:44 +0200, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:

>Mike Meyer wrote:
>> Out of random curiosity, is there a PEP/thread/? that explains why
>> Python symbols are restricted to 7-bit ascii?
>
>No PEP yet; I meant to write one for several years now.
>
>The principles would be
>- sources must use encoding declarations
>- valid identifiers would follow the Unicode consortium guidelines,
>   in particular: identifiers would be normalized in NFKC (I think),
>   adjusted in the ASCII range for backward compatibility (i.e.
>   not introducing any additional ASCII characters as legal identifier
>   characters)
>- __dict__ will contain Unicode keys
>- all objects should support Unicode getattr/setattr (potentially
>   raising AttributeError, of course)
>- open issue: what to do on the C API (perhaps nothing, perhaps
>   allowing UTF-8)

Perhaps string equivalence in keys will be treated like numeric equivalence?
I.e., a key/name representation is established by the initial key/name binding, but
values can be retrieved by "equivalent" key/names with different representations
like unicode vs ascii or latin-1 etc.?
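The existing str/unicode case of such equivalence can be demonstrated directly: under the default ASCII codec, the two spellings of the key hash and compare equal, so either retrieves the entry (a minimal sketch of the point, not code from the thread):

```python
# A plain-str key and a u'' key pick out the same dict entry when the
# bytes decode identically under the default codec -- the "equivalent
# key/names with different representations" being discussed.
d = {'name': 1}
print(d['name'] == d[u'name'])  # prints True
```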

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Function to execute only once

2005-10-16 Thread Bengt Richter
On 14 Oct 2005 12:11:58 -0700, "PyPK" <[EMAIL PROTECTED]> wrote:

>Hi if I have a function called
>tmp=0
>def execute():
>    tmp = tmp+1
>    return tmp
>
>also I have
>def func1():
>    execute()
>
>and
>def func2():
>    execute()
>
>
>now I want execute() function to get executed only once. That is the
>first time it is accessed.
>so taht when funcc2 access the execute fn it should have same values as
>when it is called from func1.
>

You could have the execute function replace itself with a function
that returns the first result from there on, e.g., (assuming you want
the global tmp incremented once (which has bad code smell, but can be expedient ;-)):

 >>> tmp = 0
 >>> def execute():
 ...     global tmp, execute
 ...     tmp = cellvar = tmp + 1
 ...     def execute():
 ...         return cellvar
 ...     return tmp
 ...
 >>> def func1():
 ...     return execute() # so we can see it
 ...
 >>> def func2():
 ...     return execute() # so we can see it
 ...
 >>> func1()
 1
 >>> tmp
 1
 >>> func2()
 1
 >>> tmp
 1
 >>> execute()
 1
 >>> execute
 <function execute at 0x...>
 >>> import dis
 >>> dis.dis(execute)
   5           0 LOAD_DEREF               0 (cellvar)
               3 RETURN_VALUE

But if you want to call the _same_ "execute" callable that remembers that
it's been called and does what you want, you need a callable that can
remember state one way or another. A callable could be a function with
a mutable closure variable or possibly a function attribute as shown in
other posts in the thread, or maybe a class bound method or class method,
or even an abused metaclass or decorator, but I don't really understand
what you're trying to do, so no approach is likely to hit the mark very well
unless you show more of your cards ;-)
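One of the alternatives alluded to above, state in a function attribute, can be sketched like this (not from the thread; `compute` is an invented stand-in for the real one-time work):

```python
def compute():
    # Stand-in for the expensive body; counts its own invocations
    # so we can verify it ran only once.
    compute.calls = getattr(compute, "calls", 0) + 1
    return 42

def execute():
    # Do the work on the first call only; replay the cached result after.
    if not hasattr(execute, "_result"):
        execute._result = compute()
    return execute._result

print(execute(), execute(), compute.calls)  # the body ran exactly once
```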

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Jargons of Info Tech industry

2005-10-16 Thread Bengt Richter
On 16 Oct 2005 00:31:38 GMT, John Bokma <[EMAIL PROTECTED]> wrote:

>[EMAIL PROTECTED] (Bengt Richter) wrote:
>
>> On Tue, 04 Oct 2005 17:14:45 GMT, Roedy Green
>> <[EMAIL PROTECTED]> wrote: 
>> 
>>>On Tue, 23 Aug 2005 08:32:09 -0500, l v <[EMAIL PROTECTED]> wrote or quoted :
>>>
>>>>I think e-mail should be text only.
>> I think that is a useful base standard, which allows easy creation of
>> ad-hoc tools to search and extract data from your archives, etc. 
>>>
>>>I disagree.  Your problem  is spam, not HTML. Spam is associated with
>>>HTML and people have in Pavlovian fashion come to hate HTML.
>>>
>>>But HTML is not the problem!
>> Right, it's what the HTML-interpreting engines might do that is
>> the problem.
>
>You mean the same problem as for example using a very long header in 
>your email to cause a buffer overflow? That is possible with plain 
>ASCII, and has been done.
Are you trolling? No, I don't mean the same problem.
What an HTML interpreter does by _design_ is not in the same category
as an implementation error enabling a root exploit.

>
>>>That is like hating all choirs because televangelists use them.
>>>  
>>>HTML allows properly aligned table, diagrams, images, use of
>>>colour/fonts to encode speakers. emphasis, hyperlinks.
>> All good stuff, but I don't like worrying about side effects when I
>> read email.
>
>Then you should ask people to print it out, and use snail mail. Exploits 
_I_ should, because _you_ can't think of a better solution?
Always happy to get useful advice, though ;-)

>in email programs are not happening since HTML was added to them.
>
You mean they didn't start happening, presumably. But I'm not talking about exploits,
I'm talking about what HTML is designed to do, which is to describe a presentation
composed of elements which in general requires retrieving many elements separately
as the indirect references (links) are interpreted and the data is requested from
the indicated servers -- all at HTML interpretation-time, whatever client engine is
doing that for browser or email reader etc.

Don't get me wrong, I said "all good stuff," as far as control of presentation
is concerned. And I would be happy to have nice graphic email if I could get it
as a self-contained file from my ISP's mail server, and I had a presentation
engine involved that I knew was guaranteed to stick to presentation work without
communicating over the web or doing anything else without my knowledge.

I don't see any technical obstacle to that, but HTML is not designed to be
the solution to that. IMO pdf comes close. I recognize that a pdf interpreter
can also have exploitable implementation errors, just like an ascii email client,
but that is not what I am talking about.

I prefilter email into plain and X/HTML-containing mailboxes, and I don't open
HTML email from unknown sources, though if I am really curious I will drag and
drop the email into a "probtrash" mailbox and use a python script that extracts
the text or other info as text in a console window. All the ones purportedly from
ebay and amazon and paypal have been phishing attempts which would look pretty
convincing if displayed by normal X/HTML interpretation. If my ISP had a better
filter or I improved mine, I wouldn't see that, but in my normal ascii email boxes
I don't have to worry about that, I just have to resist the social engineering of
the offers from Nigeria etc. ;-)

>>>I try to explain Java each day both on my website on the plaintext
>>>only newsgroups. It is so much easier to get my point across in HTML.
>
>> How about pdf?
>
>Ah, and that's exploit free?
That's not the issue. All programs can have the kind of exploit possibilities
that you are talking about. A program with the single purpose of interpreting
a page description and presenting it graphically is easier to eliminate
exploitable vulnerabilities from than a program that involves a lot of additional
stuff.
>
>>>Program listings are much more readable on my website.
>> IMO FOSS pdf could provide all the layout benefits while
>> avoiding (allowing for bugs) all the downsides of X/HTML in emails.
>
>Amazing, so one data format that's open is better compared to another 
>open data format based on what?
I take it you don't understand the difference between pdf and html?

A primary thing is the monitorable data-moving activity that is involved.
A pdf can have links, but they are not followed (not counting what closed
source proprietary software might risk a PR black eye doing) in the process
of opening and presenting the document to you.

The whole file comes as a single

Re: Why asci-only symbols?

2005-10-16 Thread Bengt Richter
On Sun, 16 Oct 2005 12:16:58 +0200, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:

>Bengt Richter wrote:
>> Perhaps string equivalence in keys will be treated like numeric equivalence?
>> I.e., a key/name representation is established by the initial key/name 
>> binding, but
>> values can be retrieved by "equivalent" key/names with different 
>> representations
>> like unicode vs ascii or latin-1 etc.?
>
>That would require that you know the encoding of a byte string; this
>information is not available at run-time.
>
Well, what will be assumed about name after the lines

#-*- coding: latin1 -*-
name = 'Martin Löwis' 

?
I know type(name) will be <type 'str'> and in itself contain no encoding information now,
but why shouldn't the default assumption for literal-generated strings be what the coding
cookie specified? I know the current implementation doesn't keep track of the different
encodings that could reasonably be inferred from the source of the strings, but we are
talking about future stuff here ;-)

>You could also try all possible encodings to see whether the strings
>are equal if you chose the right encoding for each one. This would
>be both expensive and unlike numeric equivalence: in numeric 
>equivalence, you don't give a sequence of bytes all possible
>interpretations to find some interpretation in which they are
>equivalent, either.
>
Agreed, that would be a mess.

>There is one special case, though: when comparing a byte string
>and a Unicode string, the system default encoding (i.e. ASCII)
>is assumed. This only really works if the default encoding
>really *is* ASCII. Otherwise, equal strings might not hash
>equal, in which case you wouldn't find them properly in a
>dictionary.
>
Perhaps the str (or future byte) type could have an encoding attribute
defaulting to None, meaning to treat its instances as current str instances.
Then setting the attribute to some particular encoding, like 'latin-1' (probably
internally normalized and optimized to be represented as a c pointer slot with a
NULL or a pointer to an appropriate codec or whatever) would make the str byte
string explicitly an encoded string, without changing the byte string data or
converting to a unicode encoding. With encoding information explicitly present
or absent, keys could have a normalized hash and comparison, maybe just normalizing
to platform utf for dict encoding-tagged string keys by default.

If this were done, IWT the automatic result of

#-*- coding: latin1 -*-
name = 'Martin Löwis' 

could be that name.encoding == 'latin-1'

whereas without the encoding cookie, the default encoding assumption
for the program source would be used, and set explicitly to 'ascii'
or whatever it is.

Functions that generate strings, such as chr(), could be assumed to create
a string with the same encoding as the source code for the chr(...) invocation.
Ditto for e.g. '%s == %c' % (65, 65)
And
s = u'Martin Löwis'.encode('latin-1')
would get
s.encoding == 'latin-1'
not
s.encoding == None
so that the encoding information could make
print s
mean
print s.decode(s.encoding)
(which of course would re-encode to the output device encoding for output, like current
print s.decode('latin-1'), and not fail like the current default assumption for s encoding,
which is s.encoding==None, i.e., assume default, which is likely print s.decode('ascii'))

Hm, probably
s.encode(None)
and
s.decode(None)
could mean retrieve the str byte data unchanged as a str string with encoding set to None
in the result either way.

Now when you read a file in binary without specifying any encoding assumption, you
would get a str string with .encoding==None, but you could effectively reinterpret-cast it
to any encoding you like by assigning the encoding attribute. The attribute
could be a property that causes decode/encode automatically to create data in the
new encoding. The None encoding, coming or going, would not change the data bytes, but
differing explicit encodings would cause decode/encode.

This could also support s1+s2 to mean generate a concatenated string
that has the same encoding attribute if s1.encoding==s2.encoding and otherwise promotes
each to the platform standard unicode encoding and concatenates those if they
are different (and records the unicode encoding chosen in the result's encoding
attribute).
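A very rough sketch of the proposed shape, as a plain subclass (purely hypothetical API: it only records the encoding and implements none of the promotion or auto-decode rules discussed above):

```python
class EncStr(str):
    """Sketch of a string that remembers a purported source encoding.

    The ``encoding`` attribute is the hypothetical part of the proposal;
    None means "plain bytes, no known encoding" as in the discussion.
    """
    def __new__(cls, data, encoding=None):
        self = str.__new__(cls, data)
        self.encoding = encoding
        return self

# ASCII-only demo text; a real latin-1 byte string is the intended use.
s = EncStr('Martin Lowis', encoding='latin-1')
print(s, s.encoding)
```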

This is not a fully developed idea, and there has been discussion on the topic before
(even between us ;-) but I thought another round might bring out your current thinking
on it ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: override a property

2005-10-17 Thread Bengt Richter
On Mon, 17 Oct 2005 18:52:19 +0100, Robin Becker <[EMAIL PROTECTED]> wrote:

>Is there a way to override a data property in the instance? Do I need to 
>create 
>another class with the property changed?
How do you need to "override" it? Care to create a toy example with a
"wish I could  here" comment line? ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: override a property

2005-10-17 Thread Bengt Richter
On 17 Oct 2005 11:13:32 -0700, "SPE - Stani's Python Editor" <[EMAIL PROTECTED]> wrote:

>No, you can just do it on the fly. You can even create properties
>(attributes) on the fly.
>
>class Dummy:
>   property = True
>
>d = Dummy()
>d.property = False
>d.new = True
>
a simple attribute is not a property in the sense Robin meant it,
and a "data property" is even more specific. See

http://docs.python.org/ref/descriptor-invocation.html

also

 >>> help(property)
 Help on class property in module __builtin__:

 class property(object)
  |  property(fget=None, fset=None, fdel=None, doc=None) -> property attribute
  |
  |  fget is a function to be used for getting an attribute value, and likewise
  |  fset is a function for setting, and fdel a function for del'ing, an
  |  attribute.  Typical use is to define a managed attribute x:
  |  class C(object):
  |      def getx(self): return self.__x
  |      def setx(self, value): self.__x = value
  |      def delx(self): del self.__x
  |      x = property(getx, setx, delx, "I'm the 'x' property.")
  |
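The managed-attribute example from that help text runs as shown (spelled out here as a self-contained sketch, with a simple initializer added):

```python
class C(object):
    """Runnable version of the managed-attribute example in help(property)."""
    def __init__(self):
        self._x = 0
    def getx(self):
        return self._x
    def setx(self, value):
        self._x = value
    def delx(self):
        del self._x
    x = property(getx, setx, delx, "I'm the 'x' property.")

c = C()
c.x = 5       # routed through setx
print(c.x)    # routed through getx; prints 5
```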


Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why asci-only symbols?

2005-10-17 Thread Bengt Richter
On Tue, 18 Oct 2005 01:34:09 +0200, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:

>Bengt Richter wrote:
>> Well, what will be assumed about name after the lines
>> 
>> #-*- coding: latin1 -*-
>> name = 'Martin Löwis' 
>> 
>> ?
>
>Are you asking what is assumed about the identifier 'name', or the value
>bound to that identifier? Currently, the identifier must be encoded in 
>latin1 in this source code, and it must only consist of letters, digits,
>and the underscore.
>
>The value of name will be a string consisting of the bytes
>4d 61 72 74 69 6e 20 4c f6 77 69 73

Which is the latin-1 encoding. Ok, so far so good. We know it's latin1, but the knowledge
is lost to python.

>
>> I know type(name) will be <type 'str'> and in itself contain no encoding
>> information now,
>> but why shouldn't the default assumption for literal-generated strings be 
>> what the coding
>> cookie specified?
>
>That certainly is the assumption: string literals must be in the
>encoding specified in the source encoding, in the source code file
>on disk. If they aren't (and cannot be interpreted that way), you
>get a syntax error.
I meant the "literal-generated string" (the internal str instance representation compiled
from the latin1-encoded source string literal).
>
>> I know the current implementation doesn't keep track of the different
>> encodings that could reasonably be inferred from the source of the strings, 
> > but we are talking about future stuff here ;-)
>
>Ah, so you want the source encoding to be preserved, say as an attribute
>of the string literal. This has been discussed many times, and was
>always rejected.
Not of the string literal per se. That is only one (constant) expression resulting
in a str instance. I want (for the sake of this discussion ;-) the str instance
to have an encoding attribute when it can reliably be inferred, as e.g. when a coding
cookie is specified and the str instance comes from a constant literal string expression.
>

>Some people reject it because it is overkill: if you want reliable,
>stable representation of characters, you should use Unicode strings.
>
>Others reject it because of semantic difficulties: how would such
>strings behave under concatenation, if the encodings are different?
I mentioned that in parts you snipped (2nd half here):
"""
Now when you read a file in binary without specifying any encoding assumption, you
would get a str string with .encoding==None, but you could effectively reinterpret-cast it
to any encoding you like by assigning the encoding attribute. The attribute
could be a property that causes decode/encode automatically to create data in the
new encoding. The None encoding, coming or going, would not change the data bytes, but
differing explicit encodings would cause decode/encode.

This could also support s1+s2 to mean generate a concatenated string
that has the same encoding attribute if s1.encoding==s2.encoding and otherwise promotes
each to the platform standard unicode encoding and concatenates those if they
are different (and records the unicode encoding chosen in the result's encoding
attribute).
"""
>
>> #-*- coding: latin1 -*-
>> name = 'Martin Löwis' 
>> 
>> could be that name.encoding == 'latin-1'
>
>That is not at all intuitive. I would have expected name.encoding
>to be 'latin1'.
That's pretty dead-pan. Not even a smiley ;-)

>
>> Functions that generate strings, such as chr(), could be assumed to create
>> a string with the same encoding as the source code for the chr(...) 
>> invocation.
>
>What is the source of the chr invocation? If I do chr(param), should I 
The source file that the "chr(param)" appears in.
>use the source where param was computed, or the source where the call
No, the param is numeric, and has no reasonably inferrable encoding. (I don't
propose to have ord pass it on for integers to carry ;-) (so ord in another
module with different source encoding could be the source and an encoding
conversion could happen with integer as intermediary. But that's expected ;-)

>to chr occurs? If the latter, how should the interpreter preserve the
>encoding of where the call came from?
not this latter, so not applicable.
>
>What about the many other sources of byte strings (like strings read 
>from a file, or received via a socket)?
I mentioned that in parts you snipped. See above.

>
>> This is not a fully developed idea, and there has been discussion on the 
>> topic before
>> (even between us ;-) but I thought another round might bring out your 
>> current thinking
>> on it ;-)
>
>My thinki

Re: How to add one month to datetime?

2005-10-21 Thread Bengt Richter
On Fri, 21 Oct 2005 14:01:14 -0700, "Robert Brewer" <[EMAIL PROTECTED]> wrote:

>John W wrote:
>> I have been trying to figure out how to
>> easily add just one month to a datetime
>> object? ...I was wondering if there is
>> simple way of doing this with built in
>> datetime object?
>
>If you want the same day in the succeeding month, you can try:
>
>newdate = datetime.date(olddate.year, olddate.month + 1,
>olddate.day)
>
>...but as you can see, that will run into problems quickly. See the
>"sane_date" function here:
>http://projects.amor.org/misc/browser/recur.py for a more robust
>solution, where:
>
>newdate = recur.sane_date(olddate.year, olddate.month + 1,
>olddate.day)
>
>will roll over any values which are out-of-bounds for their container.
>
>
The OP will still have to decide whether he likes the semantics ;-)
E.g., what does he really want as the date for "one month" after January 30 ?
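One defensible answer, clamping the day to the length of the target month so that "one month after January 30" becomes the last day of February, can be sketched with the stdlib (the `add_month` helper is invented here, not from the thread):

```python
import calendar
import datetime

def add_month(d):
    # Advance to the next month, clamping the day to that month's length
    # (so Jan 30 -> Feb 28/29). Just one of several possible semantics.
    extra_years, month0 = divmod(d.month, 12)
    year, month = d.year + extra_years, month0 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return datetime.date(year, month, day)

print(add_month(datetime.date(2005, 1, 30)))   # -> 2005-02-28
print(add_month(datetime.date(2005, 12, 15)))  # -> 2006-01-15 (year rolls over)
```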

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: override a property

2005-10-21 Thread Bengt Richter
On Tue, 18 Oct 2005 08:00:51 +, Robin Becker <[EMAIL PROTECTED]> wrote:

>Bruno Desthuilliers wrote:
>> Robin Becker a écrit :
>> 
>>> Is there a way to override a data property in the instance? Do I need 
>>> to create another class with the property changed?
>> 
>> 
>> Do you mean attributes or properties ?
>
>I mean property here. My aim was to create an ObserverProperty class 
>that would allow adding and subtracting of set/get observers. My current 
>implementation works fine for properties on the class, but when I need 
>to specialize an instance I find it's quite hard.
>
Sorry, but my news feed went belly-up for a few days and I had to go to
google groups to see what had transpired.

ISTM you are already a fair way along the lines of Alex's suggestion of a custom
descriptor class, having chosen to create it by subclassing property.

If it is ok to add attributes to instances, you could put overriding
observer functions there and just check for them in your __notify_fset
loop, e.g., by checking for a matching name with an '__override_' prefixed
to the name of the obs function you want to override (see example below).

If it's not ok to add instance attributes, you could store the information
elsewhere, using weak-ref stuff as Alex suggests.

E.g. (I'm pasting in as a quote what I copied from google):


>bruno modulix wrote:
>.
>> 
>> Could you elaborate ? Or at least give an exemple ?
>.
>in answer to Bengt & Bruno here is what I'm sort of playing with. Alex 
>suggests 
>class change as an answer, but that looks really clunky to me. I'm not sure 
>what 
>Alex means by

>> A better design might be to use, instead of the builtin
>> type 'property', a different custom descriptor type that is specifically
>> designed for your purpose -- e.g., one with a method that instances can
>> call to add or remove themselves from the set of "instances overriding
>> this ``property''" and a weak-key dictionary (from the weakref module)
>> mapping such instances to get/set (or get/set/del, if you need to
>> specialize "attribute deletion" too) tuples of callables.

>I see it's clear how to modify the behaviour of the descriptor instance, but 
>is 
>he saying I need to mess with the descriptor magic methods so they know what 
>applies to each instance?

I think maybe only insofar as __notify_fset is magic ;-)

>## my silly example
>class ObserverProperty(property):
>    def __init__(self,name,observers=None,validator=None):
>        self._name = name
>        self._observers = observers or []
>        self._validator = validator or (lambda x: x)
>        self._pName = '_' + name
>        property.__init__(self,
>            fset=lambda inst, value: self.__notify_fset(inst,value),
>            )
>
>    def __notify_fset(self,inst,value):
>        value = self._validator(value)
>        for obs in self._observers:
            obs = inst.__dict__.get('__override_'+obs.func_name, obs)
>            obs(inst,self._pName,value)
>        inst.__dict__[self._pName] = value
>
>    def add(self,obs):
>        self._observers.append(obs)

>def obs0(inst,pName,value):
>    print 'obs0', inst, pName, value
>
>def obs1(inst,pName,value):
>    print 'obs1', inst, pName, value

>class A(object):
>    x = ObserverProperty('x')

>a=A()
>A.x.add(obs0)

>a.x = 3

>b = A()
>b.x = 4

>#I wish I could get b to use obs1 instead of obs0
>#without doing the following
I think your class assignment would eliminate all x-observing functions,
but did you mean strictly just to "use obs1 instead of obs0" -- which
would mean to leave others operable?

## >class B(A):
## > x = ObserverProperty('x',observers=[obs1])
##
## >b.__class__ = B

b.__override_obs0 = obs1

# Of course you could wrap that last line functionality in some helper thing ;-)

>b.x = 7

With the above mods put in becker.py, I get:

 >>> import becker
 obs0 <becker.A object at 0x...> _x 3
 obs0 <becker.A object at 0x...> _x 4
 obs1 <becker.A object at 0x...> _x 7

But adding another observer doesn't eliminate the other(s):

 >>> def obs3(inst,pName,value):
 ...     print 'obs3', inst, pName, value
 ...
 >>> becker.A.x.add(obs3)
 >>> becker.b.x = 777
 obs1 <becker.A object at 0x...> _x 777
 obs3 <becker.A object at 0x...> _x 777
 >>> becker.a.x = 777
 obs0 <becker.A object at 0x...> _x 777
 obs3 <becker.A object at 0x...> _x 777

HTH

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: coloring a complex number

2005-10-21 Thread Bengt Richter
On Fri, 21 Oct 2005 20:55:47 -0500, Brandon K <[EMAIL PROTECTED]> wrote:

>I'm not 100% sure about this, but from what it seems like, the reason 
>method B worked, and not method a is because class foo(complex) is 
>subclassing a metaclass.  So if you do this, you can't init a meta class 
>(try type(complex), it equals 'type' not 'complex'.  type(complex()) 
>yields 'complex'), so you use the new operator to generator a class on 
>the fly which is why it works in method B.  I hope that's right.
>
>-Brandon
>> Spending the morning avoiding responsibilities, and seeing what it would
>> take to color some complex numbers.
>> 
>> class color_complex(complex):
>>     def __init__(self,*args,**kws):
>>         complex.__init__(*args)
>>         self.color=kws.get('color', 'BLUE')
>> 
>>>>> a=color_complex(1,7)
>>>>> print a
>> (1+7j)  #good so far
>>>>> a=color_complex(1,7,color='BLUE') 
>> Traceback (most recent call last):
>>  File "", line 1, in -toplevel-
>>a=color_complex(1,7,color='BLUE')
>> TypeError: 'color' is an invalid keyword argument for this function
>> 
>> No good... it seems that I am actually subclassing the built_in function
No, complex is callable, but it's a type:
 >>> complex
 <type 'complex'>

>> 'complex' when I am hoping to have been subclassing the built_in numeric
>> type - complex.
>> 
You need to override __new__ for immutable types, since the args that build
the base object are already used by the time __init__ is called, and UIAM the
default __init__ inherited from object is a noop. However, if you define __init__
you can choose to process the other args in either place, e.g.:

 >>> class color_complex(complex):
 ...     def __new__(cls, *args, **kws):
 ...         return complex.__new__(cls, *args)
 ...     def __init__(self, *args, **kws):
 ...         self.color=kws.get('color', 'BLUE')
 ...
 >>> a=color_complex(1,7)
 >>> a
 (1+7j)
 >>> a=color_complex(1,7, color='BLUE')
 >>> a
 (1+7j)
 >>> a.color
 'BLUE'

Or as in what you found, below:

>> but some googling sends me to lib/test/test_descr.py
>> 
>> where there a working subclass of complex more in
>> accordance with my intentions.
>> 
>> class color_complex(complex):
>>     def __new__(cls,*args,**kws):
>>         result = complex.__new__(cls, *args)
>>         result.color = kws.get('color', 'BLUE')
>>         return result
>> 
>>>>> a=color_complex(1,7,color='BLUE')
>>>>> print a
>> (1+7j)
>>>>> print a.color
>> BLUE
>> 
>> which is very good.

 >>> a=color_complex(1,7, color='RED')
 >>> a
 (1+7j)
 >>> a.color
 'RED'

(just to convince yourself that the default is just a default ;-)


>> 
>> But on the chance that I end up pursuing this road, it would be good if
>> I understood what I just did. It would certainly help with my
>> documentation  ;)
>> 
>> Assistance appreciated.
>> 
>> NOTE:
>> 
>> The importance of the asset of the depth and breadth of Python archives
>> -  for learning (and teaching) and real world production - should not be
>> underestimated, IMO. I could be confident if there was an answer to
>> getting the functionality I was looking for as above, it would be found
>> easily enough by a google search.  It is only with the major
>> technologies that one can hope to pose a question of almost any kind to
>> google and get the kind of relevant hits one gets when doing a Python
>> related search.  Python is certainly a major technology, in that
>> respect.  As these archives serve as an extension to the documentation,
>> the body of Python documentation is beyond any  normal expectation.
>> 
>> True, this asset is generally better for answers than explanations.
>> 
>> I got the answer I needed.  Pursuing here some explanation of that answer.
>> 
HTH

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: namespace dictionaries ok?

2005-10-24 Thread Bengt Richter
On Tue, 25 Oct 2005 03:10:17 GMT, Ron Adam <[EMAIL PROTECTED]> wrote:

>Simon Burton wrote:
>
>> Yes!
>> 
>> I do this a lot when i have deeply nested function calls
>> a->b->c->d->e
>> and need to pass args  to the deep function without changing the
>> middle functions.
>
>Yes, :-)  Which is something like what I'm doing also.  Get the 
>dictionary, modify it or validate it somehow, then pass it on.  I also 
>find that when I'm passing variables as keywords,
>
>  foo(name=name, address=address, city=city)
>
>I really don't want (or like) to have to access the names with 
>dictionary key as *strings* in the function that is called and collects 
>them in a single object.
>
>
>> In this situation I think i would prefer this variation:
>> 
>> class Context(dict):
>>   def __init__(self,**kwds):
>> dict.__init__(self,kwds)
>>   def __getattr__(self, name):
>> return self.__getitem__(name)
>>   def __setattr__(self, name, value):
>> self.__setitem__(name, value)
>>   def __delattr__(self, name):
>> self.__delitem__(name)
> >
>> def foo(ctx):
>>print ctx.color, ctx.size, ctx.shape
>> 
>> foo( Context(color='red', size='large', shape='ball') )
>> 
>> 
>> This is looking like foo should be a method of Context now,
>> but in my situation foo is already a method of another class.
>>
Or maybe just add a __repr__ method, if you want to see a readable
representation (e.g., see below). 
>> Simon.
>
>I didn't see what you were referring to at first.  But yes, I see the 
>similarity.
>
 >>> class Context(dict):
 ...   def __init__(self,**kwds):
 ... dict.__init__(self,kwds)
 ...   def __getattr__(self, name):
 ... return self.__getitem__(name)
 ...   def __setattr__(self, name, value):
 ... self.__setitem__(name, value)
 ...   def __delattr__(self, name):
 ... self.__delitem__(name)
 ...   def __repr__(self):
 ...     return 'Context(%s)' % ', '.join('%s=%r' % t for t in sorted(self.items()))
 ...
 >>> print Context(color='red', size='large', shape='ball')
 Context(color='red', shape='ball', size='large')
 >>> ctx = Context(color='red', size='large', shape='ball')
 >>> print ctx
 Context(color='red', shape='ball', size='large')
 >>> ctx
 Context(color='red', shape='ball', size='large')
 >>> ctx.color
 'red'
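
For readers on current Python, the class runs unchanged apart from print
syntax; a self-contained version follows (one caveat worth noting: this
__getattr__ raises KeyError rather than AttributeError for missing names,
which can surprise code relying on hasattr):

```python
class Context(dict):
    """A dict whose entries are also readable/writable as attributes."""
    def __init__(self, **kwds):
        dict.__init__(self, kwds)
    def __getattr__(self, name):
        return self[name]              # note: raises KeyError, not AttributeError
    def __setattr__(self, name, value):
        self[name] = value
    def __delattr__(self, name):
        del self[name]
    def __repr__(self):
        return 'Context(%s)' % ', '.join(
            '%s=%r' % t for t in sorted(self.items()))

ctx = Context(color='red', size='large', shape='ball')
ctx.color = 'green'                    # attribute write lands in the dict
```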

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Syntax across languages

2005-10-25 Thread Bengt Richter
On 25 Oct 2005 07:46:07 GMT, Duncan Booth <[EMAIL PROTECTED]> wrote:

>Tim Roberts wrote:
>
>>>
>>>- Nestable Pascal-like comments (useful): (* ... *)
>> 
>> That's only meaningful in languages with begin-comment AND end-comment
>> delimiters.  Python has only begin-comment.  Effectively, you CAN nest
>> comments in Python:
>
>I believe that the OP is mistaken. In standard Pascal comments do not nest, 
>and you can mix delimiters so, for example:
>
>(* this is a comment }
>
>Most Pascal implementations require the delimeters to match and allow you 
>to nest comments which use different delimiters, but I'm not aware of any 
>Pascal implementations which would allow you to nest comments with the same 
>delimiter:
>
>(* This is not a valid (* comment within a comment. *) *)
>
>To this extent Python (if you consider docstrings as a type of comment) 
>does exactly the same thing:
>
>""" this is # a docstring! """
>
># and this is """ a comment """
>
Dusting off old ((c) '74, corrected '78 printing, bought in '79)
Jensen/Wirth Pascal User manual & Report, 2nd edition:
(had to search, since 'comment' was not in the index!)
"""
The construct
{ <any sequence of symbols not containing "}"> }
may be inserted between any two identifiers, numbers (cf. 4), or
special symbols. It is called a _comment_ and may be removed from
the program text without altering its meaning. The symbols { and
} do not occur otherwise in the language, and when appearing in
syntactic description they are meta-symbols like | and ::= .
The symbols pairs (* and *) are used as synonyms for { and }.
"""

I suspect whether you can match a (* with a } depends on a particular
implementation. I think I have run across at least one where they
were independent, so you could use one to comment out blocks of
mixed program and comment source commented with the other.

... aha, it must have been Borland Delphi Object Pascal:
"""
The following constructs are comments and are ignored by the compiler:
{ Any text not containing right brace }
(* Any text not containing star/right parenthesis *)
A comment that contains a dollar sign ($) immediately after the opening { or (* is a
/compiler directive/. A mnemonic of the compiler command follows the $ character.
"""

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Tricky Areas in Python

2005-10-25 Thread Bengt Richter
On Tue, 25 Oct 2005 10:44:07 +0200, "Fredrik Lundh" <[EMAIL PROTECTED]> wrote:

>Alex Martelli wrote:
>
>>> my hard-won ignorance, and admit that I don't see the
>>> problem with the property examples:
>>>
>>> > class Sic:
>>> > def getFoo(self): ...
>>> > def setFoo(self): ...
>>> > foo = property(getFoo, setFoo)
>>
>> Sorry for skipping the 2nd argument to setFoo, that was accidental in my
>> post.  The problem here is: class Sic is "classic" ("legacy",
>> "old-style") so property won't really work for it (the setter will NOT
>> trigger when you assign to s.foo and s is an instance of Sic).
>
>what's slightly confusing is that the getter works, at least until you attempt
>to use the setter:
>
>>>> class Sic:
>... def getFoo(self):
>... print "GET"
>... return "FOO"
>... def setFoo(self, value):
>... print "SET", value
>... foo = property(getFoo, setFoo)
>...
>>>> sic = Sic()
>>>> print sic.foo
>GET
>FOO
>>>> sic.foo = 10
>>>> print sic.foo
>10
>
>(a "setter isn't part of an new-style object hierarchy" exception would have
>been nice, I think...)
>
Hm, wouldn't that mean type(sic).__getattribute__ would have to look for
type(sic).foo.__get__ and raise an exception (except for foo functions that are
supposed to be methods ;-) instead of returning type(sic).foo.__get__(sic,
type(sic)) without special-casing to reject non-function foos having __get__?
I guess it could. Maybe it should.

BTW, I know you know, but others may not realize you can unshadow foo
back to the previous state:

 >>> sic.foo = 10
 >>> print sic.foo
 10
 >>> del sic.foo
 >>> print sic.foo
 GET
 FOO

and that applies even if __delete__ is defined in the property:

 >>> class Sic:
 ... def getFoo(self):
 ... print "GET"
 ... return "FOO"
 ... def setFoo(self, value):
 ... print "SET", value
 ... def delFoo(self):
 ... print "DEL"
 ... foo = property(getFoo, setFoo, delFoo)
 ...
 >>> sic = Sic()
 >>> print sic.foo
 GET
 FOO
 >>> sic.foo = 10
 >>> print sic.foo
 10
 >>> del sic.foo
 >>> print sic.foo
 GET
 FOO

but it won't go beyond the instance for del foo

 >>> del sic.foo
 Traceback (most recent call last):
   File "<stdin>", line 1, in ?
 AttributeError: Sic instance has no attribute 'foo'
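
For contrast, here is the new-style version in runnable form (Python 3
syntax); the doubled value proves the setter actually fires instead of an
instance attribute silently shadowing the property:

```python
class Sic(object):                 # new-style: the descriptor protocol applies
    def getFoo(self):
        return self._foo
    def setFoo(self, value):
        self._foo = value * 2      # visible side effect: proves the setter ran
    foo = property(getFoo, setFoo)

s = Sic()
s.foo = 10                         # goes through setFoo, not __dict__
```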

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Tricky import question.

2005-10-25 Thread Bengt Richter
On 25 Oct 2005 06:39:15 -0700, "David Poundall" <[EMAIL PROTECTED]> wrote:

>importedfiles = {}
>for f in FileList:
>  f2 = f.split('.')[0]   # strip the .py, .pyc
   importedfiles[f2] = __import__(f2).main
   # it sounds like all you want is the above (untested ;-), or
   # use __import__(f2).main() if you actually want the _result_ returned by main

Either way, you don't need the following
>  __import__(f2)
>  s2 = f2+'.main()'  # main is the top file in each import
>  c = compile(s2, '', 'eval')
>  importedfiles[f2] =  eval(c)
>
>'importedfiles' should hold an object reference to the main() function
>within each imported file.
Do you want a reference to the function, or the result of calling the function?
If you want a reference to the function itself, you should leave off the () 
after main,
otherwise you will execute the function.

>
>The problem is, the import function works but I can't get the object
>reference into the imortedfiles dictionary object.  the code keeps
>telling me
>
>NameError: name 'C1_Dosing' is not defined.
probably your first file is C1_Dosing.py (or some other extension after the '.')
so f2 becomes 'C1_Dosing' and s2 becomes 'C1_Dosing.main()' and you compile that
into code bound to c, and when you try to eval(c), it tries to find C1_Dosing in
the current environment, et voila! you have the error.

When confronted with mysteries such as presented by your code results, I suggest
you introduce print statements to verify what you are assuming about what it is doing.
E.g., you could print repr(f2) after assignment, and after the __import__ if you
thought it was going to do something to f2 in the local name space, and repr(s2)
after the assignment, etc. to see if I guessed right. Or a quick line to clone for
a snapshot of local variables at different points might be (untested)
print '\n'.join('%15s: %s'%(k, repr(v)[:60]) for k,v in sorted(locals().items()))

>
>in this instance C1_Dosing is the first file name in the filelist.  The
>import worked, why not the compile ??
I think the compile did work. But your code didn't produce a binding for the name
'C1_Dosing' which the code c was looking for when eval(c) was called. If you wanted
to make your code work, perhaps replacing (untested)
__import__(f2)
with
exec '%s = __import__(f2)'%f2  # bind imported module to name specified by f2 string
might have got by the error in eval(c), but I suspect you would want to leave the ()
off the .main in any case. And why go through all that rigamarole?

>
>TIA.
>
Try it both ways and report back what you found out ;-)
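
A self-contained sketch of the approach (the module names C1_Dosing etc. are
the OP's; here two throwaway in-memory modules stand in for the files on disk
so the example can run anywhere):

```python
import sys
import types

# Stand-ins for the OP's .py files; in real use these exist on disk.
for name in ('C1_Dosing', 'C2_Mixing'):
    mod = types.ModuleType(name)
    exec('def main(): return %r' % ('ran ' + name), mod.__dict__)
    sys.modules[name] = mod

importedfiles = {}
for f in ['C1_Dosing.py', 'C2_Mixing.py']:
    f2 = f.split('.')[0]                        # strip the extension
    importedfiles[f2] = __import__(f2).main     # function object: no () here

result = importedfiles['C1_Dosing']()           # call it later, when wanted
```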

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Tricky import question.

2005-10-25 Thread Bengt Richter
On 25 Oct 2005 08:51:08 -0700, "David Poundall" <[EMAIL PROTECTED]> wrote:

>This worked ...
>
>def my_import(name):
>mod = __import__(name)
>components = name.split('.')
>for comp in components[1:]:
>mod = getattr(mod, comp)
>return mod
>
>for reasons given here...
>
>http://www.python.org/doc/2.3.5/lib/built-in-funcs.html
>
Aha. You didn't mention multi-dot names ;-)
But was that the real problem? Your original code
wasn't using anything corresponding to mod above.
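
In later Python versions the getattr loop is packaged up as
importlib.import_module, which returns the leaf module directly, whereas a
bare __import__ of a dotted name returns the top-level package:

```python
import importlib

top = __import__('os.path')                  # returns the os package
leaf = importlib.import_module('os.path')    # returns os.path itself
```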

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: namespace dictionaries ok?

2005-10-25 Thread Bengt Richter
On Tue, 25 Oct 2005 16:20:21 GMT, Ron Adam <[EMAIL PROTECTED]> wrote:

>Duncan Booth wrote:
>> Ron Adam wrote:
>> 
>>>James Stroud wrote:
>>>
>>>>Here it goes with a little less overhead:
>>>>
>>>>
>> 
>> 
>> 
>>>But it's not a dictionary anymore so you can't use it in the same places 
>>>you would use a dictionary.
>>>
>>>   foo(**n)
>>>
>>>Would raise an error.
>>>
>>>So I couldn't do:
>>>
>>>def foo(**kwds):
>>>   kwds = namespace(kwds)
>>>   kwds.bob = 3
>>>   kwds.alice = 5
>>>   ...
>>>   bar(**kwds) #<--- do something with changed items
>>>
>> 
>> I agree with Steven D'Aprano's reply, but if you are set on it you could 
>> try this:
>> 
>> 
>>>>>class namespace(dict):
>> 
>>  def __init__(self, *args, **kw):
>>  dict.__init__(self, *args, **kw)
>>  self.__dict__ = self
>> 
>>  
>> 
>>>>>n = namespace({'bob':1, 'carol':2, 'ted':3, 'alice':4})
>>>>>n.bob
>> 
>> 1
>> 
>>>>>n.bob = 3
>>>>>n['bob']
>> 
>> 3
>> 
>> The big problem of course with this sort of approach is that you cannot 
>> then safely use any of the methods of the underlying dict as they could be 
>> masked by values.
>> 
>> P.S. James, *please* could you avoid top-quoting.
>
>Or worse, the dictionary would become not functional depending on what 
>methods were masked.
>
>
>And this approach reverses that, The dict values will be masked by the 
>methods, so the values can't effect the dictionary methods.  But those 
>specific values are only retrievable with the standard dictionary notation.
>
> class namespace(dict):
> __getattr__ = dict.__getitem__
> __setattr__ = dict.__setitem__
> __delattr__ = dict.__delitem__
>
> n = namespace()
> n.__getattr__ = 'yes'# doesn't mask __getattr__ method.
>
> print n['__getattr__']   -> 'yes'
>
>The value is there and __getattr__() still works.  But n.__getattr__ 
>returns the method not the value.
>
>So is there a way to keep the functionality without losing the methods?
>
>
>BTW, I agree with Steven concerning data structures.  This really isn't 
>a substitute for a data structure.  Many keys will not work with this.
>
> n.my name = 'Ron'
> n.(1,2) = 25
> n.John's = [ ... ]
>
>The use case I'm thinking of is not as a shortcut for data structures, 
>but instead, as a way to keep names as names, and maintaining those 
>names in a group.  Thus the namespace association.
>
>def foo(**kwds):
>   kwds = namespace(kwds)
>   print kwds.name
>   print kwds.value
>   ...
>
>name = 'ron'
>value = 25
>foo( name=name, position=position )
>
Just had the thought that if you want to add bindings on the fly modifying the
original object, you could use the __call__ method, e.g.,

 >>> class NameSpace(dict):
 ... __getattr__ = dict.__getitem__
 ... __setattr__ = dict.__setitem__
 ... __delattr__ = dict.__delitem__
 ... def __call__(self, **upd):
 ... self.update(upd)
 ... return self
 ...
 >>> def show(x): print '-- showing %r'%x; return x
 ...
 >>> ns = NameSpace(initial=1)
 >>> show(ns)
 -- showing {'initial': 1}
 {'initial': 1}

And updating with a second keyword on the fly:

 >>> show(show(ns)(second=2))
 -- showing {'initial': 1}
 -- showing {'second': 2, 'initial': 1}
 {'second': 2, 'initial': 1}

FWIW ;-)
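
The same trick in directly runnable form; returning self from __call__ is
what makes the fluent chained updates possible:

```python
class NameSpace(dict):
    __getattr__ = dict.__getitem__
    __setattr__ = dict.__setitem__
    __delattr__ = dict.__delitem__
    def __call__(self, **upd):
        self.update(upd)
        return self            # returning self permits chaining

ns = NameSpace(initial=1)
same = ns(second=2)(third=3)   # two in-place updates, chained
```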

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Log rolling question

2005-10-25 Thread Bengt Richter
On 25 Oct 2005 10:22:24 -0700, "elake" <[EMAIL PROTECTED]> wrote:

>Is there a way to do this for a whole calendar month?
>
Yes, if you can specify exactly what you want to do. E.g.,
today is 2005-10-25. What date would you like as the earliest
to keep for the four months? What if today's date was not
available in the month of four months ago? Do you want to
have cleansed logs always start on the first day of a month?
if so, do you want to go back four prior months, or consider
any current month days as being a month-log even if partial,
and only include 3 prior months? You haven't defined your requirements ;-)

if you wanted to go back to the first day of four months back, maybe
you could call that earliest_time and delete all files where after
import os, stat
you remove files where
os.stat(pathtofile)[stat.ST_MTIME] < earliest_time

going back from current time might go something like

 >>> import time
 >>> y,m = time.localtime()[:2]
 >>> y,m
 (2005, 10)
 >>> y,m = divmod(y*12+m-1-4,12) # -1 for 0..11 months
 >>> m+=1 # back to 1..12
 >>> y,m
 (2005, 6)
 >>> time.strptime('%04d-%02d-01'%(y,m), '%Y-%m-%d')
 (2005, 6, 1, 0, 0, 0, 2, 152, -1)
 >>> earliest_time = time.mktime(time.strptime('%04d-%02d-01'%(y,m), '%Y-%m-%d'))
 >>> earliest_time
 1117609200.0
 >>> time.ctime(earliest_time)
 'Wed Jun 01 00:00:00 2005'

Hm ...
Jun, Jul, Aug, Sep, Oct
 -4   -3   -2   -1  -0  
I guess that guarantees 4 full prior months.
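
The year*12+month arithmetic above generalizes to any number of months back;
a runnable sketch (the function name is mine, not from the thread):

```python
import time

def first_of_n_months_back(year, month, n):
    """Epoch seconds for 00:00 local on the 1st of the month n months before."""
    y, m = divmod(year * 12 + month - 1 - n, 12)   # months counted 0..11
    m += 1                                         # back to 1..12
    return time.mktime(time.strptime('%04d-%02d-01' % (y, m), '%Y-%m-%d'))

earliest = first_of_n_months_back(2005, 10, 4)     # June 2005, as in the post
```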

I used strptime to build a complete 9-tuple rather than doing it by hand, which
I'm not sure off hand how to do ;-)

There's another module that does date interval addition/subtraction, but it
didn't come with the batteries in my version.

BTW, make sure all your date stuff is using the same epoch base date, in case 
you
have some odd variant source of numerically encoded dates, e.g., look at

 >>> import time
 >>> time.localtime(0)
 (1969, 12, 31, 16, 0, 0, 2, 365, 0)
 >>> time.ctime(0)
 'Wed Dec 31 16:00:00 1969'
 >>> time.gmtime(0)
 (1970, 1, 1, 0, 0, 0, 3, 1, 0)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Read/Write from/to a process

2005-10-25 Thread Bengt Richter
 low order bit is set keeps I/O
completion from being queued to the completion port.

dwMilliseconds

Specifies the number of milliseconds that the caller is
willing to wait for an input/output completion packet to
appear at the I/O completion port. If a completion packet
doesn't appear within the specified time, the function times
out, returns FALSE, and sets *lpOverlapped to NULL. Timing
out is optional. If dwMilliseconds is -1, the function will
never time out. If dwMilliseconds is zero and there is no
I/O operation to dequeue, the function will timeout
immediately.

Return Value

The GetQueuedCompletionStatus function's return value is
TRUE if the function dequeues an I/O completion packet for a
successful I/O operation from the completion port. The
function stores valid values into the variables pointed to
by lpNumberOfBytesTransferred, lpCompletionKey, and
lpOverlapped. The function's return value is FALSE, and
*lpOverlapped is set to NULL, if the function does not
dequeue an I/O completion packet from the completion port.
The function does not store valid values into the variables
pointed to by lpNumberOfBytesTransferred and
lpCompletionKey. To get extended error information, call
GetLastError.

The function's return value is FALSE, and *lpOverlapped is
not NULL, if the function dequeues an I/O completion packet
for a failed I/O operation from the completion port. The
function stores valid values into the variables pointed to
by lpNumberOfBytesTransferred, lpCompletionKey, and
lpOverlapped. To get extended error information, call
GetLastError.

Remarks

The Win32 I/O system can be instructed to send I/O
completion notification packets to input/output completion
ports, where they are queued up. The CreateIoCompletionPort
function provides a mechanism for this. When you perform an
input/output operation with a file handle that has an
associated input/output completion port, the I/O system
sends a completion notification packet to the completion
port when the I/O operation completes. The I/O completion
port places the completion packet in a first-in-first-out
queue. The GetQueuedCompletionStatus function retrieves
these queued I/O completion packets.

A server application may have several threads calling
GetQueuedCompletionStatus for the same completion port. As
input operations complete, the operating system queues
completion packets to the completion port. If threads are
actively waiting in a call to this function, queued requests
complete their call. You can call the
PostQueuedCompletionStatus function to post an I/O
completion packet to an I/O completion port. The I/O
completion packet will satisfy an outstanding call to the
GetQueuedCompletionStatus function.

See Also

ConnectNamedPipe, CreateIoCompletionPort, DeviceIoControl,
LockFileEx, OVERLAPPED, ReadFile,
PostQueuedCompletionStatus, TransactNamedPipe,
WaitCommEvent, WriteFile,


And then there is the console stuff,


Following are the functions used to access a console. 

AllocConsole
CreateConsoleScreenBuffer
FillConsoleOutputAttribute
FillConsoleOutputCharacter
FlushConsoleInputBuffer
FreeConsole
GenerateConsoleCtrlEvent
GetConsoleCP
GetConsoleCursorInfo
GetConsoleMode
GetConsoleOutputCP
GetConsoleScreenBufferInfo
GetConsoleTitle
GetLargestConsoleWindowSize
GetNumberOfConsoleInputEvents
GetNumberOfConsoleMouseButtons
GetStdHandle
HandlerRoutine
PeekConsoleInput
ReadConsole
ReadConsoleInput
ReadConsoleOutput
ReadConsoleOutputAttribute
ReadConsoleOutputCharacter
ScrollConsoleScreenBuffer
SetConsoleActiveScreenBuffer
SetConsoleCP
SetConsoleCtrlHandler
SetConsoleCursorInfo
SetConsoleCursorPosition
SetConsoleMode
SetConsoleOutputCP
SetConsoleScreenBufferSize
SetConsoleTextAttribute
SetConsoleTitle
SetConsoleWindowInfo
SetStdHandle
WriteConsole
WriteConsoleInput
WriteConsoleOutput
WriteConsoleOutputAttribute
WriteConsoleOutputCharacter 

And that is a miniscule piece of it all.
For sychronizing, there ought to be something in:
---
Following are the functions used in synchronization. 

CreateEvent
CreateMutex
CreateSemaphore
DeleteCriticalSection
EnterCriticalSection
GetOverlappedResult
InitializeCriticalSection
InterlockedDecrement
InterlockedExchange
InterlockedIncrement
LeaveCriticalSection
MsgWaitForMultipleObjects
OpenEvent
OpenMutex
OpenSemaphore
PulseEvent
ReleaseMutex
ReleaseSemaphore
ResetEvent
SetEvent
WaitForMultipleObjects
WaitForMultipleObjectsEx
WaitForSingleObject
WaitForSingleObjectEx 

BTW, 
"""
The CreateFile function creates, opens, or truncates a file, pipe,
communications resource, disk device, or console. It returns a handle
that can be used to access the object. It can also open and return
a handle to a directory.
"""
So it seems likely someone has put together most of the pieces already,
just maybe not wrapped in a python API ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: [OT] Re: output from external commands

2005-10-25 Thread Bengt Richter
On Tue, 25 Oct 2005 10:10:39 -0500, Terry Hancock <[EMAIL PROTECTED]> wrote:

>On Monday 24 October 2005 09:04 pm, darren kirby wrote:
>> quoth the Fredrik Lundh:
>> > (using either on the output from glob.glob is just plain silly, of course)
>> 
>> Silly? Sure. os.listdir() is more on point. Never said I was the smartest. 
>> However, I will defend my post by pointing out that at the time it was the 
>> only one that actually included code that did what the OP wanted.
>
>I think Mr. Lundh's point was only that the output from glob.glob is already
>guaranteed to be strings, so using either '%s'%f or str(f) is superfluous.
>
And so is a listcomp that only reproduces the list returned by glob.glob
-- especially by iterating through that same returned list ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: feature request / max size limit on StringIO

2005-10-25 Thread Bengt Richter
On Tue, 25 Oct 2005 14:15:02 -0400, "Clark C. Evans" <[EMAIL PROTECTED]> wrote:

>Hello.  I've not been able to use cStringIO since I have the need to
>ensure that the memory buffers created are bounded within a resonable
>limit set by specifications.  No, this code does not properly belong
>in my application as the modules that use "files" should not have
>to care about any resource limitations that may be imposed.
>
>class LimitedBuffer(StringIO):
>def __init__(self, buffer = None, maxsize = 5 * 1000 * 1000):
>StringIO.__init__(self,buffer)
>self.cursize = 0
>self.maxsize = maxsize
>def write(self,str):
>self.cursize += len(str)
>if self.cursize > self.maxsize:
>raise IOError("allocated buffer space exceeded")
>return StringIO.write(self,str)

You might want to use StringIO's own knowledge of its writing position
(self.pos or self.tell()), and then you wouldn't need cursize, nor to worry
about whether seek has been used to reposition, maybe writing the same section
of file over and over for some reason.

I'm not sure whether seek beyond the end actually causes allocation, but I
don't think so. So you should be ok just checking self.pos+len(strarg) > self.maxsize.
There are also some interesting things going on with the self.buf and self.buflist
attributes, which are probably not doing as simple a job of buffering as you might
think. Maybe you'll want to wrap a byte array or an mmap instance to store your
info, depending on what you are doing?
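
A sketch of the tell()-based variant on current Python's io.StringIO (the old
StringIO module's self.pos attribute is gone, but tell() gives the same
information, and overwrites after a seek are then accounted for correctly
rather than charged twice):

```python
import io

class LimitedBuffer(io.StringIO):
    def __init__(self, initial='', maxsize=5 * 1000 * 1000):
        super().__init__(initial)
        self.maxsize = maxsize
    def write(self, s):
        # position + pending data, not a running byte count, so seeking
        # back and rewriting the same region does not exhaust the quota
        if self.tell() + len(s) > self.maxsize:
            raise IOError('allocated buffer space exceeded')
        return super().write(s)

buf = LimitedBuffer(maxsize=10)
buf.write('0123456789')   # exactly at the limit: accepted
buf.seek(0)
buf.write('abcde')        # pure overwrite, no growth: accepted
```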

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Setting Class Attributes

2005-10-25 Thread Bengt Richter
On 25 Oct 2005 15:57:41 -0700, "the.theorist" <[EMAIL PROTECTED]> wrote:

>
>Bruno Desthuilliers wrote:
>> the.theorist a écrit :
>> > I have a small, simple class which contains a dictionary (and some
>> > other stuff, not shown). I then have a container class (Big) that holds
>> > some instances of the simple class. When I try to edit the elements of
>> > the dictionary, all instances obtain those changes; I want each
>> > instance to hold separate entries.
>> >
>> > #--Begin module test.py
>> > class ex:
>>
>> class ex(object): # oldstyle classes are deprecated
>>
>> > def __init__(self, val={}):
>> > self.value = val
>>
>> You didn't search very long. This is one of the most (in)famous Python
>> gotchas: default args are evaluated *only once*, when the function
>> definition is evaluated (at load time). This is also a dirty trick to
>> have a 'static' (as in C) like variable.
>>
>> The solution is quite simple:
>> class ex(object):
>>def __init__(self, val=None):
>>  if val is None: val = {}
>>  self.value = val
>>
>> (snip)
>
>Hey, that was extremely helpful. I suspect that the default args
>evaluation is optimized for speed. So it makes sense to use the None
>assignment, and a test condition later.
That sounds like you missed the important point: the reason for using None
instead of {} as a default argument is that default arguments are only
evaluated when the function is defined, not when it's called. If the default
value is {}, that very _same_ dict will be used as the default for every call
to __init__, so every instance of ex ends up binding the same val to its
self.value; if one instance then modifies it (which it can, since a dict is
mutable), the other instances will see the modification in their self.value
dicts. The None default prevents re-use of a dict that wasn't actually passed
in as an argument replacing the default in the call. The call-time test for
None discovers that a fresh dict is needed, and self.value = {} creates that
fresh dict when __init__ executes, so the new instance gets its own separate
self.value dict.

Nothing to do with optimization. In fact, re-using the shared default dict
would be faster, though of course generally wrong.
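
The whole point compresses into a few runnable lines (Python 3 syntax):

```python
class Ex:
    def __init__(self, val={}):        # BUG: the {} is built once, at def time
        self.value = val

class Fixed:
    def __init__(self, val=None):      # None sentinel: fresh dict per call
        self.value = {} if val is None else val

a, b = Ex(), Ex()
a.value['x'] = 1                       # mutates the one shared default dict
c, d = Fixed(), Fixed()
c.value['x'] = 1                       # d.value is unaffected
```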
>
>Worked like a charm, Thanks!
>
Just wanted to make sure you realize why it made a difference, in case ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Making "import" *SLIGHTLY* more verbose?

2005-10-25 Thread Bengt Richter
On Tue, 25 Oct 2005 15:34:37 -0700, Steve Juranich <[EMAIL PROTECTED]> wrote:

>First of all, just let me say that I'm aware of the "-v" switch for
>Python, and I don't want anything nearly that verbose.
>
>I often long for the following behavior from Python when I'm running
>interactively: When a new module is imported, I'd like the path to the
>file providing the module to be printed out to the screen.  If the
>module is already in sys.modules, then don't worry about printing
>anything.
>
>The best thing that I can think of to do is something like:
>
>
>__builtins__.__real_import__ = __builtins__.__import__
>
>def noisy_import(name, globals = globals(), locals = locals(), fromlist = []):
>    printit = name not in sys.modules
>    mod = __real_import__(name, globals, locals, fromlist)
>    if printit:
>        try:
>            print '## Loading %s (%s)' % (mod.__name__, mod.__file__)
>        except AttributeError:
>            print '## Loading %s (built-in)' % mod.__name__
>    return mod
>
>__builtins__.__import__ = noisy_import
>
>
>Which seems to work okay for basic kinds of modules, but doesn't quite
>work right for modules that belong to a particular class.  Any
You want to go beyond just teasing, and tell us what "particular class"? ;-)

>suggestions on what I might be missing here?  I would imagine that
>this is a cookbook type thing, and if there are any references on how
>to do this, I'd greatly appreciate it (I couldn't come up with the
>right Google magic words).
>
>Thanks in advance for any help.
The imp module has a lot of info, and some example code that
might be useful.

I'm suspicious of the default values you provide in your noisy_import though.
They are all mutable, though I guess nothing should mutate them. But will they
be valid for all the contexts your hook will be invoked from? (Or are they
actually useless and always overridden?)
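
A version of the hook for current Python 3, where the real signature is
__import__(name, globals=None, locals=None, fromlist=(), level=0) and the
builtins module replaces __builtins__ (the colorsys module is just a
convenient, rarely-preloaded guinea pig, and recording names in a list stands
in for printing):

```python
import builtins
import sys

real_import = builtins.__import__
loaded = []                                    # names loaded for the first time

def noisy_import(name, globals=None, locals=None, fromlist=(), level=0):
    first_time = name not in sys.modules
    mod = real_import(name, globals, locals, fromlist, level)
    if first_time:
        loaded.append(name)
    return mod

sys.modules.pop('colorsys', None)              # make sure the demo load is fresh
builtins.__import__ = noisy_import
try:
    import colorsys                            # first load: recorded
    import colorsys                            # already cached: silent
finally:
    builtins.__import__ = real_import          # always restore the real hook
```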

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Would there be support for a more general cmp/__cmp__

2005-10-27 Thread Bengt Richter
On 27 Oct 2005 08:12:15 GMT, Antoon Pardon <[EMAIL PROTECTED]> wrote:
[...]
>
>The evidence suggests that cmp is not used in sorting. If you have a
>list of sets, sort will happily try to sort it, while calling cmp
>with a set as an argument throws an exception.
>
A data point re evidence:

 >>> class C:
 ... def __getattr__(self, attr): print attr; raise AttributeError
 ...

 >>> sorted((C(),C()))
 __lt__
 __gt__
 __gt__
 __lt__
 __coerce__
 __coerce__
 __cmp__
 __cmp__
 [__repr__
 <__main__.C instance at 0x02EF388C>, __repr__
 <__main__.C instance at 0x02EF38CC>]

I think it will be slightly different if you define those methods
in a new-style class -- oh, heck, why not do it:

 >>> class D(object):
 ...def __lt__(*ignore): print '__lt__'; return NotImplemented
 ...def __gt__(*ignore): print '__gt__'; return NotImplemented
 ...def __coerce__(*ignore): print '__coerce__'; return NotImplemented
 ...def __cmp__(*ignore): print '__cmp__'; return NotImplemented
 ...
 >>> sorted((D(),D()))
 __lt__
 __gt__
 __cmp__
 __cmp__

(I haven't followed the thread much, so please excuse if irrelevant ;-)
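
(For the record, in current Python 3 the cmp/coerce fallback chain is gone
entirely: sorting consults only __lt__, as a quick probe shows.)

```python
calls = []

class Probe:
    def __init__(self, n):
        self.n = n
    def __lt__(self, other):
        calls.append('__lt__')     # log every comparison sort makes
        return self.n < other.n

pair = sorted([Probe(2), Probe(1)])
```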

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Double replace or single re.sub?

2005-10-27 Thread Bengt Richter
On 27 Oct 2005 12:39:18 -0700, "EP" <[EMAIL PROTECTED]> wrote:

>How does Python execute something like the following
>
>oldPhrase="My dog has fleas on his knees"
>newPhrase=oldPhrase.replace("fleas",
>"wrinkles").replace("knees","face")
>
>Does it do two iterations of the replace method on the initial and then
>an intermediate string (my guess) -- or does it compile to something
>more efficient (I doubt it, unless it's Christmas in Pythonville... but
>I thought I'd query)
>
Here's a way to get an answer in one form:

 >>> def foo(): # for easy disassembly
 ...oldPhrase="My dog has fleas on his knees"
 ...newPhrase=oldPhrase.replace("fleas",
 ..."wrinkles").replace("knees","face")
 ...
 >>> import dis
 >>> dis.dis(foo)
   2   0 LOAD_CONST   1 ('My dog has fleas on his knees')
   3 STORE_FAST   1 (oldPhrase)

   3   6 LOAD_FAST1 (oldPhrase)
   9 LOAD_ATTR1 (replace)
  12 LOAD_CONST   2 ('fleas')

   4  15 LOAD_CONST   3 ('wrinkles')
  18 CALL_FUNCTION2
  21 LOAD_ATTR1 (replace)
  24 LOAD_CONST   4 ('knees')
  27 LOAD_CONST   5 ('face')
  30 CALL_FUNCTION2
  33 STORE_FAST   0 (newPhrase)
  36 LOAD_CONST   0 (None)
  39 RETURN_VALUE

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to replace all None values with the string "Null" in a dictionary

2005-10-27 Thread Bengt Richter
On Thu, 27 Oct 2005 16:46:32 -0400, Mike Meyer <[EMAIL PROTECTED]> wrote:

>"dcrespo" <[EMAIL PROTECTED]> writes:
>
>> Hi all,
>>
>> How can I replace all None values with the string 'Null' in a
>> dictionary?
>
>Iterate over everything in the dictionary:
>
>for key, item in mydict.items():
>if item is None:
>   mydict[key] = 'Null'
>
Which is probably more efficient than a one-liner updating the dict with

mydict.update((k,'Null') for k,v in mydict.items() if v is None)

as in

 >>> mydict = dict(a=1, b=None, c=3, d=None, e=5)
 >>> mydict
 {'a': 1, 'c': 3, 'b': None, 'e': 5, 'd': None}
 >>> mydict.update((k,'Null') for k,v in mydict.items() if v is None)
 >>> mydict
 {'a': 1, 'c': 3, 'b': 'Null', 'e': 5, 'd': 'Null'}
 
(too lazy to measure ;-)
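
(In later Pythons a dict comprehension gives a non-mutating spelling, building
a cleaned copy rather than updating in place:)

```python
mydict = dict(a=1, b=None, c=3, d=None, e=5)
cleaned = {k: ('Null' if v is None else v) for k, v in mydict.items()}
```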

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Scanning a file

2005-10-29 Thread Bengt Richter
On 28 Oct 2005 06:51:36 -0700, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:

>First of all, this isn't a text file, it is a binary file. Secondly,
>substrings can overlap. In the sequence 0010010 the substring 0010
>occurs twice.
>
ISTM you better let others know exactly what you mean by this, before
you use the various solutions suggested or your own ;-)

a) Are you talking about bit strings or byte strings?
b) Do you want to _count_ overlapping substrings??!!
Note result of s.count on your example:

 >>> s = '0010010'
 >>> s.count('0010')
 1

vs. brute force counting overlapped substrings (not tested beyond what you see 
;-)

 >>> def ovcount(s, sub):
 ... start = count = 0
 ... while True:
 ... start = s.find(sub, start) + 1
 ... if start==0: break
 ... count += 1
 ... return count
 ...
 >>> ovcount(s, '0010')
 2
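A regex lookahead gives the same overlapped count without the explicit loop,
since a zero-width assertion can match at every start position (a sketch,
Python 3 print syntax):

```python
import re

def ovcount_re(s, sub):
    # (?=...) consumes nothing, so overlapping occurrences all match
    return len(re.findall('(?=%s)' % re.escape(sub), s))

print(ovcount_re('0010010', '0010'))  # -> 2
```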

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Scanning a file

2005-10-29 Thread Bengt Richter
On Fri, 28 Oct 2005 20:03:17 -0700, [EMAIL PROTECTED] (Alex Martelli) wrote:

>Mike Meyer <[EMAIL PROTECTED]> wrote:
>   ...
>> Except if you can't read the file into memory because it's to large,
>> there's a pretty good chance you won't be able to mmap it either.  To
>> deal with huge files, the only option is to read the file in in
>> chunks, count the occurences in each chunk, and then do some fiddling
>> to deal with the pattern landing on a boundary.
>
>That's the kind of things generators are for...:
>
>def byblocks(f, blocksize, overlap):
>    block = f.read(blocksize)
>    yield block
>    while block:
>        block = block[-overlap:] + f.read(blocksize-overlap)
>        if block: yield block
>
>Now, to look for a substring of length N in an open binary file f:
>
>f = open(whatever, 'b')
>count = 0
>for block in byblocks(f, 1024*1024, len(subst)-1):
>    count += block.count(subst)
>f.close()
>
>not much "fiddling" needed, as you can see, and what little "fiddling"
>is needed is entirely encompassed by the generator...
>
Do I get a job at google if I find something wrong with the above? ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Scanning a file

2005-10-29 Thread Bengt Richter
On Sat, 29 Oct 2005 10:34:24 +0200, Peter Otten <[EMAIL PROTECTED]> wrote:

>Bengt Richter wrote:
>
>> On Fri, 28 Oct 2005 20:03:17 -0700, [EMAIL PROTECTED] (Alex Martelli)
>> wrote:
>> 
>>>Mike Meyer <[EMAIL PROTECTED]> wrote:
>>>   ...
>>>> Except if you can't read the file into memory because it's to large,
>>>> there's a pretty good chance you won't be able to mmap it either.  To
>>>> deal with huge files, the only option is to read the file in in
>>>> chunks, count the occurences in each chunk, and then do some fiddling
>>>> to deal with the pattern landing on a boundary.
>>>
>>>That's the kind of things generators are for...:
>>>
>>>def byblocks(f, blocksize, overlap):
>>>    block = f.read(blocksize)
>>>    yield block
>>>    while block:
>>>        block = block[-overlap:] + f.read(blocksize-overlap)
>>>        if block: yield block
>>>
>>>Now, to look for a substring of length N in an open binary file f:
>>>
>>>f = open(whatever, 'b')
>>>count = 0
>>>for block in byblocks(f, 1024*1024, len(subst)-1):
>>>    count += block.count(subst)
>>>f.close()
>>>
>>>not much "fiddling" needed, as you can see, and what little "fiddling"
>>>is needed is entirely encompassed by the generator...
>>>
>> Do I get a job at google if I find something wrong with the above? ;-)
>
>Try it with a subst of length 1. Seems like you missed an opportunity :-)
>
I was thinking this was an example a la Alex's previous discussion
of interviewee code challenges ;-)

What struck me was

 >>> gen = byblocks(StringIO.StringIO('no'),1024,len('end?')-1)
 >>> [gen.next() for i in xrange(10)]
 ['no', 'no', 'no', 'no', 'no', 'no', 'no', 'no', 'no', 'no']
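The non-termination comes from the carried-over tail: once read() starts
returning '', block[-overlap:] is still 'no', so the while test never sees an
empty block. One way to make it terminate (a sketch in Python 3 syntax, not
the thread's final word) is to test what read() actually returned:

```python
from io import StringIO

def byblocks_fixed(f, blocksize, overlap):
    # stop when read() returns nothing new, instead of testing the
    # carried-over tail (which never empties once the file is drained)
    block = f.read(blocksize)
    while block:
        yield block
        tail = block[-overlap:] if overlap > 0 else ''
        nxt = f.read(blocksize - overlap)
        if not nxt:
            break
        block = tail + nxt

print(list(byblocks_fixed(StringIO('no'), 1024, len('end?') - 1)))  # -> ['no']
```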

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How do I sort these?

2005-10-29 Thread Bengt Richter
On Fri, 28 Oct 2005 20:00:42 +0100, Steve Holden <[EMAIL PROTECTED]> wrote:

>KraftDiner wrote:
>> I have two lists.
>> I want to sort by a value in the first list and have the second list
>> sorted as well... Any suggestions on how I should/could do this?
>> 
> >>> first = [1, 3, 5, 7, 9, 2, 4, 6, 8]
> >>> second = ['one', 'three', 'five', 'seven', 'nine', 'two', 'four', 
>'six', 'eight']
> >>> both = zip(first, second)
> >>> both.sort()
> >>> [b[0] for b in both]
>[1, 2, 3, 4, 5, 6, 7, 8, 9]
> >>> [b[1] for b in both]
>['one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine']
> >>>
>
>You mean like this?
ISTM there could be a subtle requirement in the way the OP stated what he wanted
to do. I.e., it sounds like he would like to sort the first list and have a
second list undergo the same shuffling as was required to sort the first. That's
different from having the data of the second participate in the sort as
order-determining data, if equals in the first list are not to be re-ordered:

 >>> first = [2]*5 + [1]*5
 >>> first
 [2, 2, 2, 2, 2, 1, 1, 1, 1, 1]
 >>> sorted(first)
 [1, 1, 1, 1, 1, 2, 2, 2, 2, 2]
 >>> second = [chr(ord('A')+i) for i in xrange(9,-1,-1)]
 >>> second
 ['J', 'I', 'H', 'G', 'F', 'E', 'D', 'C', 'B', 'A']
 >>> sorted(second)
 ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J']

Now the zipped sort unzipped:
 >>> zip(*sorted(zip(first,second)))
 [(1, 1, 1, 1, 1, 2, 2, 2, 2, 2), ('A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J')]

Now suppose we sort the first and use the elements' indices to preserve order
where equal:
 >>> sorted((f,i) for i,f in enumerate(first))
 [(1, 5), (1, 6), (1, 7), (1, 8), (1, 9), (2, 0), (2, 1), (2, 2), (2, 3), (2, 4)]

Separate out the first list elements:
 >>> [t[0] for t in sorted((f,i) for i,f in enumerate(first))]
 [1, 1, 1, 1, 1, 2, 2, 2, 2, 2]

Now select from the second list, by first-element position correspondence:
 >>> [second[t[1]] for t in sorted((f,i) for i,f in enumerate(first))]
 ['E', 'D', 'C', 'B', 'A', 'J', 'I', 'H', 'G', 'F']

Which did the OP really want? ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How do I sort these?

2005-10-30 Thread Bengt Richter
On Sun, 30 Oct 2005 10:13:42 +0100, Peter Otten <[EMAIL PROTECTED]> wrote:

>Bengt Richter wrote:
>
[...]
>> Now select from the second list, by first-element position correspondence:
>>  >>> [second[t[1]] for t in sorted((f,i) for i,f in enumerate(first))]
>>  ['E', 'D', 'C', 'B', 'A', 'J', 'I', 'H', 'G', 'F']
>> 
>> Which did the OP really want? ;-)
>
>I don't know, but there certainly is no subtle requirement to not provide
>the key argument:
>
>>>> import operator
>>>> first = [2]*5 + [1]*5
>>>> second = list(reversed("ABCDEFGHIJ"))
>>>> [s for f, s in sorted(zip(first, second))]
>['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J']
>>>> [s for f, s in sorted(zip(first, second), key=operator.itemgetter(0))]
>['E', 'D', 'C', 'B', 'A', 'J', 'I', 'H', 'G', 'F']
>
D'oh yeah, forgot about key ;-/ (and this kind of problem probably at least
partly motivated its introduction, so it should have jumped to mind).
Thanks, your version is much cleaner ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Scanning a file

2005-10-30 Thread Bengt Richter
On Sat, 29 Oct 2005 21:10:11 +0100, Steve Holden <[EMAIL PROTECTED]> wrote:

>Peter Otten wrote:
>> Bengt Richter wrote:
>> 
>> 
>>>What struck me was
>>>
>>>
>>>>>> gen = byblocks(StringIO.StringIO('no'),1024,len('end?')-1)
>>>>>> [gen.next() for i in xrange(10)]
>>>
>>>['no', 'no', 'no', 'no', 'no', 'no', 'no', 'no', 'no', 'no']
>> 
>> 
>> Ouch. Seems like I spotted the subtle cornercase error and missed the big
>> one.
>> 
>
>No, you just realised subconsciously that we'd all spot the obvious one 
>and decided to point out the bug that would remain after the obvious one 
>had been fixed.
>
I still smelled a bug in the counting of substring in the overlap region,
and you motivated me to find it (obvious in hindsight, but aren't most ;-)

A substring can get over-counted if the "overlap" region joins infelicitously
with the next input. E.g., try counting 'xx' in 10*'xx' with a read chunk of 4
instead of 1024*1024:

Assuming corrections so far posted as I understand them:

 >>> def byblocks(f, blocksize, overlap):
 ... block = f.read(blocksize)
 ... yield block
 ... if overlap>0:
 ... while True:
 ... next = f.read(blocksize-overlap)
 ... if not next: break
 ... block = block[-overlap:] + next
 ... yield block
 ... else:
 ... while True:
 ... next = f.read(blocksize)
 ... if not next: break
 ... yield next
 ...
 >>> def countsubst(f, subst, blksize=1024*1024):
 ... count = 0
 ... for block in byblocks(f, blksize, len(subst)-1):
 ... count += block.count(subst)
 ... f.close()
 ... return count
 ...

 >>> from StringIO import StringIO as S
 >>> countsubst(S('xx'*10), 'xx',  4)
 13
 >>> ('xx'*10).count('xx')
 10
 >>> list(byblocks(S('xx'*10), 4, len('xx')-1))
 ['xxxx', 'xxxx', 'xxxx', 'xxxx', 'xxxx', 'xxxx', 'xx']

Of course, a large read chunk will make the problem either
go away

 >>> countsubst(S('xx'*10), 'xx',  1024)
 10

or might make it low probability depending on the data.
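For the record: if *overlapping* match semantics are what's wanted, the chunked
scheme can be made exact. With an overlap of len(sub)-1, no occurrence can lie
wholly inside the carried tail, so every occurrence is counted in exactly one
block. A sketch (Python 3 syntax) using a lookahead regex as the per-block
counter:

```python
import re
from io import StringIO

def count_overlapping(f, sub, blksize=1024 * 1024):
    # zero-width lookahead counts overlapped matches within a block
    pat = re.compile('(?=%s)' % re.escape(sub))
    overlap = len(sub) - 1
    count = 0
    block = f.read(blksize)
    while block:
        count += len(pat.findall(block))
        tail = block[-overlap:] if overlap else ''
        nxt = f.read(blksize - overlap)
        if not nxt:
            break
        block = tail + nxt
    return count

print(count_overlapping(StringIO('xx' * 10), 'xx', blksize=4))  # -> 19
```

(This also handles the subst-of-length-1 corner case, since overlap is then 0.)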

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Controlling output using print with format string

2005-10-30 Thread Bengt Richter
On Sun, 30 Oct 2005 18:44:06 -0600, Paul Watson <[EMAIL PROTECTED]> wrote:

>It is clear that just using 'print' with variable names is relatively 
>uncontrollable.  However, I thought that using a format string would 
>reign the problem in and give the desired output.
>
>Must I resort to sys.stdout.write() to control output?
>
Maybe. But I wouldn't say "uncontrollable" -- print is pretty predictable.

>$ python
>Python 2.4.1 (#1, Jul 19 2005, 14:16:43)
>[GCC 4.0.0 20050519 (Red Hat 4.0.0-8)] on linux2
>Type "help", "copyright", "credits" or "license" for more information.
> >>> s = 'now is the time'
> >>> for c in s:
>... print c,
>...
>n o w   i s   t h e   t i m e
> >>> for c in s:
>... print "%c" % (c),
>...
>n o w   i s   t h e   t i m e
> >>> for c in s:
>... print "%1c" % (c),
>...
>n o w   i s   t h e   t i m e

If you like C, you can make something pretty close to printf:

 >>> import sys
 >>> def printf(fmt, *args):
 ... s = fmt%args
 ... sys.stdout.write(s)
 ... return len(s)
 ...
 >>> s = 'now is the time'
 >>> for c in s:
 ... printf('%s', c)
 ...
 n1
 o1
 w1
  1
 i1
 s1
  1
 t1
 h1
 e1
  1
 t1
 i1
 m1
 e1

Oops, interactively you probably  want to do something other
than implicity print the printf return value ;-)

 >>> s = 'now is the time'
 >>> for c in s:
 ... nc = printf('%s', c)
 ...
 now is the time>>>
 >>> for c in s:
 ... nc = printf('%c', c)
 ...
 now is the time>>>
 >>> for c in s:
 ... nc = printf('%1c', c)
 ...
 now is the time>>>
 
Just to show multiple args, you could pass all the characters
separately, but at once, e.g., (of course you need a format to match)

 >>> printf('%s'*len(s)+'\n', *s)
 now is the time
 16

 >>> printf('%s .. %s .. %s .. %s\n', *s.split())
 now .. is .. the .. time
 25

Or just don't return anything (None by default) from printf if you
just want to use it interactively. Whatever.

Your character-by-character output loop doesn't give much of a clue
to what obstacle you are really encountering ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Scanning a file

2005-10-31 Thread Bengt Richter
On Mon, 31 Oct 2005 09:41:02 +0100, Lasse Vågsæther Karlsen <[EMAIL PROTECTED]> wrote:

>David Rasmussen wrote:
>
>> If you must know, the above one-liner actually counts the number of 
>> frames in an MPEG2 file. I want to know this number for a number of 
>> files for various reasons. I don't want it to take forever.
>
>
>Don't you risk getting more "frames" than the file actually have? What 
>if the encoded data happens to have the magic byte values for something 
>else?
>
Good point, but perhaps the bit pattern the OP is looking for is guaranteed
(e.g. by some kind of HDLC-like bit or byte stuffing or escaping) not to occur
except as frame marker (which might make sense re the problem of re-synching
to frames in a glitched video stream).

The OP probably knows. I imagine this thread would have gone differently if the
title had been "How to count frames in an MPEG2 file?" and the OP had supplied
the info about what marks a frame and whether it is guaranteed not to occur in
the data ;-)
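Assuming the marker is the MPEG-2 picture start code 00 00 01 00 (an assumption
-- the OP never said), a chunked counter would look something like this sketch
(Python 3 syntax; it only yields a true frame count if the marker really cannot
occur inside payload data, which is exactly the open question):

```python
import re
from io import BytesIO

def count_frames(f, marker=b'\x00\x00\x01\x00', blksize=1024 * 1024):
    # lookahead regex so marker occurrences can't be miscounted at
    # chunk boundaries; overlap of len(marker)-1 bytes carried between reads
    pat = re.compile(b'(?=' + re.escape(marker) + b')')
    overlap = len(marker) - 1
    count = 0
    block = f.read(blksize)
    while block:
        count += len(pat.findall(block))
        tail = block[-overlap:]
        nxt = f.read(blksize - overlap)
        if not nxt:
            break
        block = tail + nxt
    return count

fake = b'junk' + b'\x00\x00\x01\x00' + b'payload' + b'\x00\x00\x01\x00' + b'end'
print(count_frames(BytesIO(fake), blksize=8))  # -> 2
```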

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Scanning a file

2005-10-31 Thread Bengt Richter
On Mon, 31 Oct 2005 09:19:10 +0100, Peter Otten <[EMAIL PROTECTED]> wrote:

>Bengt Richter wrote:
>
>> I still smelled a bug in the counting of substring in the overlap region,
>> and you motivated me to find it (obvious in hindsight, but aren't most ;-)
>> 
>> A substring can get over-counted if the "overlap" region joins
>> infelicitously with the next input. E.g., try counting 'xx' in 10*'xx'
>> with a read chunk of 4 instead of 1024*1024:
>> 
>> Assuming corrections so far posted as I understand them:
>> 
>>  >>> def byblocks(f, blocksize, overlap):
>>  ... block = f.read(blocksize)
>>  ... yield block
>>  ... if overlap>0:
>>  ... while True:
>>  ... next = f.read(blocksize-overlap)
>>  ... if not next: break
>>  ... block = block[-overlap:] + next
>>  ... yield block
>>  ... else:
>>  ... while True:
>>  ... next = f.read(blocksize)
>>  ... if not next: break
>>  ... yield next
>>  ...
>>  >>> def countsubst(f, subst, blksize=1024*1024):
>>  ... count = 0
>>  ... for block in byblocks(f, blksize, len(subst)-1):
>>  ... count += block.count(subst)
>>  ... f.close()
>>  ... return count
>>  ...
>> 
>>  >>> from StringIO import StringIO as S
>>  >>> countsubst(S('xx'*10), 'xx',  4)
>>  13
>>  >>> ('xx'*10).count('xx')
>>  10
>>  >>> list(byblocks(S('xx'*10), 4, len('xx')-1))
>>  ['xxxx', 'xxxx', 'xxxx', 'xxxx', 'xxxx', 'xxxx', 'xx']
>> 
>> Of course, a large read chunk will make the problem either
>> go away
>> 
>>  >>> countsubst(S('xx'*10), 'xx',  1024)
>>  10
>> 
>> or might make it low probability depending on the data.
>
>[David Rasmussen]
>
>> First of all, this isn't a text file, it is a binary file. Secondly,
>> substrings can overlap. In the sequence 0010010 the substring 0010
>> occurs twice.
The OP didn't reply to my post re the above for some reason
   http://groups.google.com/group/comp.lang.python/msg/dd4125bf38a54b7c?hl=en&;

>
>Coincidentally the "always overlap" case seems the easiest to fix. It
>suffices to replace the count() method with
>
>def count_overlap(s, token):
>    pos = -1
>    n = 0
>    while 1:
>        try:
>            pos = s.index(token, pos+1)
>        except ValueError:
>            break
>        n += 1
>    return n
>
>Or so I hope, without the thorough tests that are indispensable as we should
>have learned by now...
>
Unfortunately, there is such a thing as a correct implementation of an incorrect
spec ;-) I have some doubts about the OP's really wanting to count overlapping
patterns as above, which is what I asked about in the above referenced post.
Elsewhere he later reveals:

[David Rasmussen]
>> If you must know, the above one-liner actually counts the number of 
>> frames in an MPEG2 file. I want to know this number for a number of 
>> files for various reasons. I don't want it to take forever.

In which case I doubt whether he wants to count as above. Scanning for the
particular 4 bytes would assume that non-frame-marker data is escaped one way or
another so it can't contain the marker byte sequence. (If it did, you'd want to
skip it, not count it, I presume). Robust streaming video format would
presumably be designed for unambiguous re-synching, meaning the data stream
can't contain the sync mark. But I don't know if that is guaranteed in
conversion from file to stream a la HDLC or some link packet protocol or whether
it is actually encoded with escaping in the file. If framing in the file is with
length-specifying packet headers and no data escaping, then the
filebytes.count(pattern) approach is not going to do the job reliably, as Lasse
was pointing out.

Requirements, requirements ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: frozenset/subclassing/keyword args

2005-10-31 Thread Bengt Richter
On Mon, 31 Oct 2005 19:31:33 GMT, "Mark E. Fenner" <[EMAIL PROTECTED]> wrote:

>Hello all,
>
>I was migrating some code from sets.ImmutableSet to frozenset and noticed
>the following:
>
>**code
>#!/usr/bin/env python
>
>from sets import ImmutableSet
>
>
>class MSet1(ImmutableSet):
>    def __init__(self, iterArg, myName="foo"):
>        ImmutableSet.__init__(self, iterArg)
>        self.name = myName
>
>
>class MSet2(frozenset):
>    def __init__(self, iterArg, myName="foo"):
>        frozenset.__init__(self, iterArg)
>        self.name = myName
>
>
>m1 = MSet1([1,2,3], myName = "donkey")
>print m1
>print m1.name
>
>m2 = MSet2([1,2,3], myName = "kong")
>print m2
>print m2.name
>*end code**
>
>*run**
>MSet1([1, 2, 3])
>donkey
>Traceback (most recent call last):
>  File "./setTest.py", line 22, in ?
>m2 = MSet2([1,2,3], myName = "kong")
>TypeError: frozenset() does not take keyword arguments
>*end run
>
>I'm missing something and couldn't find it in the docs.

Without researching it, I would guess that you have to override __new__
so as not to pass through the myName arg to the otherwise inherited and
called-with-all-arguments __new__ of the base class. You could take care
of the myName arg in the __new__ method too (by temporarily binding the
instance returned by frozenset.__new__ and assigning the name attribute
before returning the instance), or you can define __init__ to do that part.
See many various posted examples of subclassing immutable types.
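A sketch of the __new__ route (Python 3 syntax):

```python
class MSet2(frozenset):
    # frozenset is immutable: the elements must be fixed in __new__,
    # and that's also where the extra keyword has to be absorbed so it
    # never reaches frozenset's own constructor machinery
    def __new__(cls, iterArg, myName="foo"):
        self = frozenset.__new__(cls, iterArg)
        self.name = myName
        return self

m2 = MSet2([1, 2, 3], myName="kong")
print(sorted(m2), m2.name)  # -> [1, 2, 3] kong
```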

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Reuse base-class implementation of classmethod?

2005-10-31 Thread Bengt Richter
On Tue, 01 Nov 2005 00:24:37 GMT, "Giovanni Bajo" <[EMAIL PROTECTED]> wrote:

>Hello,
>
>what's the magic needed to reuse the base-class implementation of a
>classmethod?
>
>class A(object):
>    @classmethod
>    def foo(cls, a, b):
>        # do something
>        pass
>
>class B(A):
>    @classmethod
>    def foo(cls, a, b):
>        A.foo(cls, a, b)   # WRONG!
>
>I need to call the base-class classmethod to reuse its implementation, but I'd
>like to pass the derived class as first argument.
>-- 
>Giovanni Bajo
>
>
Maybe define a funny-class-method decorator?
(nothing below tested beyond what you see ;-)

 >>> def funnycm(m):
 ... return property(lambda cls: m.__get__(type(cls), type(type(cls))))
 ...
 >>> class A(object):
 ... @funnycm
 ... def foo(cls, a, b):
 ... print cls, a, b # do something
 ...
 >>> class B(A):
 ... pass  # just inherit
 ...
 >>> a=A()
 >>> a.foo(1,2)
 <class '__main__.A'> 1 2
 >>> b=B()
 >>> b.foo(1,2)
 <class '__main__.B'> 1 2

Or more directly, a custom descriptor (with dynamic method replacement
for good measure ;-)

 >>> class funnycm(object):
 ... def __init__(self, f): self._f = f
 ... def __get__(self, inst, cls=None):
 ... return self._f.__get__(inst is None and cls or type(inst))
 ... def __set__(self, inst, m): self._f = m  # replace method
 ...
 >>> class A(object):
 ... @funnycm
 ... def foo(cls, a, b):
 ... print cls, a, b # do something
 ...
 >>> class B(A):
 ... pass  # just inherit
 ...
 >>> a=A()
 >>> a.foo(1,2)
 <class '__main__.A'> 1 2
 >>> b=B()
 >>> b.foo(1,2)
 <class '__main__.B'> 1 2
 >>> A.foo(3,4)
 <class '__main__.A'> 3 4
 >>> B.foo(3,4)
 <class '__main__.B'> 3 4
 >>> a.foo = lambda cls, *args: repr(args)
 >>> a.foo('what','now','then?')
 "('what', 'now', 'then?')"
 >>> b.foo('eh?')
 "('eh?',)"
 >>> B.foo()
 '()'

Vary to taste ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Running autogenerated code in another python instance

2005-11-02 Thread Bengt Richter
On Wed, 2 Nov 2005 06:08:22 +0000 (UTC), Paul Cochrane <[EMAIL PROTECTED]> wrote:

>Hi all,
>
>I've got an application that I'm writing that autogenerates python code
>which I then execute with exec().  I know that this is not the best way to
>run things, and I'm not 100% sure as to what I really should do.  I've had a
>look through Programming Python and the Python Cookbook, which have given me
>ideas, but nothing has gelled yet, so I thought I'd put the question to the
>community.  But first, let me be a little more detailed in what I want to
>do:
>
[...]
>
>Any help or advice would be really (really!) appreciated.
>
It's a little hard to tell without knowing more about your
user input (command language?) syntax that is translated to
or feeds the process that "autogenerates python code".

E.g., is it a limited python subset that you are accepting as input,
or a command language that you might implement using the cmd module, or ?
There are lots of easy things you could do without generating and exec-ing
python code per se. How complex is a user session state? What modifies it?
What actions are possible? History? Undo? Is the data a static passive
resource to view, or partly generated or made accessible by prior actions
in a session? Single user or collaborative? Shared access to everything or
only to static data? Etc., etc.

Some examples of user input and corresponding generated python might help,
with some idea of the kind of visualization aspects being controlled ;-)
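For the cmd-module route mentioned above, the skeleton is small: each do_*
method handles one command, so nothing user-supplied ever reaches exec. (A
sketch in Python 3 syntax; the 'load' command and its argument are invented.)

```python
import cmd
import io

class VizShell(cmd.Cmd):
    # one do_* method per user command; no exec() anywhere
    prompt = '(viz) '

    def do_load(self, arg):
        self.stdout.write('loading %s\n' % arg)

    def do_quit(self, arg):
        return True        # a true return value stops cmdloop()

out = io.StringIO()
shell = VizShell(stdout=out)
shell.onecmd('load data.vtk')  # dispatches to do_load
print(out.getvalue().strip())  # -> loading data.vtk
```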

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


With & marcos via import hooking? (Was Re: Scanning a file)

2005-11-02 Thread Bengt Richter
On Tue, 01 Nov 2005 07:14:57 -0600, Paul Watson <[EMAIL PROTECTED]> wrote:

>Paul Rubin wrote:
>> [EMAIL PROTECTED] (John J. Lee) writes:
>> 
>>>Closing off this particular one would make it harder to get benefit of
>>>non-C implementations of Python, so it has been judged "not worth it".
>>>I think I agree with that judgement.
>> 
>> 
>> The right fix is PEP 343.
>
>I am sure you are right.  However, PEP 343 will not change the existing 
>body of Python source code.  Nor will it, alone, change the existing 
>body of Python programmers who are writing code which does not close files.

It might be possible to recompile existing code (unchanged) to capture most
typical cpython use cases, I think...

E.g., I can imagine a family of command line options based on hooking import on
startup and passing option info to the selected and hooked import module,
which module would do extra things at the AST stage of compiling and executing
modules during import, to accomplish various things.

(I did a little proof of concept a while back, see

http://mail.python.org/pipermail/python-list/2005-August/296594.html

that gives me the feeling I could do this kind of thing).

E.g., for the purposes of guaranteeing close() on files opened in typical
cpython one-liners (or single-suiters) like e.g.

for i, line in enumerate(open(fpath)):
    print '%04d: %s' %(i, line.rstrip())

I think a custom import could recognize the open call in the AST and extract it
and wrap it up in a try/finally AST structure implementing something like the
following in the place of the above:
something like the following in the place of the above;

__f = open(fpath) # (suitable algorithm for non-colliding __f names is required)
try:
    for i, line in enumerate(__f):
        print '%04d: %s' %(i, line.rstrip())
finally:
    __f.close()

In this case, the command line info passed to the special import might look like
python -with open script.py

meaning calls of open in a statement/suite should be recognized and extracted
like __f = open(fpath) above, and the try/finally be wrapped around the use of
it.
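For reference, that __f/try/finally shape is exactly what PEP 343's
with-statement packages up; once it lands, the wrapped form reduces to
"with open(fpath) as f: ...". A runnable sketch (Python 3 syntax; the temporary
file is invented for the demo):

```python
import os
import tempfile

# make a small file to read
fd, fpath = tempfile.mkstemp()
os.write(fd, b'one\ntwo\n')
os.close(fd)

lines = []
with open(fpath) as f:      # guarantees f.close(), like the expansion above
    for i, line in enumerate(f):
        lines.append('%04d: %s' % (i, line.rstrip()))

print(lines)                # -> ['0000: one', '0001: two']
assert f.closed             # closed on exiting the with-block
os.remove(fpath)
```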

I think this would capture a lot of typical usage, but of course I haven't
bumped into the gotchas yet, since I haven't implemented it ;-)

On a related note, I think one could implement macros of a sort in a similar
way. The command line parameter would pass the name of a class which is actually
extracted at AST-time, and whose methods and other class variables represent
macro definitions to be used in the processing of the rest of the module's AST,
before compilation per se.

Thus you could implement e.g. in-lining, so that


#example.py
class inline:
    def mac(acc, x, y):
        acc += x*y

tot = 0
for i in xrange(10):
    mac(tot, i, i)


Could be run with

python -macros inline example.py

and get the same identical .pyc as you would with the source


#example.py
tot = 0
for i in xrange(10):
    tot += i*i


IOW, a copy of the macro body AST is substituted for the macro call AST, with
parameter names translated to actual macro call arg names. (Another variant
would also permit putting the macros in a separate module, and recognize their
import into other modules, and "do the right thing" instead of just translating
the import. Maybe specify the module by python - macromodule inline example.py
and then recognize "import inline" in example.py's AST).

Again, I just have a hunch I could make this work (and a number of people
here could beat me to it if they were motivated, I'm sure). Also have a hunch
I might need some flame shielding. ;-)

OTOH, it could be an easy way to experiment with some kinds of language
tweaks. The only limitation really is the necessity for the source to
look legal enough that an AST is formed and preserves the requisite info.
After that, there's no limit to what an AST-munger could do, especially
if it is allowed to call arbitrary tools and create auxiliary files such
as e.g. .dlls for synthesized imports plugging stuff into the final translated
context ;-) (I imagine this is essentially what the various machine code
generating optimizers do).

IMO the concept of modules and their (optionally specially controlled)
translation and use could evolve in many interesting directions. E.g.,
__import__ could grow keyword parameters too ... Good thing there is a BDFL
with a veto, eh? ;-)

Should I bother trying to implement this import for with and macros from
the pieces I have (plus imp, to do it "right") ?

BTW, I haven't experimented with command line dependent site.py/sitecustomize.py
stuff. Would that be a place to do sessionwise import hooking and could one
rewrite sys.argv so the special import command line opts would not be visible to
subsequent processing (and the import hook would be in effect)? IWT so, but
probably should read site.py again and figure it out, but appreciate any hints
on pitfalls ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: expanding dictionary to function arguments

2005-11-02 Thread Bengt Richter
On 1 Nov 2005 17:17:00 -0800, "Noah" <[EMAIL PROTECTED]> wrote:

>I have a dictionary that I would like to expand to satisfy a
>function's agument list. I can used the ** syntax to pass a dictionary,
>but
>this only works if each key in the dictionary matches an argument.
>I cannot pass a dictionary that has more keys than the function has
>arguments.
[...]
>I created the following function to do what I am describing.
>This isn't too bad, but I thought that perhaps there was some
>secret Python syntax that will do this for me.
>
>def apply_smart (func, args):
>"""This is similar to func(**args), but this won't complain about
>extra keys in 'args'. This ignores keys in 'args' that are
>not required by 'func'. This passes None to arguments that are
>not defined in 'args'. That's fine for arguments with a default
>valeue, but
>that's a bug for required arguments. I should probably raise a
>TypeError.
>"""
Ok, so why not do it? ;-)
>if hasattr(func,'im_func'): # Handle case when func is a class
>method.
>func = func.im_func
 skipself = True
 else: skipself = False
>argcount = func.func_code.co_argcount
Make arg list and call with it instead of
>required_args = dict([(k,args.get(k)) for k in
>func.func_code.co_varnames[:argcount]])
>return func(**required_args)
 try:
     required_args = [args[k] for k in func.func_code.co_varnames[skipself:argcount]]
 except KeyError:
     raise TypeError, '%s(...) missing arg %r'%(func.func_name, k)
 return func(*required_args)

>
>So, I now I can do this:
>options = read_config ("options.conf")
>apply_smart (extract_audio, options)
>apply_smart (compress_audio, options)
>apply_smart (mux, options)
>
>Neat, but is that the best I can do?
>
I suppose you could replace your local bindings of extract_audio,
compress_audio, and mux with wrapper functions of the same name that could cache
the func and arg names in closure variables, e.g., using a decorator function
(virtually untested)

def call_with_args_from_dict(func):
    argnames = func.func_code.co_varnames[hasattr(func, 'im_func'):func.func_code.co_argcount]
    ndefaults = len(func.func_defaults or ())
    if ndefaults:
        defnames = argnames[-ndefaults:]
        argnames = argnames[:-ndefaults]
    else:
        defnames = []
    def _f(**args):
        try:
            actualargs = [args[argname] for argname in argnames]
            for argname in defnames:
                if argname not in args: break
                actualargs.append(args[argname])
        except KeyError: raise TypeError, '%s(...) missing arg(s) %r'%(
            func.func_name, [argname for argname in argnames if argname not in args])
        return func(*actualargs)
    _f.func_name = func.func_name
    return _f


and then wrap like

extract_audio = call_with_args_from_dict(extract_audio)

or use as a decorator if you are defining the function to be wrapped, e.g.,
  
@call_with_args_from_dict
def mux(firstarg, second, etc):
...
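In later Pythons the same idea can lean on inspect.signature instead of poking
at func_code; a sketch (the mux function and its options are invented, and
*args/**kwargs parameters aren't handled):

```python
import inspect

def apply_smart(func, args):
    # pass only what the signature names; error on missing required args
    sig = inspect.signature(func)
    passed = {}
    for name, p in sig.parameters.items():
        if name in args:
            passed[name] = args[name]
        elif p.default is p.empty:
            raise TypeError('%s(...) missing arg %r' % (func.__name__, name))
    return func(**passed)

def mux(video, audio, container='mkv'):      # invented example target
    return (video, audio, container)

options = {'video': 'v.m2v', 'audio': 'a.mp2', 'bitrate': 9000}
print(apply_smart(mux, options))  # -> ('v.m2v', 'a.mp2', 'mkv')
```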

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Most efficient way of storing 1024*1024 bits

2005-11-02 Thread Bengt Richter
On Wed, 2 Nov 2005 13:55:10 +0100, "Tor Erik Sønvisen" <[EMAIL PROTECTED]> wrote:

>Hi
>
>I need a time and space efficient way of storing up to 6 million bits. Time 
>efficency is more important then space efficency as I'm going to do searches 
>through the bit-set.
>
Very dependent on what kind of "searches" -- e.g., 1024*1024 suggests the
possibility of two dimensions. Quad-trees? How sparse is the data? Etc.
What kinds of patterns are you going to search for?
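As a baseline while the requirements are unclear: 1024*1024 bits fit in a
128 KiB bytearray-backed bit set with O(1) get/set (a sketch, Python 3 syntax):

```python
class BitSet(object):
    # nbits bits packed 8 per byte; 1024*1024 bits -> 128 KiB
    def __init__(self, nbits):
        self.data = bytearray((nbits + 7) // 8)

    def set(self, i):
        self.data[i >> 3] |= 1 << (i & 7)

    def get(self, i):
        return (self.data[i >> 3] >> (i & 7)) & 1

bits = BitSet(1024 * 1024)
bits.set(123456)
print(bits.get(123456), bits.get(123457))  # -> 1 0
```

Whether this beats a set of integers, or a quad-tree, still depends entirely on
the answers to the questions above.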

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Hexadecimal Conversion in Python

2005-11-02 Thread Bengt Richter
On 2 Nov 2005 12:28:26 -0800, "DaBeef" <[EMAIL PROTECTED]> wrote:

>Hello, I am reading in a socket message from a server and am only
>receiving this ''.  Now obviously it is in the wrong format.  How
>would I convert these bys in Python, I have looked everywhere but I do
>not see much documentation on converting ptyhon types to other data
>types.
>Any Help would be appreciated.
>
print repr(msg)
where msg is what you _actually_ read (and tell us how you got the message in,
BTW) and show us a copy/pasted copy from your screen.
Unless you are very very good at descriptions, it's hard to beat presentation of
machine representations of what you are talking about ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Hexadecimal Conversion in Python

2005-11-02 Thread Bengt Richter
On 2 Nov 2005 12:53:45 -0800, "DaBeef" <[EMAIL PROTECTED]> wrote:

>I have been coding for 5 years.  This is a proprietary protocol, so it
>is difficult converting.  I did this in java but was just able to
>convert a stream.  I looked through the Python library, I am more or
>less getting backa  string represented as a ""   So now I want to
>convert it to all the hexa, bin until I see a match and can then work
>teh rest of my program
>
Maybe the binascii module's hexlify will get you into territory more
familiar to you? Python generally stores byte data as type str "strings."
If you want to see the bytes as hex (a string of hex characters ;-) you can 
e.g.,

 >>> import binascii
 >>> binascii.hexlify('ABC123...\x01\x02\x03')
 '4142433132332e2e2e010203'

To convert individual character, you can use a format string on the ordinal 
value

 >>> for c in 'ABC123...\x01\x02\x03': print '%02X'%ord(c),
 ...
 41 42 43 31 32 33 2E 2E 2E 01 02 03

Or perhaps you really want the integer ordinal value itself?

 >>> for c in 'ABC123...\x01\x02\x03': print ord(c),
 ...
 65 66 67 49 50 51 46 46 46 1 2 3

(print obviously does a conversion to decimal string representation for output)

If you are interested in the bits, you can check them with bit operations, e.g.,

 >>> for c in 'ABC123...\x01\x02\x03':
 ... print ''.join(chr(48+((ord(c)>>b)&1)) for b in xrange(7,-1,-1)),
 ...
 01000001 01000010 01000011 00110001 00110010 00110011 00101110 00101110 00101110 00000001 00000010 00000011

(cf. 41 42 43 etc above)
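(In later Pythons the bit-fiddling join can be spelled with format codes --
'02X' for the hex and '08b' for the bits:)

```python
s = 'ABC123...\x01\x02\x03'
print(' '.join(format(ord(c), '02X') for c in s))  # -> 41 42 43 31 32 33 2E 2E 2E 01 02 03
print(' '.join(format(ord(c), '08b') for c in s))
```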

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Getting a function name from string

2005-11-03 Thread Bengt Richter
On Wed, 02 Nov 2005 19:01:46 -0500, Mike Meyer <[EMAIL PROTECTED]> wrote:

>"Paul McGuire" <[EMAIL PROTECTED]> writes:
>> "David Rasmussen" <[EMAIL PROTECTED]> wrote in message
>> news:[EMAIL PROTECTED]
>>> If I have a string that contains the name of a function, can I call it?
>>> As in:
>>>
>>> def someFunction():
>>> print "Hello"
>>>
>>> s = "someFunction"
>>> s() # I know this is wrong, but you get the idea...
>>>
>>> /David
>>
>> Lookup the function in the vars() dictionary.
>>
>>>>> def fn(x):
>> ...   return x*x
>> ...
>>>>> vars()['fn']
>> <function fn at 0x...>
>>>>> vars()['fn'](100)
>> 10000
>
>vars() sans arguments is just locals, meaning it won't find functions
>in the global name space if you use it inside a function:
>
>>>> def fn(x):
>...  print x
>... 
>>>> def fn2():
>...  vars()['fn']('Hello')
>... 
>>>> fn2()
>Traceback (most recent call last):
>  File "<stdin>", line 1, in ?
>  File "<stdin>", line 2, in fn2
>KeyError: 'fn'
>>>> 
>
>Using globals() in this case will work, but then won't find functions
>defined in the local name space.
>
>For a lot of uses, it'd be better to build the dictionary by hand
>rather than relying on one of the tools that turns a namespace into a
>dictionary.
IMO it would be nice if name lookup were as cleanly
controllable and defined as attribute/method lookup.
Is it a topic for py3k?
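
For illustration, the hand-built dictionary Mike mentions might look like this
(untested sketch; the names are invented):

```python
def greet():
    return "Hello"

def farewell():
    return "Bye"

# explicit name-to-function mapping -- no vars()/globals() games
dispatch = {'greet': greet, 'farewell': farewell}

s = 'greet'
result = dispatch[s]()   # look the callable up by its string name, then call it
```

Only the names you put in the dict are reachable this way, which is usually a
feature rather than a limitation.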

Regards,
Bengt Richter


Re: Anomaly in creating class methods

2005-11-03 Thread Bengt Richter
On 3 Nov 2005 03:19:22 -0800, "venk" <[EMAIL PROTECTED]> wrote:

>Hi,
> given below is my interaction with the interpreter In one case, i
>have created the class method using the "famous idiom"... and in the
>other, i have tried to create it outside the class definition... why
>isn't the latter working ? (of course, the presence of decorators is a
>different issue)
>>>> class D:
>... def f(cls):
>... print cls
>...
>>>> D.f=classmethod(D.f)
>>>> D.f
><bound method classobj.f of <class __main__.D at 0x...>>
>>>> D.f()
>Traceback (most recent call last):
>  File "<stdin>", line 1, in ?
>TypeError: unbound method f() must be called with D instance as first
>argument (got classobj instance instead)
>>>> class D:
>... def f(cls):
>... print cls
>... f=classmethod(f)
>...
>>>> D.f
><bound method classobj.f of <class __main__.D at 0x...>>
>>>> D.f()
>__main__.D
>

I think you are on very thin ice using classmethod for old-style classes.
In any case, you are not providing the same argument to classmethod
in the two examples above. I'm not sure what classmethod tries to do with
an unbound method, but it's not normal usage even in newstyle classes.
It probably just saves the callable argument as an attribute of the
classmethod instance, so the problem doesn't show until the descriptor
machinery comes into play (on attribute access and calling the bound callable).


 >>> def showcmarg(f): print 'cmarg=%r'%f; return classmethod(f)
 ...
 >>> class D:
 ... def f(cls): print cls
 ...
 >>> D.f=showcmarg(D.f)
 cmarg=<unbound method D.f>

Note that that was an unbound method as the arg, that gets stored in the 
descriptor

 >>> D.f
 <bound method classobj.f of <class __main__.D at 0x...>>
 >>> D.f()
 Traceback (most recent call last):
   File "<stdin>", line 1, in ?
 TypeError: unbound method f() must be called with D instance as first argument 
(got classobj instance instead)

That's just the callable (D unbound method) you gave it complaining that the 
descriptor is
passing the intended cls D (a classobj) instead of an instance of D to the 
unbound method.

 >>> class D:
 ... def f(cls): print cls
 ... f = showcmarg(f)
 ...
 cmarg=<function f at 0x...>

Note that this time that was a _function_ arg, because it was picked up as a 
local
binding during the execution of the body of the class definition, where 
classmethod
was called. The function will have no demands except to match its signature 
when the
classmethod descriptor instance calls it with first arg of D.

 >>> D.f
 <bound method classobj.f of <class __main__.D at 0x...>>
 >>> D.f()
 __main__.D

You could extract the normal (function) classmethod arg from the outside though:

 >>> class D:
 ... def f(cls): print cls
 ...
 >>> D.f=showcmarg(D.f.im_func)
 cmarg=<function f at 0x...>
 >>> D.f
 <bound method classobj.f of <class __main__.D at 0x...>>
 >>> D.f()
 __main__.D

Suggest migrating to new style classes consistently ;-)
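
For reference, the same idiom on a new-style class (a minimal sketch; the
method body is invented just to show what gets passed):

```python
class D(object):                 # new-style class
    def f(cls):
        return cls.__name__
    f = classmethod(f)           # wrap the plain function inside the body

from_class = D.f()               # the descriptor passes the class either way
from_instance = D().f()
```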

Regards,
Bengt Richter


Re: Class Variable Access and Assignment

2005-11-03 Thread Bengt Richter
On 3 Nov 2005 12:20:35 GMT, Antoon Pardon <[EMAIL PROTECTED]> wrote:

>Op 2005-11-03, Stefan Arentz schreef <[EMAIL PROTECTED]>:
>> Antoon Pardon <[EMAIL PROTECTED]> writes:
>>
>>> Op 2005-11-03, Steven D'Aprano schreef <[EMAIL PROTECTED]>:
>>> 
>>> >> There are two possible fixes, either by prohibiting instance variables
>>> >> with the same name as class variables, which would allow any reference
>>> >> to an instance of the class assign/read the value of the variable. Or
>>> >> to only allow class variables to be accessed via the class name itself.
>>> >
>>> > There is also a third fix: understand Python's OO model, especially
>>> > inheritance, so that normal behaviour no longer surprises you.
>>> 
>>> No matter wat the OO model is, I don't think the following code
>>> exhibits sane behaviour:
>>> 
>>> class A:
>>>   a = 1
>>> 
>>> b = A()
>>> b.a += 2
>>> print b.a
>>> print A.a
>>> 
>>> Which results in
>>> 
>>> 3
>>> 1
>>
>> I find it confusing at first, but I do understand what happens :-)
>
>I understand what happens too, that doesn't make it sane behaviour.
>
>> But really, what should be done different here?
>
>I don't care what should be different. But a line with only one
>referent to an object in it, shouldn't be referring to two different
>objects.
>
>In the line: b.a += 2, the b.a should be refering to the class variable
>or the object variable but not both. So either it could raise an
>attribute error or add two to the class variable.
>
>Sure one could object to those sematics too, but IMO they are preferable
>to what we have now.
>
A somewhat similar name space problem, where you could argue
that "a"  prior to += should be seen as defined in the outer scope,
but lookahead determines that a is local to inner, period, so that
is the reference that is used (and fails).

 >>> def outer():
 ...     a = 1
 ...     def inner():
 ...         a += 2
 ...         print a
 ...     print 'outer a', a
 ...     inner()
 ...     print 'outer a', a
 ...
 >>> outer()
 outer a 1
 Traceback (most recent call last):
   File "<stdin>", line 1, in ?
   File "<stdin>", line 7, in outer
   File "<stdin>", line 4, in inner
 UnboundLocalError: local variable 'a' referenced before assignment
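
The classic workaround, until there is rebinding syntax, is to box the value in
a mutable container so the inner function never rebinds the name (sketch):

```python
def outer():
    a = [1]               # box the value in a mutable container
    def inner():
        a[0] += 2         # mutates the box; the name 'a' is only read
    inner()
    return a[0]

result = outer()          # no UnboundLocalError this time
```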

Regards,
Bengt Richter


Re: Not Equal to Each Other?

2005-11-03 Thread Bengt Richter
On 3 Nov 2005 17:01:08 -0800, [EMAIL PROTECTED] wrote:

>Another question:  I am writing a sudoku solving program.  The
>'solving' part of is just multiple iterations.  It will take random
>numbers and keep switching it all around until a set of logic
>statements has been met (ie; all numbers in a row are not equal to each
>other) ... that's where my question comes in.
>
>Cellboard = my list for storing each row/column's data.
>
>Rather than writing
>
>cellboard[0] is not* (cellboard[1] and cellboard[2] and cellboard[3]
>and cellboard[4] ... cellboard[8])
>cellboard[1] is not (cellboard[0] and cellboard[2] and cellboard[3] and
>cellboard[4] ... cellboard[8])
>etc...
>
>* should this be != ?
>
>the above so that all the data in one row is not equal to each other,
>is there something I can write to make it simpler?  For example,
>(cellboard[0] is not cellboard[1] is not ... cellboard[8]) only worked
>for the numbers to the left and right of the cell - is there anyway I
>can expand this to cover all numbers in a set range?
>

UIAM if you have a list of items that are comparable and hashable, like 
integers,
you can make a set of the list, and duplicates will be eliminated in the set.
Therefore if the resulting set has the same number of members as the list it
was made from, you can conclude that the list contains no duplicates. E.g.,

 >>> cellboard = range(8)
 >>> cellboard
 [0, 1, 2, 3, 4, 5, 6, 7]
 >>> set(cellboard)
 set([0, 1, 2, 3, 4, 5, 6, 7])
 >>> len(set(cellboard))
 8
 >>> cellboard[2] = 7
 >>> cellboard
 [0, 1, 7, 3, 4, 5, 6, 7]
 >>> set(cellboard)
 set([0, 1, 3, 4, 5, 6, 7])
 >>> len(set(cellboard))
 7

So the test would be
 >>> len(set(cellboard))==len(cellboard)
 False
And after repairing the list to uniqueness of elements:
 >>> cellboard[2] = 2
 >>> len(set(cellboard))==len(cellboard)
 True
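
Wrapped up as a helper, the same check reads (untested sketch; the function
name is invented):

```python
def all_distinct(cells):
    # a row/column/box is valid when no value repeats
    return len(set(cells)) == len(cells)

ok = all_distinct([0, 1, 2, 3, 4, 5, 6, 7])
clash = all_distinct([0, 1, 7, 3, 4, 5, 6, 7])   # 7 appears twice
```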

HTH

Regards,
Bengt Richter


Re: Class Variable Access and Assignment

2005-11-04 Thread Bengt Richter
On 4 Nov 2005 08:23:05 GMT, Antoon Pardon <[EMAIL PROTECTED]> wrote:

>Op 2005-11-03, Magnus Lycka schreef <[EMAIL PROTECTED]>:
>> Antoon Pardon wrote:
>>> There is no instance variable at that point. How can it add 2, to
>>> something that doesn't exist at the moment.
>>
>> Because 'a += 1' is only a shorthand for 'a = a + 1' if a is an
>> immutable object? Anyway, the behaviour is well documented.
>>
>> http://docs.python.org/ref/augassign.html says:
>>
>> An augmented assignment expression like x += 1 can be rewritten as x = x 
>> + 1 to achieve a similar, but not exactly equal effect. In the augmented 
>> version, x is only evaluated once.
>
>Then couldn't we expect that the namespace resolution is also done
>only once?
>
>I say that if the introduction on += like operators implied that the
>same mentioning of a name would in some circumstances be resolved to
>two different namespaces, then such an introduction would better have
>not occured.
>
>Would it be too much to ask that in a line like.
>
>  x = x + 1.
>
>both x's would resolve to the same namespace?
>
I think I would rather seek consistency in terms of
order of evaluation and action. IOW, the right hand side
of an assignment is always evaluated before the left hand side,
and operator precedence and syntax defines order of access to names
in their expression context on either side.

The compilation of function bodies violates the above, even allowing
future (execution-wise) statements to influence the interpretation
of prior statements. This simplifies defining the local variable set,
and allows e.g. yield to change the whole function semantics, but
the practicality/purity ratio makes me uncomfortable ;-)

If there were bare-name properties, one could control the meaning
of x = x + 1 and x += 1, though of course one would need some way
to bind/unbind the property objects themselves to make them visible
as x or whatever names.

It might be interesting to have a means to push and pop objects
onto/off-of a name-space-shadowing stack (__nsstack__), such that the first 
place
to look up a bare name would be as an attribute of the top stack object, i.e.,
   
name = name + 1

if preceded by   

__nsstack__.append(my_namespace_object)

would effectively mean

my_namespace_object.name = my_namespace_object.name + 1

by way of logic like

if __nsstack__:
    setattr(__nsstack__[-1], 'name', getattr(__nsstack__[-1], 'name') + 1)
else:
    name = name + 1


Of course, my_namespace_object could be an instance of a class
that defined whatever properties or descriptors you wanted.
When you were done with that namespace, you'd just __nsstack__.pop()

If __nsstack__ is empty, then of course bare names would be looked
up as now.

BTW, __nsstack__ is not a literal proposal, just a way to illustrate the 
concept ;-)
OTOH, I suppose a function could have a reserved slot for a name space object 
stack
that wouldn't cost much run time to bypass with a machine language check for 
NULL.

BTW2, this kind of stack might play well with a future "with," to guarantee name
space popping. Perhaps "with" syntax could even be extended to make typical 
usage
slick ;-)
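
To make the idea concrete, here is a toy version of that lookup rule in plain
Python (purely illustrative -- __nsstack__ itself is not real, so an ordinary
list and a helper function stand in for it):

```python
class Namespace(object):
    pass

nsstack = []                       # stand-in for the proposed __nsstack__

def lookup(name, ordinary):
    # search the shadowing stack top-down before the ordinary namespace
    for obj in reversed(nsstack):
        if hasattr(obj, name):
            return getattr(obj, name)
    return ordinary[name]

ordinary = {'x': 0}
ns = Namespace()
ns.x = 41
nsstack.append(ns)
shadowed = lookup('x', ordinary) + 1   # found on the stack: 42
nsstack.pop()
plain = lookup('x', ordinary)          # stack empty: falls back to the dict
```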

Regards,
Bengt Richter


Re: Class Variable Access and Assignment

2005-11-04 Thread Bengt Richter
On Fri, 04 Nov 2005 02:59:35 +1100, Steven D'Aprano <[EMAIL PROTECTED]> wrote:

>On Thu, 03 Nov 2005 14:13:13 +, Antoon Pardon wrote:
>
>> Fine, we have the code:
>> 
>>   b.a += 2
>> 
>> We found the class variable, because there is no instance variable,
>> then why is the class variable not incremented by two now?
Because the class variable doesn't define a self-mutating __iadd__
(which is because it's an immutable int, of course). If you want
b.__dict__['a'] += 2 or b.__class__.__dict__['a'] += 2 you can
always write it that way ;-)

(Of course, you can use a descriptor to define pretty much whatever semantics
you want, when it comes to attributes).

>
>Because b.a += 2 expands to b.a = b.a + 2. Why would you want b.a = <something>

No, it doesn't expand like that. (Although, BTW, a custom import could
make it so by transforming the AST before compiling it ;-)

Note BINARY_ADD is not INPLACE_ADD:

 >>> def foo(): # for easy disassembly
 ... b.a += 2
 ... b.a = b.a + 2
 ...
 >>> import dis
 >>> dis.dis(foo)
   2   0 LOAD_GLOBAL  0 (b)
   3 DUP_TOP
    4 LOAD_ATTR    1 (a)
   7 LOAD_CONST   1 (2)
  10 INPLACE_ADD
  11 ROT_TWO
  12 STORE_ATTR   1 (a)

   3  15 LOAD_GLOBAL  0 (b)
  18 LOAD_ATTR    1 (a)
  21 LOAD_CONST   1 (2)
  24 BINARY_ADD
  25 LOAD_GLOBAL  0 (b)
  28 STORE_ATTR   1 (a)
  31 LOAD_CONST   0 (None)
  34 RETURN_VALUE

And BINARY_ADD calls __add__ and INPLACE_ADD calls __iadd__ preferentially.

About __ixxx__:
"""
These methods are called to implement the augmented arithmetic operations
(+=, -=, *=, /=, %=, **=, <<=, >>=, &=, ^=, |=).
These methods should attempt to do the operation in-place (modifying self)
and return the result (which could be, but does not have to be, self).
If a specific method is not defined, the augmented operation falls back
to the normal methods. For instance, to evaluate the expression x+=y,
where x is an instance of a class that has an __iadd__() method,
x.__iadd__(y) is called. If x is an instance of a class that does not define
a __iadd() method, x.__add__(y) and y.__radd__(x) are considered, as with
the evaluation of x+y. 
"""


> to correspond to b.__class__.a = <something>?
>
>I'm not saying that it couldn't, if that was the model for inheritance you
>decided to use. I'm asking why would you want it? What is your usage case
>that demonstrates that your preferred inheritance model is useful?

It can be useful to find-and-rebind (in the namespace where found) rather
than use separate rules for finding (or not) and binding. The tricks for
boxing variables in closures show there is useful functionality that
is still not as convenient to "spell" as could be imagined.
It is also useful to find and bind separately. In fact, IMO it's not
separate enough in some cases ;-)

I've wanted something like
x := expr
to spell "find x and rebind it to expr" (or raise NameError if not found).
Extending that to attributes and augassign,
b.a +:= 2
could mean find the "a" attribute, and in whatever attribute dict it's found,
rebind it there. Or raise an Exception for whatever failure is encountered.
This would be nice for rebinding closure variables as well. But it's been 
discussed,
like most of these things ;-)
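
Incidentally, even a mutating __iadd__ that returns self does not avoid the
separate store: the instance still ends up with its own binding to the (now
mutated) class attribute (sketch; the class names are invented):

```python
class Accum(object):
    def __init__(self, v):
        self.v = v
    def __iadd__(self, other):
        self.v += other       # mutate in place...
        return self           # ...and hand back the same object for the store

class B(object):
    a = Accum(1)

b = B()
b.a += 2    # finds the class attribute, mutates it, then STORE_ATTR
            # binds the very same object into the instance dict
```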

Regards,
Bengt Richter


Re: Class Variable Access and Assignment

2005-11-04 Thread Bengt Richter
On Thu, 03 Nov 2005 13:37:08 -0500, Mike Meyer <[EMAIL PROTECTED]> wrote:
[...]
>> I think it even less sane, if the same occurce of b.a refers to two
>> different objects, like in b.a += 2
>
>That's a wart in +=, nothing less. The fix to that is to remove +=
>from the language, but it's a bit late for that.
>
Hm, "the" fix? Why wouldn't e.g. treating augassign as shorthand for a source 
transformation
(i.e., asstgt <op>= expr  becomes by simple text substitution asstgt = asstgt 
<op> expr)
be as good a fix? Then we could discuss what

b.a = b.a + 2

should mean ;-)

OTOH, we could discuss how you can confuse yourself with the results of b.a += 2
after defining a class variable "a" as an instance of a class defining __iadd__ 
;-)

Or point out that you can define descriptors (or use property to make it easy)
to control what happens, pretty much in as much detail as you can describe 
requirements ;-)

Regards,
Bengt Richter


Re: Class Variable Access and Assignment

2005-11-04 Thread Bengt Richter
On 04 Nov 2005 11:04:58 +0100, Stefan Arentz <[EMAIL PROTECTED]> wrote:

>Antoon Pardon <[EMAIL PROTECTED]> writes:
>
>> Op 2005-11-03, Mike Meyer schreef <[EMAIL PROTECTED]>:
>> > Antoon Pardon <[EMAIL PROTECTED]> writes:
>> >>> What would you expect to get if you wrote b.a = b.a + 2?
>> >> I would expect a result consistent with the fact that both times
>> >> b.a would refer to the same object.
>> >
>> > Except they *don't*. This happens in any language that resolves
>> > references at run time.
>> 
>> Python doesn't resolve references at run time. If it did the following
>> should work.
>> 
>> a = 1
>> def f():
>>   a = a + 1
>> 
>> f()
>
>No that has nothing to do with resolving things at runtime. Your example
>does not work because the language is very specific about looking up
>global variables. Your programming error, not Python's shortcoming.
>
If someone has an old version of Python handy, I suspect that it used
to "work", and the "a" on the right hand side was the global "a" because
a local "a" hadn't been defined until the assignment, which worked to
produce a local binding of "a". Personally, I like that better than
the current way, because it follows the order of accesses implied
by the precedences in expression evaluation and statement execution.
But maybe I don't RC ;-)

Regards,
Bengt Richter


Re: Class Variable Access and Assignment

2005-11-04 Thread Bengt Richter
On 4 Nov 2005 11:09:36 GMT, Antoon Pardon <[EMAIL PROTECTED]> wrote:
[...]
>
>Take the code:
>
>  lst[f()] += 1
>
>Now let f be a function with a side effect, that in succession
>produces the positive integers starting with one.
>
>What do you think this should be equivallent to:
>
>  t = f()
>  lst[t] = lst[t] + 1
>
>or
>
>  lst[f()] = lst[f()] + 1
>
>If you think the environment can change between references then I
>suppose you prefer the second approach.
>
I am quite sympathetic to your probe of python semantics, but I
don't think the above is an argument that should be translated
to attribute assignment. BTW, ISTM that augassign (+=) is
a red herring here, since it's easy to make a shared class variable
that is augassigned apparently as you want, e.g.,

 >>> class shared(object):
 ... def __init__(self, v=0): self.v=v
 ... def __get__(self, *any): return self.v
 ... def __set__(self, _, v): self.v = v
 ...
 >>> class B(object):
 ... a = shared(1)
 ...
 >>> b=B()
 >>> b.a
 1
 >>> B.a
 1
 >>> b.a += 2
 >>> b.a
 3
 >>> B.a
 3
 >>> vars(b)
 {}
 >>> vars(b)['a'] = 'instance attr'
 >>> vars(b)
 {'a': 'instance attr'}
 >>> b.a
 3
 >>> b.a += 100
 >>> b.a
 103
 >>> B.a
 103
 >>> B.a = 'this could be prevented'
 >>> b.a
 'instance attr'
 >>> B.a
 'this could be prevented'

The spelled out attribute update works too
 >>> B.a = shared('alpha')
 >>> b.a
 'alpha'
 >>> b.a = b.a + ' beta'
 >>> b.a
 'alpha beta'
 >>> B.a
 'alpha beta'

But the instance attribute we forced is still there
 >>> vars(b)
 {'a': 'instance attr'}

You could have shared define __add__ and __iadd__ and __radd__ also,
for confusion to taste ;-)

Regards,
Bengt Richter


Re: Class Variable Access and Assignment

2005-11-04 Thread Bengt Richter
On Fri, 04 Nov 2005 09:24:41 -0500, Christopher Subich <[EMAIL PROTECTED]> 
wrote:

>Steven D'Aprano wrote:
>> On Thu, 03 Nov 2005 14:13:13 +, Antoon Pardon wrote:
>> 
>> 
>>>Fine, we have the code:
>>>
>>>  b.a += 2
>>>
>>>We found the class variable, because there is no instance variable,
>>>then why is the class variable not incremented by two now?
>> 
>> 
>> Because b.a += 2 expands to b.a = b.a + 2. Why would you want b.a = <something>
>>  to correspond to b.__class__.a = <something>?
>
>Small correction, it expands to b.a = B.a.__class__.__iadd__(b.a,2), 
>assuming all relevant quantities are defined.  For integers, you're 
>perfectly right.
But before you get to that, a (possibly inherited) type(b).a better
not have a __get__ method trumping __class__ and the rest ;-)

Regards,
Bengt Richter


Re: Class Variable Access and Assignment

2005-11-04 Thread Bengt Richter
On 04 Nov 2005 17:53:34 -0800, Paul Rubin <http://[EMAIL PROTECTED]> wrote:

>[EMAIL PROTECTED] (Bengt Richter) writes:
>> Hm, "the" fix? Why wouldn't e.g. treating augassign as shorthand for
>> a source transformation (i.e., asstgt <op>= expr becomes by simple
>> text substitution asstgt = asstgt <op> expr) be as good a fix? Then
>> we could discuss what
>
>Consider "a[f()] += 3".  You don't want to eval f() twice.

Well, if you accepted macro semantics IWT you _would_ want to ;-)

Hm, reminds me of typical adding of parens in macros to control precedence
in expressions ... so I tried
 >>> a = [0]
 >>> (a[0]) += 1
 SyntaxError: augmented assign to tuple literal or generator expression not 
possible

;-/

Regards,
Bengt Richter


Re: Class Variable Access and Assignment

2005-11-04 Thread Bengt Richter
On Fri, 04 Nov 2005 10:28:52 -0500, Christopher Subich <[EMAIL PROTECTED]> 
wrote:

>Antoon Pardon wrote:
>>>Since ints are immutable objects, you shouldn't expect the value of b.a
>>>to be modified in place, and so there is an assignment to b.a, not A.a.
>> 
>> 
>> You are now talking implementation details. I don't care about whatever
>> explanation you give in terms of implementation details. I don't think
>> it is sane that in a language multiple occurence of something like b.a
>> in the same line can refer to different objects
>> 
>
>This isn't an implementation detail; to leading order, anything that 
>impacts the values of objects attached to names is a specification issue.
>
>An implementation detail is something like when garbage collection 
>actually happens; what happens to:
>
>b.a += 2
>
>is very much within the language specification.  Indeed, the language 
>specification dictates that an instance variable b.a is created if one 
>didn't exist before; this is true no matter if type(b.a) == int, or if 
>b.a is some esoteric mutable object that just happens to define 
>__iadd__(self,type(other) == int).
But if it is an esoteric descriptor (or even a simple property, which is
a descriptor), the behaviour will depend on the descriptor, and an instance
variable can be created or not, as desired, along with any side effect you like.

Regards,
Bengt Richter


Re: Class Variable Access and Assignment

2005-11-05 Thread Bengt Richter
On Fri, 04 Nov 2005 21:14:17 -0500, Mike Meyer <[EMAIL PROTECTED]> wrote:

>[EMAIL PROTECTED] (Bengt Richter) writes:
>> On Thu, 03 Nov 2005 13:37:08 -0500, Mike Meyer <[EMAIL PROTECTED]> wrote:
>> [...]
>>>> I think it even less sane, if the same occurce of b.a refers to two
>>>> different objects, like in b.a += 2
>>>
>>>That's a wart in +=, nothing less. The fix to that is to remove +=
>>>from the language, but it's a bit late for that.
>>>
>> Hm, "the" fix? Why wouldn't e.g. treating augassign as shorthand for a 
>> source transformation
>> (i.e., asstgt <op>= expr  becomes by simple text substitution asstgt = 
>> asstgt <op> expr)
>> be as good a fix? Then we could discuss what
>>
>> b.a = b.a + 2
>>
>> should mean ;-)
>
>The problem with += is how it behaves, not how you treat it. But you
>can't treat it as a simple text substitution, because that would imply
>that asstgt gets evaluated twice, which doesn't happen.
I meant that it would _make_ that happen, and no one would wonder ;-)

BTW, if b.a is evaluated once each for __get__ and __set__, does that not
count as getting evaluated twice?

 >>> class shared(object):
 ... def __init__(self, v=0): self.v=v
 ... def __get__(self, *any): print '__get__'; return self.v
 ... def __set__(self, _, v): print '__set__'; self.v = v
 ...
 >>> class B(object):
 ... a = shared(1)
 ...
 >>> b=B()
 >>> b.a
 __get__
 1
 >>> b.a += 2
 __get__
 __set__
 >>> B.a
 __get__
 3

Same number of get/sets:

 >>> b.a = b.a + 10
 __get__
 __set__
 >>> b.a
 __get__
 13

I posted the disassembly in another part of the thread, but I'll repeat:

 >>> def foo():
 ... a.b += 2
 ... a.b = a.b + 2
 ...
 >>> import dis
 >>> dis.dis(foo)
   2   0 LOAD_GLOBAL  0 (a)
   3 DUP_TOP
    4 LOAD_ATTR    1 (b)
   7 LOAD_CONST   1 (2)
  10 INPLACE_ADD
  11 ROT_TWO
  12 STORE_ATTR   1 (b)

   3  15 LOAD_GLOBAL  0 (a)
  18 LOAD_ATTR    1 (b)
  21 LOAD_CONST   1 (2)
  24 BINARY_ADD
  25 LOAD_GLOBAL  0 (a)
  28 STORE_ATTR   1 (b)
  31 LOAD_CONST   0 (None)
  34 RETURN_VALUE

It looks like the thing that's done only once for += is the LOAD_GLOBAL (a)
but DUP_TOP provides the two copies of the reference which are
used either way with LOAD_ATTR followed by STORE_ATTR, which UIAM
lead to the loading of the (descriptor above) attribute twice -- once each
for the __GET__ and __SET__ calls respectively logged either way above.

>
>> OTOH, we could discuss how you can confuse yourself with the results of b.a 
>> += 2
>> after defining a class variable "a" as an instance of a class defining 
>> __iadd__ ;-)
>
>You may confuse yourself that way, I don't have any problems with it
>per se.
I should have said "one can confuse oneself," sorry ;-)
Anyway, I wondered about the semantics of defining __iadd__, since it seems to 
work just
like __add__ except for allowing you to know what source got you there. So 
whatever you
return (unless you otherwise intercept instance attribute binding) will get 
bound to the
instance, even though you internally mutated the target and return None by 
default (which
gives me the idea of returning NotImplemented, but (see below) even that gets 
bound :-(

BTW, semantically does/should not __iadd__ really implement a _statement_ and 
therefore
have no business returning any expression value to bind anywhere?

 >>> class DoIadd(object):
 ...     def __init__(self, v=0, **kw):
 ...         self.v = v
 ...         self.kw = kw
 ...     def __iadd__(self, other):
 ...         print '__iadd__(%r, %r) => '%(self, other),
 ...         self.v += other
 ...         retv = self.kw.get('retv', self.v)
 ...         print repr(retv)
 ...         return retv
 ...
 >>> class B(object):
 ... a = DoIadd(1)
 ...
 >>> b=B()
 >>> b.a
 <__main__.DoIadd object at 0x02EF374C>
 >>> b.a.v
 1

The normal(?) mutating way:
 >>> b.a += 2
 __iadd__(<__main__.DoIadd object at 0x02EF374C>, 2) =>  3
 >>> vars(b)
 {'a': 3}
 >>> B.a
 <__main__.DoIadd object at 0x02EF374C>
 >>> B.a.v
 3

Now fake attempt to mutate self without returning anything (=> None)
 >>> B.a = DoIadd(1, retv=None) # naive default
 >>> b.a
 3
Oops, remove instance attr
 >>> del b.a
 >>> b.a

Re: Class Variable Access and Assignment

2005-11-05 Thread Bengt Richter
On Sat, 05 Nov 2005 14:37:19 +1100, Steven D'Aprano <[EMAIL PROTECTED]> wrote:

>On Sat, 05 Nov 2005 00:25:34 +, Bengt Richter wrote:
>
>> On Fri, 04 Nov 2005 02:59:35 +1100, Steven D'Aprano <[EMAIL PROTECTED]> 
>> wrote:
>> 
>>>On Thu, 03 Nov 2005 14:13:13 +, Antoon Pardon wrote:
>>>
>>>> Fine, we have the code:
>>>> 
>>>>   b.a += 2
>>>> 
>>>> We found the class variable, because there is no instance variable,
>>>> then why is the class variable not incremented by two now?
>> Because the class variable doesn't define a self-mutating __iadd__
>> (which is because it's an immutable int, of course). If you want
>> b.__dict__['a'] += 2 or b.__class__.__dict__['a'] += 2 you can
>> always write it that way ;-)
>> 
>> (Of course, you can use a descriptor to define pretty much whatever semantics
>> you want, when it comes to attributes).
>> 
>>>
>>>Because b.a += 2 expands to b.a = b.a + 2. Why would you want b.a =
>> 
>> No, it doesn't expand like that. (Although, BTW, a custom import could
>> make it so by transforming the AST before compiling it ;-)
>> 
>> Note BINARY_ADD is not INPLACE_ADD:
>
>Think about *what* b.a += 2 does, not *how* it does it. Perhaps for some
what it does, or what in the abstract it was intended to do? (which we need
BDFL channeling to know for sure ;-)

It looks like it means, "add two to <what b.a refers to>". I think Antoon
is unhappy that <where b.a is bound> is not determined once for the one b.a
expression in the statement. I sympathize, though it's a matter of defining
what b.a += 2 is really intended to mean. 
The parses are certainly distinguishable:

 >>> import compiler
 >>> compiler.parse('b.a +=2','exec').node
 Stmt([AugAssign(Getattr(Name('b'), 'a'), '+=', Const(2))])
 >>> compiler.parse('b.a = b.a + 2','exec').node
 Stmt([Assign([AssAttr(Name('b'), 'a', 'OP_ASSIGN')], Add((Getattr(Name('b'), 
'a'), Const(2))))])

Which I think leads to the different (BINARY_ADD vs INPLACE_ADD) code, which 
probably really
ought to have a conditional STORE_ATTR for the result of INPLACE_ADD, so that 
if __iadd__
was defined, it would be assumed that the object took care of everything 
(normally mutating itself)
and no STORE_ATTR should be done. But that's not the way it works now. (See 
also my reply to Mike).

Perhaps all types that want to be usable with inplace ops ought to inherit from 
some base providing
that, and there should never be a return value. This would be tricky for 
immutables though, since
re-binding is necessary, and the __iadd__ method would have to be passed the 
necessary binding context
and methods. Probably too much of a rewrite to be practical.

>other data type it would make a difference whether the mechanism was
>BINARY_ADD (__add__) or INPLACE_ADD (__iadd__), but in this case it does
>not. Both of them do the same thing.
Unfortunately you seem to be right in this case.
>
>Actually, no "perhaps" about it -- we've already discussed the case of
>lists.
Well, custom objects have to be considered too. And where attribute access
is involved, descriptors.

>
>Sometimes implementation makes a difference. I assume BINARY_ADD and
>INPLACE_ADD work significantly differently for lists, because their
>results are significantly (but subtly) different:
>
>py> L = [1,2,3]; id(L)
>-151501076
>py> L += [4,5]; id(L)
>-151501076
>py> L = L + []; id(L)
>-151501428
>
Yes.
>
>But all of this is irrelevant to the discussion about binding b.a
>differently on the left and right sides of the equals sign. We have
>discussed that the behaviour is different with mutable objects, because
>they are mutable -- if I recall correctly, I was the first one in this
>thread to bring up the different behaviour when you append to a list
>rather than reassign, that is, modify the class attribute in place.
>
>I'll admit that my choice of terminology was not the best, but it wasn't
>misleading. b.a += 2 can not modify ints in place, and so the
>effect of b.a += 2 is the same as b.a = b.a + 2, regardless of what
>byte-codes are used, or even what C code eventually implements that
>add-and-store.
It is so currently, but that doesn't mean that it couldn't be otherwise.
I think there is some sense to the idea that b.a should be re-bound in
the same namespace where it was found with the single apparent evaluation
of "b.a" in "b.a += 2" (which incidentally is Antoon's point, I think).
This is just for augassign, of course.

OTOH, this would be find

Re: re sub help

2005-11-05 Thread Bengt Richter
On 4 Nov 2005 22:49:03 -0800, [EMAIL PROTECTED] wrote:

>hi
>
>i have a string :
>a =
>"this\nis\na\nsentence[startdelim]this\nis\nanother[enddelim]this\nis\n"
>
>inside the string, there are "\n". I don't want to substitute the '\n'
>in between
>the [startdelim] and [enddelim] to ''. I only want to get rid of the
>'\n' everywhere else.
>
>i have read the tutorial and came across negative/positive lookahead
>and i think it can solve the problem.but am confused on how to use it.
>anyone can give me some advice? or is there better way other than
>lookaheads ...thanks..
>

Sometimes splitting and processing the pieces selectively can be a solution, 
e.g.,
if delimiters are properly paired, splitting (with parens to keep matches) 
should
give you a repeating pattern modulo 4 of
 <"everywhere else" as you said> ...

 >>> a = "this\nis\na\nsentence[startdelim]this\nis\nanother[enddelim]this\nis\n"
 >>> import re
 >>> splitter = re.compile(r'(?s)(\[startdelim\]|\[enddelim\])')
 >>> sp = splitter.split(a)
 >>> sp
 ['this\nis\na\nsentence', '[startdelim]', 'this\nis\nanother', '[enddelim]', 'this\nis\n']
 >>> ''.join([(lambda s:s, lambda s:s.replace('\n',''))[not i%4](s) for i,s in enumerate(sp)])
 'thisisasentence[startdelim]this\nis\nanother[enddelim]thisis'
 >>> print ''.join([(lambda s:s, lambda s:s.replace('\n',''))[not i%4](s) for i,s in enumerate(sp)])
 thisisasentence[startdelim]this
 is
 another[enddelim]thisis

I haven't checked for corner cases, but HTH
Maybe I'll try two pairs of delimiters:

 >>> a += "\n33\n4\n[startdelim]\n77\n888[enddelim]\n00\n"
 >>> sp = splitter.split(a)
 >>> print ''.join([(lambda s:s, lambda s:s.replace('\n',''))[not i%4](s) for i,s in enumerate(sp)])
 thisisasentence[startdelim]this
 is
 another[enddelim]thisis334[startdelim]
 77
 888[enddelim]00

which came from
 >>> sp
 ['this\nis\na\nsentence', '[startdelim]', 'this\nis\nanother', '[enddelim]', 'this\nis\n\n33\n4\n', '[startdelim]', '\n77\n888', '[enddelim]', '\n00\n']

The replacing happened where not i%4 was true:

 >>> for i,s in enumerate(sp): print '%6s: %r'%(not i%4,s)
 ...
   True: 'this\nis\na\nsentence'
  False: '[startdelim]'
  False: 'this\nis\nanother'
  False: '[enddelim]'
   True: 'this\nis\n\n33\n4\n'
  False: '[startdelim]'
  False: '\n77\n888'
  False: '[enddelim]'
   True: '\n00\n'
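The same modulo-4 idea can be packaged as an explicit in/out toggle over the split pieces; a sketch (the function name is mine, assuming properly paired delimiters):

```python
import re

def strip_outside_newlines(text, start='[startdelim]', end='[enddelim]'):
    # Split keeping the delimiters; toggle an inside/outside flag and
    # strip newlines only from pieces outside a delimited region.
    parts = re.split(r'(\[startdelim\]|\[enddelim\])', text)
    out = []
    inside = False
    for p in parts:
        if p == start:
            inside = True
        elif p == end:
            inside = False
        elif not inside:
            p = p.replace('\n', '')
        out.append(p)
    return ''.join(out)

a = "this\nis\na\nsentence[startdelim]this\nis\nanother[enddelim]this\nis\n"
assert (strip_outside_newlines(a) ==
        'thisisasentence[startdelim]this\nis\nanother[enddelim]thisis')
```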

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Class Variable Access and Assignment

2005-11-06 Thread Bengt Richter
On Sun, 06 Nov 2005 15:17:18 +1100, Steven D'Aprano <[EMAIL PROTECTED]> wrote:

>On Sat, 05 Nov 2005 18:14:03 -0800, Paul Rubin wrote:
>
>>> instance.attribute sometimes reading from the class attribute is a feature
>>> of inheritance; instance.attribute always writing to the instance is a
>>> feature of OOP; instance.attribute sometimes writing to the instance and
>>> sometimes writing to the class would be, in my opinion, not just a wart
>>> but a full-blown misfeature.
>> 
>> But that is what you're advocating: x.y+=1 writes to the instance or
>> the class depending on whether x.y is mutable or not.
>
>Scenario 1:
>
>Pre-conditions: class.a exists; instance.a exists.
>Post-conditions: class.a unchanged; instance.a modified.
>
>I give that a big thumbs up, expected and proper behaviour.
>
>Scenario 2:
>
>Pre-conditions: class.a exists and is immutable; instance.a does not
>exist.
>Post-conditions: class.a unchanged; instance.a exists.
>
>Again, expected and proper behaviour.
>
>(Note: this is the scenario that Antoon's proposed behaviour would change
>to class.a modified; instance.a does not exist.)
>
>Scenario 3:
>
>Pre-conditions: class.a exists and is mutable; instance.a exists.
>Post-conditions: class.a unchanged; instance.a is modified.
>
>Again, expected and proper behaviour.
>
>Scenario 4:
>
>Pre-conditions: class.a exists and is mutable; instance.a does
>not exist.
>Post-conditions: class.a modified; instance.a does not exist.
>
Are you saying the above is what happens or what should happen or not happen?
It's not what happens. Post-conditions are that class.a is modified AND
instance.a gets a _separate_ reference to the same result. Note:

 Python 2.4b1 (#56, Nov  3 2004, 01:47:27)
 [GCC 3.2.3 (mingw special 20030504-1)] on win32
 Type "help", "copyright", "credits" or "license" for more information.
 >>> class A(object):
 ... a = []
 ...
 >>> b=A()
 >>> id(A.__dict__['a'])
 49230700
 >>> b.a += [123]
 >>> id(A.__dict__['a'])
 49230700
 >>> id(b.__dict__['a'])
 49230700
 >>> (b.__dict__['a'])
 [123]
 >>> (A.__dict__['a'])
 [123]

Let's eliminate the inheritable class variable A.a:

 >>> del A.a
 >>> b.a
 [123]
 >>> id(b.__dict__['a'])
 49230700
 >>> vars(b)
 {'a': [123]}

Make sure we did eliminate A.a
 >>> vars(A)
 <dictproxy object at ...>
 >>> vars(A).keys()
 ['__dict__', '__module__', '__weakref__', '__doc__']

Is that the "wart" you were thinking of, or are you actually happier? ;-)

>Well, that is a wart. It is the same wart, and for the same reasons, as
>the behaviour of:
>
>def function(value=[]):
>value.append(None)
IMO that's not a wart at all, that's a direct design decision, and it's
different from the dual referencing that happens in Scenario 4.

>
>I can live with that. It is a familiar wart, and keeps inheritance of
>attributes working the right way. And who knows? If your attributes are
>mutable, AND you want Antoon's behaviour, then you get it for free just by
>using b.a += 1 instead of b.a = b.a + 1.
Not quite, because there is no way to avoid the binding of the __iadd__
return value to b.a by effective setattr (unless you make type(b).a
a descriptor that intercepts the attempt -- see another post for example).
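A sketch of such an intercepting descriptor (names are mine): it lets the setattr phase pass only when the bound value is the very object list.__iadd__ returned, i.e. the same mutated list, and rejects any genuine rebinding.

```python
class NoRebind(object):
    # Data descriptor holding a class-level mutable value.
    def __init__(self, value):
        self.value = value
    def __get__(self, obj, objtype=None):
        return self.value
    def __set__(self, obj, value):
        # b.a += [x] mutates in place, then assigns the __iadd__ result
        # back; that result is the same list object, so let it pass.
        if value is not self.value:
            raise AttributeError("rebinding intercepted")

class A(object):
    a = NoRebind([])

b = A()
b.a += [123]                       # in-place extend; setattr intercepted
assert 'a' not in b.__dict__       # no separate instance reference created
assert A.__dict__['a'].value == [123]
```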

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python doc problem example: gzip module (reprise)

2005-11-06 Thread Bengt Richter
On 05 Nov 2005 19:19:29 -0800, Paul Rubin <http://[EMAIL PROTECTED]> wrote:

>Mike Meyer <[EMAIL PROTECTED]> writes:
>> > It's only -because- of those licenses that there's any reason not to
>> > bundle.
>> 
>> Actually, there are other reasons, just as there are reasons besides
>> licensing for not simply including third party libraries into the
>> standard library.
>
>I'm not talking about 3rd party libraries, I'm talking about 3rd party
>documentation for modules that are already in the Python standard
>library.  For example, if someone wrote a good Tkinter manual that
>were licensed in a way that the PSF could drop it into the Python
>distro, then PSF should certainly consider including it.  The same
>goes for good docs about urllib2, or various other modules that
>currently have lousy docs.
>
>> > I found 
>> >   http://infohost.nmt.edu/tcc/help/lang/python/tkinter.html
>> > to be a pretty good tutorial, though incomplete as a reference.
>> 
>> Thanks for the URL, but that's just a short list of links, most of
>> which I've already seen.
>
>Sorry, I meant:
>
>  http://infohost.nmt.edu/tcc/help/pubs/tkinter/ (html)
>  http://www.nmt.edu/tcc/help/pubs/tkinter.pdf   (pdf of same)
>
>You've probably seen this manual already.
If not, I'll second the recommendation for the pdf. It's not complete, but
it's quite useful and pretty easy to use. Hm, seems to be updated since I
downloaded a copy, guess I'll grab the newest ;-) Hm2, it doubled in size!

The creation dates are
*** FILE tkinter.pdf ***
/CreationDate (D:20030416170500)

*** FILE tkinter2.pdf ***
/CreationDate (D:20050803114234)

So I guess it could double in 2 years 4 months. I'll have to look into it.

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Can't instantiate class

2005-11-06 Thread Bengt Richter
On Sun, 06 Nov 2005 09:47:04 -0500, David Mitchell <[EMAIL PROTECTED]> wrote:

>Thanks for your prompt reply.
>
>Ok, so If use your first suggestion (db = DataUtil.DataUtil()
>), I get this error:
>
>AttributeError: 'module' object has no attribute 'DataUtil'
>
Have you looked to see what DataUtil you are actually importing?
E.g., after import DataUtil, put a print repr(DataUtil.__file__).
Is there a .pyc shadowing the .py you want? Is it from an unexpected
directory? Have you looked at the search path that is in effect when you
import? E.g., to print the list of paths searched to find DataUtil.pyc (or,
if that is nonexistent or not up to date, DataUtil.py), you could do this:
 import sys
 for p in sys.path: print p

And then have you checked whether the above error message is telling the truth,
i.e., that indeed DataUtil does not define DataUtil.DataUtil?

Try doing help(DataUtil).
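A quick way to see every plain file on sys.path that could satisfy the import (helper name is mine; it ignores packages and zip imports) is something like:

```python
import os
import sys

def find_module_files(modname):
    # Scan sys.path for .py/.pyc files that `import modname` could pick
    # up; a stale .pyc earlier on the path can shadow the .py you edit.
    hits = []
    for d in sys.path:
        for ext in ('.py', '.pyc'):
            cand = os.path.join(d or os.getcwd(), modname + ext)
            if os.path.isfile(cand):
                hits.append(cand)
    return hits
```

find_module_files('DataUtil') then lists the candidates in search order; the first hit is roughly what the import machinery will use.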

>If I try importing the class directly (from DataUtil import DataUtil), 
>I get this error:
>
>ImportError: cannot import name DataUtil
>
Sure, the first error message would predict the latter one ;-)

>
>Could these errors have something to do with the fact that I am doing 
>this through mod_python?
Could be, yes. I haven't used it, but I would guess it's a possibility. A server
will generally be set up to run in a different environment than your normal
login environment, so it's possible/probable that it has a different sys.path
than you normally have. Even if the path is not textually different, if its
first element is '' (indicating the current working directory), that will
generally be a different directory from your normal login directory, depending
on server config for responding to particular urls.

It's possible to set up apache to run cgi impersonating a particular user
account instead of the usual "nobody" or such (which generally has restricted
file access), but it's probably "nobody" or some special user/group designed
for security purposes, so you might want to check permissions on the files the
server is supposed to be able to access for r, w, or x.

There should be some standard test cgi stuff that will tell you about the
environment. And hopefully also some wrapper to catch exceptions that might
otherwise silently get lost (or show up in server error logs -- have you
looked there?).

You might want to put a try/except around your whole code, and burp out some
carefully legal message page for the browser in case you catch something,
e.g., if there were some exception in the DataUtil class body that prevented
DataUtil.DataUtil from being defined. (And look for bare except: clauses or
other exception handling that might be throwing away a DataUtil definition
exception.)

HTH

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Class Variable Access and Assignment

2005-11-06 Thread Bengt Richter
On Sun, 06 Nov 2005 12:23:02 -0500, Christopher Subich <[EMAIL PROTECTED]> 
wrote:

>Bengt Richter wrote:
>> On Fri, 04 Nov 2005 10:28:52 -0500, Christopher Subich <[EMAIL PROTECTED]> 
>> wrote:
>
>>>is very much within the language specification.  Indeed, the language 
>>>specification dictates that an instance variable b.a is created if one 
>>>didn't exist before; this is true no matter if type(b.a) == int, or if 
>>>b.a is some esoteric mutable object that just happens to define 
>>>__iadd__(self,type(other) == int).
>> 
>> But if it is an esoteric descriptor (or even a simple property, which is
>> a descriptor), the behaviour will depend on the descriptor, and an instance
>> variable can be created or not, as desired, along with any side effect you 
>> like.
>
>Right, and that's also language-specification.  Voodoo, yes, but 
>language specification nonetheless. :)

I guess http://docs.python.org/ref/augassign.html is the spec.
I notice its example at the end uses an old-style class, so maybe
it's understandable that when it talks about getattr/setattr, it doesn't
mention the possible role of descriptors, nor narrow the meaning of
"evaluate once" for a.x to exclude type(a).x in the setattr phase of execution.

I.e., if x is a descriptor, "evaluate" apparently means only

type(a).x.__get__(a, type(a))

since that is semantically getting the value behind x, and so both of the ".x"s 
in

type(a).x.__set__(a, type(a).x.__get__(a, type(a)).__add__(1))  # (or __iadd__ if defined, I think ;-)

don't count as "evaluation" of the "target" x, even though it means that a.x
got evaluated twice (via getattr and setattr, to get the same descriptor
object, which was used two different ways).

I think the normal, non-descriptor case still results in (optimized) probes
for type(a).x.__get__ and type(a).x.__set__ before using a.__dict__['x'].

ISTM also that it's not clear that defining __iadd__ does _not_ prevent the
setattr phase from going ahead. I.e., a successful __iadd__ in-place mutation
does not happen "instead" of the setattr.
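That last point is easy to observe with a property that records its calls: even when __iadd__ mutates the value in place, the setattr phase still runs (names are mine):

```python
events = []

class Tracked(object):
    def _get(self):
        events.append('get')
        return self._x
    def _set(self, value):
        events.append('set')
        self._x = value
    x = property(_get, _set)

t = Tracked()
t._x = [1]                        # plain attribute; bypasses the property
t.x += [2]                        # augmented assignment through the property
assert events == ['get', 'set']   # setattr ran despite the in-place mutation
assert t._x == [1, 2]
```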

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Running autogenerated code in another python instance

2005-11-06 Thread Bengt Richter
On Thu, 03 Nov 2005 14:23:53 +1000, Paul Cochrane <[EMAIL PROTECTED]> wrote:

>On Wed, 02 Nov 2005 06:33:28 +0000, Bengt Richter wrote:
>
>> On Wed, 2 Nov 2005 06:08:22 + (UTC), Paul Cochrane <[EMAIL PROTECTED]> 
>> wrote:
>> 
>>>Hi all,
>>>
>>>I've got an application that I'm writing that autogenerates python code
>>>which I then execute with exec().  I know that this is not the best way to
>>>run things, and I'm not 100% sure as to what I really should do.  I've had a
>>>look through Programming Python and the Python Cookbook, which have given me
>>>ideas, but nothing has gelled yet, so I thought I'd put the question to the
>>>community.  But first, let me be a little more detailed in what I want to
>>>do:
>>>
>
>Bengt,
>
>Thanks for your reply!
>
>> It's a little hard to tell without knowing more about your
>> user input (command language?) syntax that is translated to
>> or feeds the process that "autogenerates python code".
>Ok, I'll try and clarify things as much as I can.
>
[...snip great reply...]

I owe you another reply, but I started and I couldn't spend the time to
do it justice. But in the meanwhile, I would suggest thinking about how
the MVC (model-view-controller) concept might help you factor things.
ISTM you are already part way there. See

http://en.wikipedia.org/wiki/MVC

for some infos. To that I'd add that if you want a top level interactive
visualization tool, you might want to look at it like a kind of IDE, where you
might want a few workspace/project kind of top level commands to help manage
what you would otherwise do by way of manually creating directory subtrees for
various things etc. Anyway, if you want to use template code and edit it and
then compile and run it, you could select templates and make working copies
automatically and invoke some favorite editor on it from the top level command
interpreter. Having these pieces consistently arranged in projects/workspaces
and shared template spaces etc. would make it a single command to create a
fresh space and not worry about colliding with something you did just to show
someone a little demo, etc. This is not the central topic, but good usability
is nice ;-)

That way if you just wanted to re-run something, you'd just select it and skip
calling the editor. Or if a step was to generate data, you could either create
the data source program by several steps, or possibly just go on to define or
invoke a visualization step with a particular renderer and/or output, knowing
that the data source was already set up in a standard way. You could also
consider borrowing unix piping/redirection concepts for some command syntax,
for composition of standard interface actions (not to mention invoking the
real thing in a subprocess when appropriate). Just free-associating here ;-)


Anyway, gotta go for now, sorry.

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Class Variable Access and Assignment

2005-11-07 Thread Bengt Richter
On Mon, 07 Nov 2005 12:05:40 +0100, Magnus Lycka <[EMAIL PROTECTED]> wrote:

>First of all, I've still not heard any sensible suggestions
>about a saner behaviour for augmented assignment or for the
>way Python searches the class scope after the instance scope.
A nit, but a sizeable one: For new-style classes, the class scope
is searched first for a descriptor that may trump the instance logic.
>
>What do you suggest?
>
>Today, x += n acts just as x = x + n if x is immutable.
>Do you suggest that this should change?
A descriptor allows you to make it do as you like, so it's
a matter of discussing default behavior, not what you
are locked into (although costs re optimization could be a topic).

>

>Today, instance.var will look for var in the class
>scope if it didn't find it in the instance scope. Do
>you propose to change this?
It is already changed, for new-style classes. It is only if
a data descriptor is NOT found in the class hierarchy that
an existing instance variable is accessed as "usual".

>
>Or, do you propose that we should have some second order
>effect that makes the combination of instance.var += n
>work in such a way that these features are no longer
>orthogonal?
I don't think he is proposing anything, just defending against
what he considers misinterpretations of what he is saying.
Given how hard it is to say ANYTHING and be understood EXACTLY,
this tends towards a pursuit of quantum nits ;-)
I suspect we all experience the emotions relevant to being misunderstood;
we just stop at different nit granularities (modulo horn locking ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Class Variable Access and Assignment

2005-11-07 Thread Bengt Richter
On 7 Nov 2005 08:38:49 GMT, Antoon Pardon <[EMAIL PROTECTED]> wrote:

>Op 2005-11-04, Magnus Lycka schreef <[EMAIL PROTECTED]>:
>>
[...]
>> Sure, Python has evolved and grown for about 15 years, and
>> backward compatibility has always been an issue, but the
>> management and development of Python is dynamic and fairly
>> open-minded. If there had been an obvious way to change this
>> in a way that solved more problems than it caused, I suspect
>> that change would have happened already.
>
>Fine I can live with that. 
>
Amen ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Threading-> Stopping

2005-11-07 Thread Bengt Richter
On Mon, 07 Nov 2005 13:49:35 +, Alan Kennedy <[EMAIL PROTECTED]> wrote:

>[Tuvas]
>> Is there a way to stop a thread with some command like t.stop()? Or any
>> other neat way to get around it? Thanks!
>
>Good question.
>
>And one that gets asked so often, I ask myself why it isn't in the FAQ?
>
>http://www.python.org/doc/faq/library.html
>
>It really should be in the FAQ. Isn't that what FAQs are for?
>
>Maybe the FAQ needs to be turned into a wiki?
>
Maybe when really good answers to questions get posted, we could edit it
down to a final version candidate that we all agree on, and when agreed,
make the final post with a tag line that google will recognize and that
can be a tag line for python newsgroup gems.

Sort of like a wiki-within-newsgroup in effect. E.g., "Tim Peters"
site:python.org gets a lot of interesting stuff. Likewise Martelli, though
the volume is daunting either way (140,000 & 32,400 resp ;-)

What about "python-newsgroup-faq" site:python.org? If we avoid the tag line in
all but finalized posts, that plus some additional search-narrowing via google
would probably be pretty effective. But this is a social engineering problem
more than technical ;-)

BTW, would such a thing appropriately be defined in a process PEP?

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: struct.calcsize problem

2005-11-07 Thread Bengt Richter
On 7 Nov 2005 15:27:06 -0800, "Chandu" <[EMAIL PROTECTED]> wrote:

>In using the following struct format I get the size as 593. The same C
>struct is 590 if packed on byte boundary and 596 when using pragma
>pack(4). I am using pack(4) and added 3 spares at the end to get by.
>   hdrFormat = '16s 32s 32s B 8s H 8s H 4s H H 20s 64s 64s 64s 32s 32s
>64s L L B B B B B 64s 64s'
>
>Any ideas on what I am doing wrong?
>
Looks to me like you are getting default native byte order and _alignment_
and some pad bytes are getting added in. For native order with no padding,
try prefixing the format string with '='

 
 >>> hdrFormat
 '16s 32s 32s B 8s H 8s H 4s H H 20s 64s 64s 64s 32s 32s 64s L L B B B B B 64s 64s'

If you add up the individual sizes, padding doesn't happen, apparently:

 >>> map(struct.calcsize, hdrFormat.split())
 [16, 32, 32, 1, 8, 2, 8, 2, 4, 2, 2, 20, 64, 64, 64, 32, 32, 64, 4, 4, 1, 1, 1, 1, 1, 64, 64]
 >>> sum(map(struct.calcsize, hdrFormat.split()))
 590

But default:
 >>> struct.calcsize(hdrFormat)
 593
Apparently it's native, with native alignment, & I get the same as you:
 >>> struct.calcsize('@'+hdrFormat)
 593
Whereas native order standard (no pad) alignment is:
 >>> struct.calcsize('='+hdrFormat)
 590
Little endian, standard alignment:
 >>> struct.calcsize('<'+hdrFormat)
 590
Big endian, standard alignment
 >>> struct.calcsize('>'+hdrFormat)
 590
Network (big endian), standard alignment:
 >>> struct.calcsize('!'+hdrFormat)
 590

I guess if you want alignment for anything non-native, you have to specify
pad bytes where you need them (with the x format character).
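The effect is visible with a two-field format: on typical platforms a native '@' inserts a pad byte so the H starts on an even offset, while '=' (and '<', '>', '!') pack tightly, and an explicit 'x' reproduces the padded layout portably.

```python
import struct

assert struct.calcsize('=BH') == 3   # standard alignment: no padding
assert struct.calcsize('<BH') == 3
assert struct.calcsize('>BH') == 3
# Native alignment pads the B out to the short's boundary (typically 2):
assert struct.calcsize('@BH') == struct.calcsize('=BxH')
# An explicit 'x' pad byte gives the same layout with standard alignment:
assert struct.calcsize('=BxH') == 4
```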

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regular expression question -- exclude substring

2005-11-07 Thread Bengt Richter
On Mon, 7 Nov 2005 16:38:11 -0800, James Stroud <[EMAIL PROTECTED]> wrote:

>On Monday 07 November 2005 16:18, [EMAIL PROTECTED] wrote:
>> Ya, for some reason your non-greedy "?" doesn't seem to be taking.
>> This works:
>>
>> re.sub('(.*)(00.*?01) target_mark', r'\2', your_string)
>
>The non-greedy is actually acting as expected. This is because non-greedy 
>operators are "forward looking", not "backward looking". So the non-greedy 
>finds the start of the first start-of-the-match it comes accross and then 
>finds the first occurrence of '01' that makes the complete match, otherwise 
>the greedy operator would match .* as much as it could, gobbling up all '01's 
>before the last because these match '.*'. For example:
>
>py> rgx = re.compile(r"(00.*01) target_mark")
>py> rgx.findall('00 noise1 01 noise2 00 target 01 target_mark 00 dowhat 01')
>['00 noise1 01 noise2 00 target 01 target_mark 00 dowhat 01']
>py> rgx = re.compile(r"(00.*?01) target_mark")
>py> rgx.findall('00 noise1 01 noise2 00 target 01 target_mark 00 dowhat 01')
>['00 noise1 01 noise2 00 target 01', '00 dowhat 01']
>
>My understanding is that backward looking operators are very resource 
>expensive to implement.
>
If the delimiting strings are fixed, we can use plain python string methods, 
e.g.,
(not tested beyond what you see ;-)

 >>> s = "00 noise1 01 noise2 00 target 01 target_mark"

 >>> def findit(s, beg='00', end='01', tmk=' target_mark'):
 ... start = 0
 ... while True:
 ... t = s.find(tmk, start)
 ... if t<0: break
 ... start = s.rfind(beg, start, t)
 ... if start<0: break
 ... e = s.find(end, start, t)
 ... if e+len(end)==t: # _just_ after
 ... yield s[start:e+len(end)]
 ... start = t+len(tmk)
 ...
 >>> list(findit(s))
 ['00 target 01']
 >>> s2 = s + ' garbage noise3 00 almost 01  target_mark 00 success 01 target_mark'
 >>> list(findit(s2))
 ['00 target 01', '00 success 01']

(I didn't enforce exact adjacency the first time; obviously it would be more
efficient to search for end+tmk instead of tmk, and then back to beg and
forward to end ;-)

If there can be spurious target_marks, and tricky matching spans, additional 
logic may be needed.
Too lazy to think about it ;-)
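For the record, it can also be done with re alone by "tempering" the dot so the match cannot run past another start delimiter, anchoring on the mark with a lookahead (the pattern is mine):

```python
import re

# Match '00 ... 01' immediately followed by ' target_mark', where the
# '...' may not contain another '00' (the (?!00). "tempered dot").
rgx = re.compile(r'00(?:(?!00).)*01(?= target_mark)', re.S)

s = '00 noise1 01 noise2 00 target 01 target_mark 00 dowhat 01'
assert rgx.findall(s) == ['00 target 01']
```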

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: overloading *something

2005-11-08 Thread Bengt Richter
On Mon, 7 Nov 2005 20:39:46 -0800, James Stroud <[EMAIL PROTECTED]> wrote:

>On Monday 07 November 2005 20:21, Robert Kern wrote:
>> James Stroud wrote:
>> > Hello All,
>> >
>> > How does one make an arbitrary class (e.g. class myclass(object)) behave
>> > like a list in method calls with the "*something" operator? What I mean
>> > is:
>> >
>> > myobj = myclass()
>> >
>> > doit(*myobj)
>> >
>> > I've looked at getitem, getslice, and iter. What is it if not one of
>> > these?
>> >
>> > And, how about the "**something" operator?
>>
>> Avoiding magic at the expense of terseness, I would do something like
>> the following:
>>
>>   class myclass(object):
>> def totuple(self):
>>   ...
>> def todict(self):
>>   ...
>>
>>   myargs = myclass()
>>   mykwds = myclass()
>>
>>   doit(*myargs.totuple(), **mykwds.todict())
>
>Actually, I retried __iter__ and it worked. I'm not sure how I screwed it up 
>before. So I'm happy to report a little "magic":
>
>py> def doit(*args):
>...   print args
>...
>py> class bob:
>...   def __init__(self, length):
>... self.length = length
>...   def __iter__(self):
>... return iter(xrange(self.length))
>...
>py> b = bob(8)
>py> list(b)
>[0, 1, 2, 3, 4, 5, 6, 7]
>py> doit(*b)
>(0, 1, 2, 3, 4, 5, 6, 7)
>
I think you can also just define __getitem__ if that's handier. E.g.,

 >>> class MyClass(object):
 ... def __init__(self, limit=1): self.limit=limit
 ... def __getitem__(self, i):
 ...  if i < self.limit: return i**3
 ...  raise StopIteration
 ...
 >>> myobj = MyClass(5)
 >>> list(myobj)
 [0, 1, 8, 27, 64]
 >>> list(MyClass(10))
 [0, 1, 8, 27, 64, 125, 216, 343, 512, 729]
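The original question also asked about the **something operator; for f(**obj), the object needs to look like a mapping, and keys() plus __getitem__ is enough (class and function names are mine):

```python
class MyMapping(object):
    def __init__(self, **kw):
        self.kw = kw
    def keys(self):            # with __getitem__, enough for **-unpacking
        return list(self.kw)
    def __getitem__(self, key):
        return self.kw[key]

def doit(**kwargs):
    return kwargs

m = MyMapping(a=1, b=2)
assert doit(**m) == {'a': 1, 'b': 2}
```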

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: os.path.getmtime on winXP

2005-11-08 Thread Bengt Richter
On Tue, 08 Nov 2005 07:57:44 +0100, Jorg Rødsjø <[EMAIL PROTECTED]> wrote:

>[sorry to those reading twice, but I just realised that I had posted 
>this after mucking about with the date on my machine to try to figure 
>this out -- so the message probably went into last months messages for 
>most people including me.]
>
>Hi
>
>I'm trying to use os.path.getmtime to check if a file has been modified. 
>  My OS is WinXP. The problem is, that when the os changes from/to 
>daylight savings time, the result is suddenly off by 3600 seconds ie. 
>one hour, even if the file remains the same.

Well, if the file hasn't been modified, the file time should be wrt
a constant epoch, so you must be observing a DST-corrected conversion
of that number, but you don't show what you are using.
E.g. you can convert with time.localtime or time.gmtime, and format the
result any way you want with time.strftime(...)

 >>> time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(os.path.getmtime('wc.py')))
 '2003-09-10 14:38:57'
 >>> time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(os.path.getmtime('wc.py')))
 '2003-09-10 21:38:57'

Which comes from
 >>> os.path.getmtime('wc.py')
 1063229937

And default localtime formatting is
 >>> time.ctime(os.path.getmtime('wc.py'))
 'Wed Sep 10 14:38:57 2003'

which is
 >>> time.asctime(time.localtime(os.path.getmtime('wc.py')))
 'Wed Sep 10 14:38:57 2003'

the GMT version of which is
 >>> time.asctime(time.gmtime(os.path.getmtime('wc.py')))
 'Wed Sep 10 21:38:57 2003'

reflecting
 >>> os.system('dir wc.py')
  Volume in drive C is System
  Volume Serial Number is 14CF-C4B9

  Directory of c:\pywk\clp

 03-09-10  14:38           595 wc.py


>
>I've tried using win32file.GetFileTime, and it reports a consistent 
>number, regardless of DST.
>
>What is happening here? My first thought is that getmtime should measure 
>'raw' time, and not be concerned with DST, and thus give me the same 
>result no matter the date on the machine I call it. Is the module broken 
>in some way, or am I just missing something here?
>
How did you format the number you got from os.path.getmtime?
You might want to try some of the above.

If you actually created/modified files just before and after the DST change
and saw an extra hour difference instead of the time between the two actions,
then maybe I'd look into whether the OS has some perverse option to use local
DST time to record in the file stat info, but that's hard to believe. More
likely someone is messing with the raw file time setting, like touch. I don't
have it handy to see what DST assumptions it makes, if any.
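The key point can be checked directly: what getmtime returns is epoch seconds, and only a localtime rendering of that number is DST-adjusted; the UTC view never is.

```python
import os
import tempfile
import time

fd, path = tempfile.mkstemp()
os.close(fd)
try:
    t = os.path.getmtime(path)
    assert abs(t - time.time()) < 60       # epoch seconds, freshly stamped
    assert time.gmtime(t).tm_isdst == 0    # UTC view is never DST-adjusted
    # Same instant, two renderings; only localtime applies the zone/DST:
    local = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(t))
    utc = time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(t))
    assert len(local) == len(utc) == 19
finally:
    os.remove(path)
```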

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list

