Re: PEP 343, second look

2005-06-22 Thread Ron Adam
Paul Rubin wrote:
> Ron Adam <[EMAIL PROTECTED]> writes:
> 
>>>A new statement is proposed with the syntax:
>>>with EXPR as VAR:
>>>BLOCK
>>>Here, 'with' and 'as' are new keywords; EXPR is an arbitrary
>>>expression (but not an expression-list)...
>>
>>How is EXPR arbitrary?  Doesn't it need to be or return an object that
>>the 'with' statement can use?  (a "with" object with an __enter__ and
>>__exit__ method?)
> 
> 
> That's not a syntactic issue.  "x / y" is a syntactically valid
> expression even though y==0 results in in a runtime error.

The term 'arbitrary' may be overly broad.  Take for example the 
description used in the Python 2.4.1 documentation for the 'for' statement.

"The expression list is evaluated once; it should yield an iterable object."


If the same style were used for the 'with' statement, it would read:

"The expression (but not an expression-list) is evaluated once; it 
should yield an object suitable for use with the 'with' statement. ... "

Or some variation of this.
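
For what it's worth, in PEP 343's terms an object "suitable for use 
with the 'with' statement" is simply one providing __enter__ and 
__exit__ methods.  A minimal sketch (class name and behavior are 
illustrative, not from the PEP):

```python
# Hedged sketch: a minimal object the 'with' statement can use --
# it just needs __enter__ and __exit__.
class Tracer(object):
    def __init__(self):
        self.events = []
    def __enter__(self):
        self.events.append('enter')
        return self              # becomes VAR in "with EXPR as VAR"
    def __exit__(self, exc_type, exc_val, exc_tb):
        self.events.append('exit')
        return False             # do not suppress exceptions

t = Tracer()
with t as var:
    assert var is t
```

Any EXPR is syntactically fine, but at runtime it must yield such an 
object.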


Regards,
Ron






-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Favorite non-python language trick?

2005-06-24 Thread Ron Adam
George Sakkis wrote:
> "Joseph Garvin" wrote:
> 
> 
>>I'm curious -- what is everyone's favorite trick from a non-python
>>language? And -- why isn't it in Python?
> 
> 
> Although it's an optimization rather than language trick, I like the
> inline functions/methods in C++. There has been a thread on that in the
> past (http://tinyurl.com/8ljv5) and most consider it as useless and/or
> very hard to implement in python without major changes in the language
> (mainly because it would introduce 'compile-time' lookup of callables
> instead of runtime, as it is now). Still it might be useful to have for
> time-critical situations, assuming that other optimization solutions
> (psyco, pyrex, weave) are not applicable.
> 
> George

Thanks for the link, George.  It was interesting.

I think some sort of inline or deferred local statement would be useful 
also.  It would serve as a limited lambda (after lambda is removed), an 
eval alternative, and as an inlined function in some situations as well, 
I think.

Something like:

 name = defer <expression>

then used as:

 result = name()

The expression name() would never take arguments, as it's meant to 
reference its variables as locals, and would probably be replaced 
directly with name's byte code contents at compile time.

Defer could be shortened to def I suppose, but I think defer would be 
clearer.  Anyway, it's only a wish list item for now.
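
Lacking such a statement, today's closest approximation is a 
zero-argument function; this sketch (names hypothetical) shows the 
late-binding behavior a 'defer' would presumably have:

```python
# Hypothetical emulation of "name = defer <expr>" with a zero-argument
# closure: the expression body is evaluated at call time, against the
# current values of the enclosing local names.
def make_deferred():
    x = 2
    name = lambda: x * 10    # stands in for: name = defer x * 10
    x = 3                    # rebinding is visible at call time
    return name()            # evaluates 3 * 10, not 2 * 10

result = make_deferred()
```

The difference from the wished-for feature is that a real 'defer' would 
presumably inline the byte code rather than pay a function call.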

Regards,
Ron


Re: Thoughts on Guido's ITC audio interview

2005-06-27 Thread Ron Adam
Dave Benjamin wrote:
> Guido gave a good, long interview, available at IT Conversations, as was
> recently announced by Dr. Dobb's Python-URL! The audio clips are available
> here: 
> 
> http://www.itconversations.com/shows/detail545.html
> http://www.itconversations.com/shows/detail559.html


Thanks for the links Dave. :-)

He talked a fair amount about adding type checking to Python.  From 
reading his blog on that subject, it seems Guido is leaning towards 
having the types as part of function and method definitions via the 'as' 
and/or '->' operator in function and class argument definitions.

My first thought on this was to do it by associating the types with the 
names.  Limiting reassignment of a name to specific types would be a 
form of persistent name constraint that would serve as a dynamic type 
system.
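
I haven't seen this in the language, but as a rough sketch one can 
emulate a per-name type constraint today with a property (all names 
below are made up):

```python
# Hypothetical sketch of a "name constraint": a property that only
# permits rebinding an attribute to a fixed type.
def constrained(name, expected_type):
    key = '_' + name
    def getter(self):
        return getattr(self, key)
    def setter(self, value):
        if not isinstance(value, expected_type):
            raise TypeError('%s must be %s' %
                            (name, expected_type.__name__))
        setattr(self, key, value)
    return property(getter, setter)

class Point(object):
    x = constrained('x', int)
    y = constrained('y', int)

p = Point()
p.x = 3                       # allowed
try:
    p.y = 'oops'              # rejected by the constraint
    rejected = False
except TypeError:
    rejected = True
```

This only covers attributes, of course; constraining plain local names 
would need language support.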

Has any form of "name constraints" been discussed or considered 
previously?

Regards,
Ron


Re: Is there something similar to ?: operator (C/C++) in Python?

2005-06-28 Thread Ron Adam
Mike Meyer wrote:
> Riccardo Galli <[EMAIL PROTECTED]> writes:
> 
> 
>>On Fri, 24 Jun 2005 09:00:04 -0500, D H wrote:
>>
>>
>>>>Bo Peng wrote:
>>>>
>>>>>I need to pass a bunch of parameters conditionally. In C/C++, I can
>>>>>do func(cond1?a:b,cond2?c:d,...)
>>>>>
>>>>>Is there an easier way to do this in Python?


>>>The answer is simply no, just use an if statement instead.
>>
>>That's not true.
>>One used form is this:
>>result = cond and value1 or value2
>>
>>which is equal to
>>if cond:
>>   result=value1
>>else:
>>   result=value2
>>
>>
>>another form is:
>>
>>result = [value2,value1][cond]
>>
>>
>>the first form is nice but value1 must _always_ have a true value (so not
>>None,0,'' and so on), but often you can handle this.
> 
> 
> Note that [value2, value1][cond] doesn't do exactly what cond ? value1 : 
> value2
> does either. The array subscript will always evaluate both value2 and
> value1. The ?: form will always evaluate only one of them. So for
> something like:
> 
>   [compute_1000_digits_of_pi(), compute_1000_digits_of_e][cond]
> 
> you'd really rather have:
> 
>   cond ? compute_1000_digits_of_e() : compute_1000_digits_of_pi()
> 
> There are other such hacks, with other gotchas.


Re: Boss wants me to program

2005-06-28 Thread Ron Adam
[EMAIL PROTECTED] wrote:

> Ok, sorry to throw perhaps unrelated stuff in here, but I want everyone
> to know what we have right now in the office. We started with an
> electric typewriter and file cabinets. We were given an old 386 with a
> 20 mb hard drive about 5 years ago, and we moved everything over to a
> very very old version of msworks on msdos 6. Depending on what we are
> given(for reasons best left alone, I won't explain why we can't
> actually COUNT on the actual buying of a new system), we will be left
> with this relic, or be given a 486. Maybe a old pentium 90 or so. I may
> try to convince the boss that I can write dos programs for the existing
> machine. If we get any kind of upgrade, I'm sure it will be able to run
> linux with X and a low overhead window manager. If that happened, I'd
> be able to use python and this "tk" thing you have talked about and
> make something that will work for him, am I correct? The other
> alternative is to install console mode linux on it and hope that the
> ncurses library can be used by python. The system could be as low as a
> 486 dx2 66 with maybe 16 megs of ram. Well, I just thought I'd give you
> people more info on the situation.
> 
> Xeys

Check the yellow pages in your area (or the closest large city if you're 
not in one) for used computer stores.  Or just start calling all the 
computer stores in your area and ask them if they have any used 
computers for sale.  I was able to get a Dell Pentium 3 for $45 
last year as a second computer to put Linux on.  I just asked him if he 
had any old computers, really cheap, that booted, and that's what he 
found in the back.  I just needed to add RAM and a hard drive.

You might be surprised what you can get used if you ask around.

Cheers,
Ron


Re: Is there something similar to ?: operator (C/C++) in Python?

2005-06-30 Thread Ron Adam
Antoon Pardon wrote:
> Op 2005-06-29, Scott David Daniels schreef <[EMAIL PROTECTED]>:
> 
>>Roy Smith wrote:
>>
>>>Andrew Durdin <[EMAIL PROTECTED]> wrote:
>>>
>>>>Corrected version:
>>>>   result = [(lambda: expr0), lambda: expr1][bool(cond)]()
>>
>>Sorry, I thought cond was a standard boolean.
>>Better is:
>> result = [(lambda: true_expr), lambda: false_expr][not cond]()
> 
> 
> How about the following:
> 
>   result = (cond and (lambda: true_expr) or (lambda: false_expr))()
> 

That works as long as they are expressions, but the ?: format 
does seem more concise, I admit.


To use *any* expression in a similar way we need eval, which is 
unfortunately a lot slower.

result = eval(['expr0','expr1'][cond])
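
Actually, eval isn't strictly required for laziness; wrapping each 
branch in a zero-argument function defers it the same way (this is 
just the lambda-list idiom from earlier in the thread, with made-up 
branch functions):

```python
# Lazy selection without eval: only the chosen branch runs.
calls = []
def costly_false():
    calls.append('false')
    return 'expr0'
def costly_true():
    calls.append('true')
    return 'expr1'

cond = True
result = [costly_false, costly_true][bool(cond)]()
```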


A thought occurs to me that putting the index before the list might be an 
interesting option for expressions.

result = expr[expr0,expr1]


This would probably conflict with way too many other things though. 
Maybe this would be better?

result = index from [expr0,expr1]


Where index can be an expression.  That is sort of an inline case 
statement.  Using a dictionary it could be:

result = condition from {True:expr1, False:expr0}


As a case using values as an index:

 case expr from [
expr0,
expr2,
expr3 ]


Or using strings with a dictionary...

 case expr from {
'a':expr0,
'b':expr1,
'c':expr3 }
 else:
expr4

It reads nicely, but you can't put expressions in a dictionary or list 
without them being evaluated first, and the [] and {} look like block 
brackets, which might raise a few complaints.

Can't help thinking of what if's.  ;-)

Cheers,
Ron


Re: map/filter/reduce/lambda opinions and background unscientific mini-survey

2005-07-01 Thread Ron Adam
Tom Anderson wrote:

> So, if you're a pythonista who loves map and lambda, and disagrees with 
> Guido, what's your background? Functional or not?

I find map too limiting, so won't miss it.  I'm +0 on removing lambda 
only because I'm unsure that there's always a better alternative.

So what would be a good example of a lambda that couldn't be replaced?

Cheers,
Ron


BTW...  I'm striving to be Pythonic. ;-)


custom Tkinter ListBox selectMode

2005-07-01 Thread Ron Provost
Hello,

I've written a simple GUI which contains a listbox to hold some information. 
I've found that the click-selection schemes provided by Tkinter are 
insufficient for my needs.  Essentially, I need to implement a custom 
selectMode.  As a first go, I've attempted to implement a click-on/click-off 
mode.  So for each entry, a click on that entry toggles its selection state. 
A click on any entry does not affect the selection state of any other entry.

I attempted to implement this by binding

myListBoxWidget.bind( '', self.itemClicked )

Where my itemClicked() method determines which item was clicked on and 
then toggles its selection state.




So, what's my problem?  Well, it doesn't work.  The SINGLE selectMode 
(default) seems to always take over.  I'm tempted to set a simple timer 
callback for a short duration which redoes the selection after the 
built-in handler has had a chance to do its thing, but that seems like 
such a kludge to me.  Any suggestions on how I can implement a custom 
selectMode?

Thanks for your input.

Ron 




Re: map/filter/reduce/lambda opinions and background unscientific mini-survey

2005-07-01 Thread Ron Adam
Terry Hancock wrote:

> On Friday 01 July 2005 03:36 pm, Ron Adam wrote:
> 
>>I find map too limiting, so won't miss it.  I'm +0 on removing lambda 
>>only because I'm unsure that there's always a better alternative.
> 
> 
> Seems like some new idioms would have to be coined, like:
> 
> def my_function(a1, a2):
> def _(a,b): return a+b
> call_a_lib_w_callback(callback=_)
> 
> doesn't seem too bad, and defeats the "wasting time thinking of names"
> argument.

Usually the time is regained later, when you need to go back and figure 
out what it is that the lambda is doing.  Not so easy for beginners.

A hard-to-understand process that is easy to do is not easier than an 
easy-to-understand process that is a bit harder to do.


>>So what would be a good example of a lambda that couldn't be replaced?
>  
> I suspect the hardest would be building a list of functions. Something
> like:
> 
> powers = [lambda a, i=i: a**i for i in range(10)]
> 
> which you might be able to make like this:
> 
> powers = []
> for i in range(10):
> def _(a,i=i): return  a**i
> powers.append(_)
> 
> which works and is understandable, but a bit less concise.

This would be a more direct translation I think:

 def put_func(i):
 def power_of_i(a):
return a**i
 return power_of_i
 power = [put_func(i) for i in range(10)]

I think it's also clearer what it does.  I had to type the lambda 
version into the shell to be sure I understood it.  I think that's what 
we want to avoid.

> The main obstacle to the lambda style here is that def statements
> are not expressions.  I think that's been proposed as an alternative,
> too -- make def return a value so you could say:
> 
> powers = [def _(a,i=i): return a**i for i in range(10)]

Wouldn't it be:

powers = [(def _(a): return a**i) for i in range(10)]

The parens around the def make it clearer I think.

That would be pretty much just renaming lambda and changing the syntax a 
tad.  I get the feeling that the continual desire to change the syntax 
of lambda is one of the reasons for getting rid of it.  I'm not sure any 
of the suggestions will fix that, although I prefer this version over 
the current lambda.

Instead of reusing 'def', resurrecting 'let' as a keyword might be an 
option.

powers = [ (let a return a**i) for i in range(10) ]


I just now thought this up, but I like it a lot as an alternative syntax 
to lambda.  :-)


> Personally, I think this is understandable, and given that lambda
> is to be pulled, a nice substitute (I would say it is easier to read
> than the current lambda syntax, and easier for a newbie to
> understand).
> 
> But it would probably encourage some bad habits, such as:
> 
> myfunc = def _(a,b):
> print a,b
> return a+b
> 
> which looks too much like Javascript, to me, where there are
> about three different common idioms for defining a
> function (IIRC).  :-/

I don't think it would be used that way... very often, anyway.

Still, none of these examples make for good use cases for keeping lambda. 
And I think that's what's needed first; then the new syntax can be decided.

Ron


Re: Proposal: reducing self.x=x; self.y=y; self.z=z boilerplate code

2005-07-02 Thread Ron Adam
Ralf W. Grosse-Kunstleve wrote:

> class grouping:
> 
> def __init__(self, .x, .y, .z):
> # real code right here

The way this would work seems a bit inconsistent to me.  Args normally 
create local names that receive references to the objects passed to 
them.

In this case, a function/method is *creating* non-local names in a scope 
outside its own namespace to receive its arguments.  I don't think 
that's a good idea.

A better approach is to have a copy_var_to(dest, *var_list) function 
that can do it.  You should be able to copy only selected arguments, and 
not all of them.

 copy_var_to(self,x,z)

Not exactly straightforward to do, as it runs into the problem of 
getting an object's name.
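
One way around the name problem is to pass the caller's namespace 
explicitly; a sketch (helper name and signature hypothetical):

```python
# Hypothetical copy_var_to() that takes the caller's locals() rather
# than trying to discover variable names from objects.
def copy_var_to(dest, namespace, names):
    for n in names:
        setattr(dest, n, namespace[n])

class Grouping(object):
    def __init__(self, x, y, z):
        # copy only the selected arguments onto self
        copy_var_to(self, locals(), ['x', 'z'])

g = Grouping(1, 2, 3)
```

Here g gets .x and .z attributes but no .y, since only selected 
arguments are copied.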


> Emulation using existing syntax::
> 
> def __init__(self, x, y, z):
> self.x = x
> del x
> self.y = y
> del y
> self.z = z
> del z

The 'del's aren't needed, as the references will be unbound as soon as 
__init__ is finished.  That's one of the reasons you need to do self.x = x; 
the other is to share the objects with other methods.



> Is there a way out with Python as-is?
> -

With argument lists that long it might be better to use a single 
dictionary to keep them in.

  class manager:
  def __init__(self, **args):
  defaults = {
'crystal_symmetry':None,
'model_indices':None,
'conformer_indices':None,
'site_symmetry_table':None,
'bond_params_table':None,
'shell_sym_tables':None,
'nonbonded_params':None,
'nonbonded_types':None,
'nonbonded_function':None,
'nonbonded_distance_cutoff':None,
'nonbonded_buffer':1,
'angle_proxies':None,
'dihedral_proxies':None,
'chirality_proxies':None,
'planarity_proxies':None,
'plain_pairs_radius':None }
  defaults.update(args)
  self.data = defaults

  # real code
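
Trimmed down, the pattern above works like this: keywords passed in 
override the defaults, and everything else keeps its default value 
(class name and keys abbreviated from the sketch):

```python
# Trimmed sketch of the defaults-dict pattern: update() overlays the
# caller's keywords on the defaults.
class Manager(object):
    def __init__(self, **args):
        defaults = {'crystal_symmetry': None,
                    'nonbonded_buffer': 1}
        defaults.update(args)
        self.data = defaults

m = Manager(nonbonded_buffer=2)
```

One caveat: update() silently accepts misspelled keyword names, which 
the long explicit argument list would have caught.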

Regards,
Ron


Re: Folding in vim

2005-07-02 Thread Ron Adam
Terry Hancock wrote:

> My general attitude towards IDEs and editors has been 
> extremely conservative, but today I decided to see what 
> this "folding" business was all about.
> 
> I see that vim (and gvim, which is what I actually use)
> has this feature, and it is fairly nice, but at present it's
> very manual --- and frankly it's hard for me to see the
> point if I have to manually mark folds every time I start
> up.

I've been trying to learn and use the 'Cream' distribution of Vim.

 http://cream.sourceforge.net/

Playing around with it a bit:

If I highlight any block of code and then press F9, it folds.  Putting 
the cursor on the fold and pressing F9 again unfolds it.  It remembers the 
folds, so putting the cursor anywhere in the previously folded area and 
pressing F9 again refolds it.  Folds can be inside of folds.

Saving the file, exiting, and reopening it, the folded folds remained 
folded.  I'm not sure where it keeps the fold info for the file.

The folds don't have anything to do with classes or functions, but are 
arbitrarily selected lines, with the first line displayed after the number 
of lines folded.  So a whole file gets reduced to...

   1 # SliderDialog.py
   2
   3 +--- 20 lines: """ SIMPLE SLIDER DIALOG -
  23
  24 +-- 24 lines: # Imports--
  48
  49 +-- 13 lines: # Values extracted from win32.com--
  62
  63 +-- 67 lines: class SliderDialog(dialog.Dialog):-
130
131 +--  4 lines: def GetSliderInput( title, text, label, value=0 ):-
135
136 +-- 17 lines: if __name__ == '__main__':-
153

Pretty cool, I'll probably use folding more now that I've played with it 
a bit.

I like Vim-Cream, but I still haven't gotten the script right for 
executing the current file in the shell, or a second script for 
executing the current file in the shell and capturing the output in a 
pane.  I think some of it may be Windows path conflicts.

Ron



Re: map/filter/reduce/lambda opinions and background unscientific mini-survey

2005-07-02 Thread Ron Adam
Steven D'Aprano wrote:

> On Sat, 02 Jul 2005 20:26:31 -0700, Devan L wrote:
> 
> 
>> Claiming that sum etc. do the same job is the whimper of
>>someone who doesn't want to openly disagree with Guido.
>>
>>Could you give an example where sum cannot do the job(besides the
>>previously mentioned product situation?
> 
> 
> There is an infinite number of potential lambdas, and therefore an
> infinite number of uses for reduce.
> 
> 
> 
> sum only handles a single case, lambda x,y: x+y
> 
> product adds a second case: lambda x,y: x*y
> 
> So sum and product together cover precisely 2/infinity, or zero percent,
> of all possible uses of reduce.

But together, sum and product probably cover about 90% of the situations 
in which you would use reduce.  Getting a total (sum) from a list probably 
covers 80% of the situations where reduce would be used on its own.  (I 
can't think of any real uses for product at the moment.  It's late.)

I'm just estimating, but I think that is the gist of adding those two in 
exchange for reduce.  Not that they will replace all of reduce use 
cases, but that sum and product cover most situations and can be 
implemented more efficiently than using reduce or a for loop to do the 
same thing.  The other situations can easily be done using for loops, so 
it's really not much of a loss.

Ron


Re: map/filter/reduce/lambda opinions and background unscientific mini-survey

2005-07-03 Thread Ron Adam
Erik Max Francis wrote:

> Ron Adam wrote:

>> I'm just estimating, but I think that is the gist of adding those two 
>> in exchange for reduce.  Not that they will replace all of reduce use 
>> cases, but that sum and product cover most situations and can be 
>> implemented more efficiently than using reduce or a for loop to do the 
>> same thing.  The other situations can easily be done using for loops, 
>> so it's really not much of a loss.
> 
> I really don't understand this reasoning.  You essentially grant the 
> position that reduce has a purpose, but you still seem to approve 
> removing it.  Let's grant your whole point and say that 90% of the use 
> cases for reduce are covered by sum and product, and the other 10% are 
> used by eggheads and are of almost no interest to programmers.  But it 
> still serves a purpose, and a useful one.  That it's not of immediate 
> use to anyone is an argument for moving it into a functional module 
> (something I would have no serious objection to, though I don't see its 
> necessity), not for removing it altogether!  Why would you remove the 
> functionality that already exists _and is being used_ just because? What 
> harm does it do, vs. the benefit of leaving it in?

There are really two separate issues here.

First on removing reduce:

1. There is no reason why reduce can't be put in a functional module, or 
you can write the equivalent yourself.  It's not that hard to do, so it 
isn't that big of a deal not to have it as a builtin.

2. Reduce calls a function on every item in the list, so its 
performance isn't much better than the equivalent code using a for loop.

(Note: list.sort() has the same problem.  I would support 
replacing it with a sort that takes an optional 'order list' as a sort 
key.  I think its performance could be increased a great deal by 
removing the per-item function call.)


Second, the addition of sum & product:

1. Sum, and less so Product, are fairly common operations so they have 
plenty of use case arguments for including them.

2. They don't need to call a pre-defined function between every item, so 
they can be handled completely internally by C code.  They will be much, 
much faster than equivalent code using reduce or a for loop.  This 
represents a speed increase for every program that totals or subtotals a 
list, or finds the product of a set.


> But removing reduce is just removing 
> functionality for no other reason, it seems, than spite.

No, not for spite.  It's more a matter of increasing the overall 
performance and usefulness of Python without making it more complicated. 
In order to add new stuff that is better thought out, some things 
will need to be removed, or else the language will continue to grow and 
become another Visual Basic.

Having sum and product built in has a clear advantage in both 
performance and potential frequency of use, whereas reduce doesn't have 
the same performance advantage, and most people don't use it anyway; so 
why have it built in if sum and product are?  Why not just code it as a 
function and put it in your own module?

 def reduce(f, seq):
     # seed from the first item, as the builtin does when
     # no initial value is given
     x = seq[0]
     for y in seq[1:]:
         x = f(x, y)
     return x

But I suspect that most people would just do what I currently do and 
write the for loop to do what they want directly, instead of using 
lambda in reduce.  For example, a running product:

 x = 1
 for y in seq:
     x = x * y

If performance is needed while using reduce with very large lists or 
arrays, using the Numeric module would be a much better solution.

http://www-128.ibm.com/developerworks/linux/library/l-cpnum.html

Cheers,
Ron



Re: Proposal: reducing self.x=x; self.y=y; self.z=z boilerplate code

2005-07-03 Thread Ron Adam
Bengt Richter wrote:

> What if parameter name syntax were expanded to allow dotted names as binding
> targets in the local scope for the argument or default values? E.g.,
> 
> def foometh(self, self.x=0, self.y=0): pass
> 
> would have the same effect as
> 
> def foometh(self, self.y=0, self.x=0): pass
> 
> and there would be a persistent effect in the attributes of self
> (whatever object that was), even with a body of pass.
> 
> I'm not sure about foo(self, **{'self.x':0, 'self.y':0}), but if
> you didn't capture the dict with a **kw formal parameter, IWT you'd
> have to be consistent and effect the attribute bindings implied.
> 
> (Just a non-thought-out bf here, not too serious ;-)
> 
> Regards,
> Bengt Richter

Well it works the other way around to some degree.

def foo(self, x=x, y=y):pass

Here, x=x binds the class variables to the arguments, without the self., 
when no value is given.

Which is kind of strange, since x by itself gives an error if no value 
is given.  The strange part is that x=x is not the same as just x.  I 
understand why, but it still looks odd.


Why isn't there a dict method to get a sub-dict from a key list? 
fromkeys doesn't quite do it.

sub_dict = dict.subdict(key_list)

Or have dict.copy() take a key list. (?)





The following works and doesn't seem too obscure, although the x=x, 
etc. could be annoying if there were a lot of long names.

Seems like mutable default arguments are what's needed to make it work; 
not that it's needed, IMO.  But it's an interesting problem.


def subdict(mapping, keys):
    # build a new dict containing only the selected keys
    d = {}
    for k in keys:
        d[k] = mapping[k]
    return d

class foo(object):
 x = 1
 y = 2
 z = 3
 def __init__(self,x=x,y=y,z=z):
 save_these = subdict(locals(),['x','y'])
 self.__dict__.update(save_these)

 # rest of code
 print self.x, self.y, self.z

f = foo()
f = foo(5,6,7)










Re: map/filter/reduce/lambda opinions and background unscientific mini-survey

2005-07-03 Thread Ron Adam
Steven D'Aprano wrote:
> On Sun, 03 Jul 2005 19:31:02 +0000, Ron Adam wrote:
> 
> 
>>First on removing reduce:
>>
>>1. There is no reason why reduce can't be put in a functional module 
> 
> 
> Don't disagree with that.
> 
> 
>>or 
>>you can write the equivalent yourself.  It's not that hard to do, so it 
>>isn't that big of a deal to not have it as a built in.
> 
> 
> Same goes for sum. Same goes for product, ...

Each item needs to stand on its own.  It's a much stronger argument for 
removing something because something else fulfills its need and is 
easier or faster to use, than just saying we need x because we have y.

In this case sum and product fulfill 90% (an estimate, of course) of 
reduce's use cases.  It may actually be as high as 99% for all I know. 
Or it may be less.  Anyone care to try to put a real measurement on it?


> which doesn't have that many
> common usages apart from calculating the geometric mean, and let's face
> it, most developers don't even know what the geometric mean _is_.

I'm neutral on adding product myself.


> If you look back at past discussions about sum, you will see that there is
> plenty of disagreement about how it should work when given non-numeric
> arguments, eg strings, lists, etc. So it isn't so clear what sum should do.

Testing shows sum() to be over twice as fast as either using reduce or a 
for-loop.  I think the disagreements will be sorted out.
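
The measurement itself is easy to reproduce with timeit; the numbers 
vary by machine and Python version, so this only sketches the method, 
not the result:

```python
# Sketch of the comparison using timeit.  (In Python 2 reduce is a
# builtin; the functools import is for later versions.)
import timeit
import operator
from functools import reduce

data = list(range(1000))
t_sum = timeit.timeit(lambda: sum(data), number=200)
t_red = timeit.timeit(lambda: reduce(operator.add, data), number=200)

# both compute the same total, whatever the timings say
total = sum(data)
```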


>>2. Reduce calls a function on every item in the list, so it's 
>>performance isn't much better than the equivalent code using a for-loop.
>  
> That is an optimization issue. Especially when used with the operator
> module, reduce and map can be significantly faster than for loops.

I tried it... it made about a 1% improvement in the builtin reduce and 
an equal improvement in the function that used the for loop.

The inline for loop also performed about the same.

See below..


>>  *** (note, that list.sort() has the same problem. I would support 
>>replacing it with a sort that uses an optional 'order-list' as a sort 
>>key.  I think it's performance could be increased a great deal by 
>>removing the function call reference. ***
>>
>>
>>Second, the addition of sum & product:
>>
>>1. Sum, and less so Product, are fairly common operations so they have 
>>plenty of use case arguments for including them.
> 
> Disagree about product, although given that sum is in the language, it
> doesn't hurt to put product as well for completion and those few usages.

I'm not convinced about product either, but if I were to review my 
statistics textbooks, I could probably find more uses for it.  I suspect 
that there may be a few common uses for it that are frequent enough to 
make it worth adding.  But it might be better in a module.


>>2. They don't need to call a pre-defined function between every item, so 
>>they can be completely handled internally by C code. They will be much 
>>much faster than equivalent code using reduce or a for-loop.  This 
>>represents a speed increase for every program that totals or subtotals a 
>>list, or finds a product of a set.
> 
> I don't object to adding sum and product to the language. I don't object
> to adding zip. I don't object to list comps. Functional, er, functions
> are a good thing. We should have more of them, not less.

Yes, we should have lots of functions to use, in the library, but not 
necessarily in builtins.

>>>But removing reduce is just removing
>>>functionality for no other reason, it seems, than spite.
>>
>>No, not for spite. It's more a matter of increasing the over all
>>performance and usefulness of Python without making it more complicated.
>>In order to add new stuff that is better thought out, some things
>>will need to be removed or else the language will continue to grow and
>>be another visual basic.
>  
> Another slippery slope argument.

Do you disagree or agree?  Or are you undecided?


>>Having sum and product built in has a clear advantage in both
>>performance and potential frequency of use, where as reduce doesn't have
>>the same performance advantage and most poeple don't use it anyway, so
>>why have it built in if sum and product are?  
> 
> Because it is already there. 

Hmm..  I know a few folks, good people, but they keep everything, to the 
point of not being able to find anything because they have so much. 
They can always think of reasons to keep things: "It's worth something", 
"it means something to me", "I'm going to fix it", "I'm going to sell 

Re: map/filter/reduce/lambda opinions and background unscientific mini-survey

2005-07-03 Thread Ron Adam
Erik Max Francis wrote:

> Ron Adam wrote:
> 
>> Each item needs to stand on it's own.  It's a much stronger argument 
>> for removing something because something else fulfills it's need and 
>> is easier or faster to use than just saying we need x because we have y.
>>
>> In this case sum and product fulfill 90% (estimate of course) of 
>> reduces use cases.  It may actually be as high as 99% for all I know. 
>> Or it may be less.  Anyone care to try and put a real measurement on it?
> 
> 
> Well, reduce covers 100% of them, and it's one function, and it's 
> already there.

So you are saying that anything that has a 1% use case should be 
included as a builtin function?

I think I can find a few hundred other functions in the library that are 
used more than ten times as often as reduce.  Should those be builtins too?

This is a practicality-over-purity issue, so what are the practical 
reasons for keeping it?  "It's already there" isn't a practical reason. 
And "it covers 100% of its own potential use cases" is circular logic 
without a real underlying basis.

Cheers,
Ron













Re: Folding in vim

2005-07-03 Thread Ron Adam
Terry Hancock wrote:

> On Saturday 02 July 2005 10:35 pm, Terry Hancock wrote:
> 
>>I tried to load a couple of different scripts to 
>>automatically fold Python code in vim, but none of them
>>seems to do a good job.
>>
>>I've tried:
>>python_fold.vim by Jorrit Wiersma
>>http://www.vim.org/scripts/script.php?script_id=515
> 
> 
> Actually, I think this one is doing what I want now. It seems
> to be that it isn't robust against files with lots of mixed tabs
> and spaces.  I also got "space_hi.vim" which highlights tabs
> and trailing spaces, which made it a lot easier to fix the 
> problem.

I edited my syntax-coloring file to do the same thing.  Not to mention 
adding a few keywords that were missing.  :-)

> After fixing my source files, python_fold seems to be able
> to handle them just fine.
> 
> I must also recommend C. Herzog's python_box.vim
> which is fantastic -- especially the automatic Table of
> Contents generation for Python source, and pydoc.vim
> which puts access to pydoc into the editor.

Sounds good. I'll give it a try!  :-)


> Nice.  Now that I have a "very sharp saw", I'm going to
> have to go cut some stuff for a bit. ;-)
 >
> --
> Terry Hancock ( hancock at anansispaceworks.com )
> Anansi Spaceworks  http://www.anansispaceworks.com
> 


Re: map/filter/reduce/lambda opinions and background unscientific mini-survey

2005-07-04 Thread Ron Adam
George Sakkis wrote:

> And finally for recursive flattening:
> 
> def flatten(seq):
> return reduce(_accum, seq, [])
> 
> def _accum(seq, x):
> if isinstance(x,list):
> seq.extend(flatten(x))
> else:
> seq.append(x)
> return seq
> 
> 
>>>>flatten(seq)
> 
> [1, 2, 3, 4, 5, 6]
> 
> 
> George
> 

How about this for a non recursive flatten.

def flatten(seq):
 s = []
 while seq:
 while isinstance(seq[0],list):
 seq = seq[0]+seq[1:]
 s.append(seq.pop(0))
 return s

seq = [[1,2],[3],[],[4,[5,6]]]
flatten(seq)


Ron
-- 
http://mail.python.org/mailman/listinfo/python-list


flatten(), [was Re: map/filter/reduce/lambda opinions and background unscientific mini-survey]

2005-07-05 Thread Ron Adam
Tom Anderson wrote:

> The trouble with these is that they make a lot of temporary lists - 
> George's version does it with the recursive calls to flatten, and Ron's 
> with the slicing and concatenating. How about a version which never 
> makes new lists, only appends the base list? We can use recursion to 
> root through the lists ...

Ok...  How about a non-recursive flatten in place? ;-)

def flatten(seq):
 i = 0
 while i!=len(seq):
 while isinstance(seq[i],list):
 seq.__setslice__(i,i+1,seq[i])
 i+=1
 return seq

seq = [[1,2],[3],[],[4,[5,6]]]
print flatten(seq)

I think I'll be using the __setslice__ method more often.



And the test:
#

# Georges recursive flatten
init_a = """
def flatten(seq):
 return reduce(_accum, seq, [])

def _accum(seq, x):
 if isinstance(x,list):
 seq.extend(flatten(x))
 else:
 seq.append(x)
 return seq

seq = [[1,2],[3],[],[4,[5,6]]]
"""

# Ron's non-recursive
init_b = """
def flatten(seq):
 s = []
 while seq:
 while isinstance(seq[0],list):
 seq = seq[0]+seq[1:]
 s.append(seq.pop(0))
 return s

seq = [[1,2],[3],[],[4,[5,6]]]
"""

# Tom's recursive, no list copies made
init_c = """
def isiterable(x):
 return hasattr(x, "__iter__") # close enough for government work

def visit(fn, x): # perhaps better called applytoall
 if (isiterable(x)):
 for y in x: visit(fn, y)
 else:
 fn(x)

def flatten(seq):
 a = []
 def appendtoa(x):
 a.append(x)
 visit(appendtoa, seq)
 return a

seq = [[1,2],[3],[],[4,[5,6]]]
"""

# Devan' smallest recursive
init_d = """
def flatten(iterable):
 if not hasattr(iterable, '__iter__'):
 return [iterable]
 return sum([flatten(element) for element in iterable],[])

seq = [[1,2],[3],[],[4,[5,6]]]
"""

# Ron's non-recursive flatten in place!  Much faster too!
init_e = """
def flatten(seq):
 i = 0
 while i!=len(seq):
 while isinstance(seq[i],list):
 seq.__setslice__(i,i+1,seq[i])
 i+=1
 return seq

seq = [[1,2],[3],[],[4,[5,6]]]
"""

import timeit
t = timeit.Timer("flatten(seq)",init_a)
print 'recursive flatten:',t.timeit()

import timeit
t = timeit.Timer("flatten(seq)",init_b)
print 'flatten in place-non recursive:',t.timeit()

import timeit
t = timeit.Timer("flatten(seq)",init_c)
print 'recursive-no copies:',t.timeit()

import timeit
t = timeit.Timer("flatten(seq)",init_d)
print 'smallest recursive:',t.timeit()

import timeit
t = timeit.Timer("flatten(seq)",init_e)
print 'non-recursive flatten in place without copies:',t.timeit()

#


The results on Python 2.3.5: (maybe someone can try it on 2.4)

recursive flatten: 23.6332723852
flatten in place-non recursive: 22.1817641628
recursive-no copies: 30.909762833
smallest recursive: 35.2678756658
non-recursive flatten in place without copies: 7.8551944451


A 300% improvement!!!

This shows the value of avoiding copies, recursion, and extra function 
calls.

Cheers,
Ron Adam

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: flatten(), [was Re: map/filter/reduce/lambda opinions ...]

2005-07-05 Thread Ron Adam

> Ok...  How about a non-recursive flatten in place? ;-)
> 
> def flatten(seq):
> i = 0
> while i!=len(seq):
> while isinstance(seq[i],list):
> seq.__setslice__(i,i+1,seq[i])
> i+=1
> return seq
> 
> seq = [[1,2],[3],[],[4,[5,6]]]
> print flatten(seq)
> 
> I think I'll be using the __setslice__ method more often.


This is probably the more correct way to do it. :-)

def flatten(seq):
 i = 0
 while i!=len(seq):
 while isinstance(seq[i],list):
 seq[i:i+1]=seq[i]
 i+=1
 return seq
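As a quick sanity check, the corrected version runs as-is on modern Python too; this sketch just repeats the definition so it stands alone and adds asserts (the first test list is the same one used earlier in the thread):

```python
def flatten(seq):
    # non-recursive flatten in place, using slice assignment
    i = 0
    while i != len(seq):
        while isinstance(seq[i], list):
            seq[i:i+1] = seq[i]
        i += 1
    return seq

# the same sample list used earlier in the thread
assert flatten([[1, 2], [3], [], [4, [5, 6]]]) == [1, 2, 3, 4, 5, 6]
assert flatten([1, [2, [3, [4]]], 5]) == [1, 2, 3, 4, 5]
```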



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: flatten(), [was Re: map/filter/reduce/lambda opinions and background unscientific mini-survey]

2005-07-05 Thread Ron Adam
Tom Anderson wrote:

> 
> We really ought to do this benchmark with a bigger list as input - a few 
> thousand elements, at least. But that would mean writing a function to 
> generate random nested lists, and that would mean specifying parameters 
> for the geometry of its nestedness, and that would mean exploring the 
> dependence of the performance of each flatten on each parameter, and 
> that would mean staying up until one, so i'm not going to do that.
> 
> tom
> 

Without getting to picky, would this do?

import random
import time
random.seed(time.time())

def rand_depth_sequence(seq):
 for n in range(len(seq)):
 start = random.randint(0,len(seq)-2)
 end = random.randint(start,start+3)
 seq[start:end]=[seq[start:end]]
 return seq

seq = rand_depth_sequence(range(100))
print seq


 >>>
[sample output: a randomly nested list containing the numbers 0 through 99; 
the exact bracket structure varies with the seed, and the original posting's 
brackets were garbled in the archive]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: map/filter/reduce/lambda opinions and background unscientificmini-survey

2005-07-05 Thread Ron Adam
Terry Reedy wrote:

> I also suspect that the years of fuss over Python's lambda being what it is 
> rather that what it is 'supposed' to be (and is in other languages) but is 
> not, has encourage Guido to consider just getting rid of it.  I personally 
> might prefer keeping the feature but using a different keyword.
> 
> Terry J. Reedy

Yes, I think a different key word would help.  My current favorite 
alternative is to put it in parentheses similar to list comprehensions 
and use "let".

(let x,y return x+y)

Or you could just explain lambda as let, they both begin with 'L',  and 
then the colon should be read as return.

So lambda x,y: x+y should be read as:  let x,y return x+y
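A concrete reading of that, in plain Python (nothing hypothetical here except the names):

```python
# "let x, y return x + y" -- the lambda and a named def are equivalent
add_lambda = lambda x, y: x + y

def add_named(x, y):
    return x + y

assert add_lambda(2, 3) == add_named(2, 3) == 5
```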

I'm in the group that hadn't heard about lambda as a function before 
Python, even after more than twenty years of computer tech experience.  I 
think presuming it's common knowledge is a mistake.

Cheers, Ron

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: map/filter/reduce/lambda opinions and background unscientificmini-survey

2005-07-05 Thread Ron Adam
Terry Reedy wrote:

> "George Sakkis" <[EMAIL PROTECTED]> wrote in message 
>>So, who would object the full-word versions for python 3K ?
>>def -> define
>>del -> delete
>>exec -> execute
> 
> 
> These three I might prefer to keep.
> 
> 
>>elif -> else if
> 
> 
> This one I dislike and would prefer to write out.  I never liked it in 
> whatever else language I first encountered it and still don't.
> 
> Terry J. Reedy

Interesting, the exact opposite of what I was thinking.

I don't use del and exec that often, so the long versions are fine to 
me.  Define is ok for me too because it's usually done only once for 
each function or method so I'm not apt to have a lot of defines repeated 
in a short space like you would in C declarations.

elif...  I was thinking we should keep that one because it's used fairly 
often, and having two keywords in sequence doesn't seem like it's the 
best way to do it.

Although it could be replaced with an 'andif' and 'orif' pair.  The 
'andif' would fall through like a C 'case', and the 'orif' would act just 
like the 'elif'.  Actually this is a completely different subject 
regarding flow testing versus value testing.  'Else' and 'also' would be 
the corresponding end pair, but it seemed nobody really liked that idea 
when I suggested it a while back.

Cheers,
Ron
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: map/filter/reduce/lambda opinions and background unscientific mini-survey

2005-07-05 Thread Ron Adam
Robert Kern wrote:

> Dan Bishop wrote:
> 
>> There's also the issue of having to rewrite old code.
> 
> 
> It's Python 3000. You will have to rewrite old code regardless if reduce 
> stays.
> 

And from what I understand Python 2.x will still be maintained and 
supported.  It will probably be more reliable than Python 3000 for a 
version or two as well.

It's going to take time for all the batteries included to catch up, so 
it won't be like we have to all of a sudden rewrite all the old Python 
programs over night.  We'll probably have a couple of years to do that 
to our own programs if we decide it's worth while.  And if not, Python 
2.x will still work.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: map/filter/reduce/lambda opinions and background unscientific mini-survey

2005-07-06 Thread Ron Adam
Tom Anderson wrote:

>> del -> delete
> 
> 
> How about just getting rid of del? Removal from collections could be 
> done with a method call, and i'm not convinced that deleting variables 
> is something we really need to be able to do (most other languages 
> manage without it).

Since this is a Python 3k item...  What would be the consequence of 
making None the default value of an undefined name?  And then assigning 
a name to None as a way to delete it?

Some benefits
=

*No more NameError exceptions!

 print value
 >> None

 value = 25
 print value
 >> 25

 value = None#same as 'del value'


*No initialization needed for a while loop!

 while not something:
     if <condition>:
         something = True


*Test if name exists without using a try-except!

 if something == None:
 something = value


*And of course one less keyword!


Any drawbacks?


Cheers,
Ron

PS...  not much sleep last night, so this may not be well thought out.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: map/filter/reduce/lambda opinions and background unscientific mini-survey

2005-07-06 Thread Ron Adam
Dan Sommers wrote:

> On Wed, 06 Jul 2005 14:33:47 GMT,
> Ron Adam <[EMAIL PROTECTED]> wrote:
> 
> 
>>Since this is a Python 3k item...  What would be the consequence of
>>making None the default value of an undefined name?  And then assigning
>>a name to None as a way to delete it?
> 
> 
> [ ... ]
> 
> 
>>Any drawbacks?
> 
> 
> Lots more hard-to-find errors from code like this:
> 
> filehandle = open( 'somefile' )
> do_something_with_an_open_file( file_handle )
> filehandle.close( )
> 
> Regards,
> Dan


If do_something_with_an_open_file() is not defined, then you will get:

 TypeError: 'NoneType' object is not callable


If "file_handle" (vs "filehandle") is None, then you will still get an 
error as soon as you try to use the invalid file handle.

 AttributeError: 'NoneType' object has no attribute 'read'


If the error was filehundle.close()  you will get:

 AttributeError: 'NoneType' object has no attribute 'close'


I don't think any of those would be hard to find.
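For example, this is runnable today, since any name bound to None already behaves this way (filehandle is just an illustrative name):

```python
filehandle = None  # stands in for a handle that was never opened

try:
    filehandle.read()
except AttributeError as e:
    message = str(e)

# the failure is immediate and names the missing attribute
assert "'NoneType' object has no attribute 'read'" in message
```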


Cheers,
Ron



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Use cases for del

2005-07-06 Thread Ron Adam
Steven D'Aprano wrote:
> On Wed, 06 Jul 2005 10:00:02 -0400, Jp Calderone wrote:
> 
> 
>>On Wed, 06 Jul 2005 09:45:56 -0400, Peter Hansen <[EMAIL PROTECTED]> wrote:
>>
>>>Tom Anderson wrote:
>>>
>>>>How about just getting rid of del? Removal from collections could be
>>>>done with a method call, and i'm not convinced that deleting variables
>>>>is something we really need to be able to do (most other languages
>>>>manage without it).
>>>
>>>Arguing the case for del: how would I, in doing automated testing,
>>>ensure that I've returned everything to a "clean" starting point in all
>>>cases if I can't delete variables?  Sometimes a global is the simplest
>>>way to do something... how do I delete a global if not with "del"?
>>>
>>
>>Unless you are actually relying on the global name not being defined, 
>>"someGlobal = None" would seem to do just fine.
>>
>>Relying on the global name not being defined seems like an edge case.
> 
> 
> Er, there is a lot of difference between a name not existing and it being
> set to None.

Yes, they are not currently the same thing.


But what if assigning a name to None actually unbound the name?


And accessing an undefined name returned None instead of a NameError?


Using an undefined name in most places would still generate some sort of 
None-type error.

I think the biggest drawback to this second suggestion is that we would 
have to test for None in places where it would matter, but that's 
probably a good thing and enables checking if a variable exists without 
using a try-except.


$ cat mymodule2.py

# define some temporary names
a, b, c, d, e, f = 1, 2, 3, 4, 5, 6
# do some work
result = a+b+c+d*e**f
# delete the temp variables
a = b = c = d = e = f = None# possibly unbind names

This would work if None unbound names.


> It is bad enough that from module import * can over-write your variables
> with the modules' variables, but for it to do so with DELETED variables is
> unforgivable.

Yes, I agree using None as an alternative to delete currently is 
unacceptable.

Cheers,
Ron
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: map/filter/reduce/lambda opinions and background unscientific mini-survey

2005-07-06 Thread Ron Adam
Devan L wrote:

>># from a custom numeric class
>># converts a tuple of digits into a number
>>mantissa = sign * reduce(lambda a, b: 10 * a + b, mantissa)
> 
> 
> I'll admit I can't figure out a way to replace reduce without writing
> some ugly code here, but I doubt these sorts of things appear often.

It's not ugly or difficult to define a named function.

def digits_to_value(seq):
 v = 0
 for d in seq:
 v = v*10+d
 return v


Then where you need it.

mantissa = sign * digits_to_value(mantissa)


One of the motivations is the reduce-lambda expressions are a lot harder 
to read than a properly named function.

And a function will often work faster than the reduce-lambda version as 
well.
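A sketch comparing the two, in Python 3 syntax where reduce lives in functools (the digit tuple is a made-up example):

```python
from functools import reduce  # a builtin in the Python 2.x of this thread

def digits_to_value(seq):
    # accumulate digits left to right: 4, 0, 5 -> 405
    v = 0
    for d in seq:
        v = v * 10 + d
    return v

digits = (4, 0, 5)
assert digits_to_value(digits) == 405
assert reduce(lambda a, b: 10 * a + b, digits) == 405  # same result, harder to read
```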

Cheers,
Ron
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: map/filter/reduce/lambda opinions and background unscientific mini-survey

2005-07-06 Thread Ron Adam
Dan Sommers wrote:

>> AttributeError: 'NoneType' object has no attribute 'read'
> 
> 
> This is the one of which I was thinking.  So you see this error at the
> end of a (long) traceback, and try to figure out where along the (long)
> line of function calls I typed the wrong name.  Currently, the very end
> of the traceback points you right at the bad code and says "NameError:
> name 'filehandle' is not defined," which tells me (very nearly) exactly
> what I did wrong.

The actual error could be improved a bit to refer to the name and the 
line the error is in, which would help some.

AttributeError: 'NoneType' object "file_handle" has no attribute 
'read'


> I guess it depends on how long your traceback is and how big those
> functions are.  Also, from the Zen:
> 
> Explicit is better than implicit.
> 
> although from previous threads, we know that every pythonista has his or
> her own definitions of "explicit" and "implicit."

True, but this isn't any different from any other type error or value 
error, and considerably easier to find than off-by-one errors.

There would be more restrictions on None than there are now so it 
wouldn't be as bad as it seems.  For example passing a None in a 
function would give an error as the name would be deleted before the 
function actually gets it.

So doing this would give an error for functions that require an argument.

 def foo(x):
 return x

 a = None
 b = foo(a)#  error because a disappears before foo gets it

 >>TypeError: foo() takes exactly 1 argument (0 given)


So they wouldn't propagate like you would expect.

Hmmm interesting that would mean... lets see.


1. var = None# removes ref var,  this is ok

2. None = var# give an error of course

3. var = undefined_var   # same as "var = None",  that's a problem!

Ok... this must give an error because it would delete var silently! 
Definitely not good.  So going on, on that basis.

4. undefined == None # Special case, evaluates to True.

5.  def foo():return None   # same as return without args

6.  x = foo()#  Would give an error if foo returns None

This might be an improvement over current behavior.  Breaks a lot of 
current code though.

7.  if undefined:# error

8.  if undefined==None:  #  Special case -> True

Good for checking if vars exist.

9.  if undefined==False:  #  Error, None!=False (or True)

10. while undefined:  # error

11. while undefined==None:

Possible loop till defined behavior.


Ok... and undefined var returning None is a bad idea, but using None to 
del names could still work.  And (undefined==None) could be a special 
case for checking if a variable is defined.  Otherwise using an 
undefined name should give an error as it currently does.

Cheers,
Ron


> Regards,
> Dan

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Avoiding NameError by defaulting to None

2005-07-06 Thread Ron Adam
Scott David Daniels wrote:

> Ron Adam wrote:
> 
>> Dan Sommers wrote:
>>
>>> Lots more hard-to-find errors from code like this:
>>> filehandle = open( 'somefile' )
>>> do_something_with_an_open_file( file_handle )
>>> filehandle.close( )
>>
>>
>> If do_something_with_an_open_file() is not defined. Then you will get:
>> TypeError: 'NoneType' object is not callable
>>
>>
>> If "file_handle" (vs "filehandle") is None.  Then you will still get 
>> an error as soon as you tried to use the invalid file handle.
> 
> Ah, but where you use it can now far from the error in the source.  The
> None could get packed up in a tuple, stored in a dictionary, all manner
> of strange things before discovering it was an unavailable value.  I
> would love the time back that I spent chasing bugs when Smalltalk told
> me (I forget the exact phrasing) "nil does not respond to message abc."
> My first reaction was always, "of course it doesn't, this message is
> useless."  You really want the error to happen as close to the problem
> as possible.
> 
> --Scott David Daniels
> [EMAIL PROTECTED]


Yep, I concede.  The problem is if undefined names return None, and None 
deletes a name,  then assigning a undefined name to an existing name, 
will delete it silently.

Definitely Not good!

So it should give an error as it currently does if an undefined name is 
used on the right side of an =.

It could still work otherwise to unbind names, and (undefined_name == 
None) could still be a valid way to check if a name is defined without 
using a try-except.

But it would definitely be a Python 3000 change as all the functions 
that return None would then cause errors.

Cheers,
Ron

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Use cases for del

2005-07-06 Thread Ron Adam
Ron Adam wrote:

> And accessing an undefined name returned None instead of a NameError?

I retract this.  ;-)

It's not a good idea.  But assigning to None as a way to unbind a name 
may still be an option.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: map/filter/reduce/lambda opinions and background unscientific mini-survey

2005-07-06 Thread Ron Adam
Stian Søiland wrote:

> On 2005-07-06 16:33:47, Ron Adam wrote:
> 
> 
>>*No more NamesError exceptions!
>> print value
>> >> None
> 
> 
> So you could do lot's of funny things like:
> 
> def my_fun(extra_args=None):
> if not extraargs:
> print "Behave normally"
> extra_args = 1337
> 
> if extraargs:
> asdkjaskdj
> ..
> if extra_args:
> kajsdkasjd

Yes, returning None from an undefined name is DOA.

In the above case you would get an error by the way.

"if extraargs:"  would evaluate to "if None:", which would evaluate to 
"if:" which would give you an error.


>>*No initialization needed for a while loop!
>>
>> while not something:
>> if :
>> something = True
> 
> 
> This is the only "good" case I could find, but opening for a lots of
> errors when you get used to that kind of coding:

It would need to be.. while not (something==None):  and the compiler 
would need to handle it as a special case.  But this one could still 
work without allowing something=undefined to be valid.

> while not finished:
> foo()
> finished = calculate_something()
> 
> (..)
> (..)  # Added another loop
> while not finished:
> bar()
> finished = other_calculation()
> 
> Guess the amount of fun trying to find out the different errors that
> could occur when bar() does not run as it should because the previous
> "finished" variable changes the logic.

It's not really different from any other value test we currently use.

 notfinished = True
 while notfinished:
 notfinished = (condition)

 # Need to set notfinished back to True here.
 while notfinished:
 


>>*Test if name exists without using a try-except!
>> if something == None:
>> something = value
> 
> Now this is a question from newcomers on #python each day.. "How do I
> check if a variable is set?".
> 
> Why do you want to check if a variable is set at all? If you have so
> many places the variable could or could not be set, your program design
> is basically flawed and must be refactored.

There are a few places in the Python library that do exactly that.

try:
value
except:
value = something

I admit it's something that should be avoided if possible, because if 
there's doubt that a name exists, then there would also be doubt 
concerning where it came from and whether or not its value/object is valid.

Anyway, it was an interesting but flawed idea; I should have thought more 
about it before posting it.

Cheers,
Ron







-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Use cases for del

2005-07-06 Thread Ron Adam
Reinhold Birkenfeld wrote:

> Ron Adam wrote:
> 
>>Ron Adam wrote:
>>
>>
>>>And accessing an undefined name returned None instead of a NameError?
>>
>>I retract this.  ;-)
>>
>>It's not a good idea.  But assigning to None as a way to unbind a name 
>>may still be an option.
> 
> IMO, it isn't. This would completely preclude the usage of None as a value.
> None is mostly used as a "null value". The most prominent example is default
> function arguments:
> 
> def foo(bar, baz=None):
> 
> With None unbinding the name, what would you suggest should happen? baz being
> undefined in the function scope?

It would be a way to set an argument as being optional without actually 
assigning a value to it.  The conflict would be if there were a global 
with the name baz as well.  Probably it would be better to use a valid 
null value for whatever baz is for.  If it's a string then "", if it's a 
number then 0, if it's a list then [], etc...

If it's an object... I suppose that would be None...  Oh well. ;-)

> Or, what should happen for
> 
> somedict[1] = None

Remove the key of course.

   var = somedict[1]  Would then give an error.

and

   (somedict[1] == None)  Would evaluate to True.


> ? Also, the concept of _assigning_ something to a name to actually _unassign_
> the name is completely wrong.

That's no more wrong than ([] == [None]) --> False.

Having a None value remove a name binding could make the above condition 
True.

> Of course, this is a possible way to unassign names if (and only if)
> (1) there is a real "undefined" value (not None)
> (2) unbound names return the undefined value
> 
> Look at Perl. Do we want to be like that? ;)
> 
> Reinhold

The problem I had with Perl is that the number of symbols and ways of 
combining them can be quite confusing when you don't use it often.  So 
it's either a language you use all the time... or avoid.

Anyway, there seems to be enough subtle side effects both small and 
large to discount the idea.  While implementing it may be possible, it 
would require a number of other changes as well as a different way of 
thinking about it, so I expect it would be disliked quite a bit.

Cheers,
Ron
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: map/filter/reduce/lambda opinions and background unscientific mini-survey

2005-07-06 Thread Ron Adam
Benji York wrote:
> Ron Adam wrote:
> 
>> "if extraargs:"  would evaluate to "if None:", which would evaluate to 
>> "if:" which would give you an error.
> 
> 
> In what way is "if None:" equivalent to "if:"?
> -- 
> Benji York

It's not now.. but if None were to really represent the concept None, 
as in not bound to anything, it would evaluate to nothing.

Given the statement:

a = None

And the following are all true:

  a == None
(a) == (None)
(a) == ()
(None) == ()

Then this "conceptual" comparison should also be true:

if (None):  ==  if ():
if (): == if:


Comparing if's like that wouldn't be a valid code of course, but it 
demonstrates the consistency in which the comparison is made.

I think. ;-)

Of course this is all hypothetical anyway, so it could be whatever we 
decide makes the most sense, including not changing anything.

Cheers,
Ron
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Use cases for del

2005-07-06 Thread Ron Adam
Grant Edwards wrote:

> On 2005-07-06, Ron Adam <[EMAIL PROTECTED]> wrote:
> 
> 
>>It would be a way to set an argument as being optional without actually 
>>assigning a value to it.  The conflict would be if there where a global 
>>with the name baz as well.  Probably it would be better to use a valid 
>>null value for what ever baz if for.  If it's a string then "", if its a 
>>number then 0, if it's a list then [], etc...
> 
> 
> Except those aren't "null values" for those types.  0 is a
> perfectly good integer value, and I use it quite often. There's
> a big difference between an "invalid integer value" and an
> integer with value 0.

Why would you want to use None as an integer value?

If a value isn't established yet, then do you need the name defined? 
Wouldn't it be better to wait until you need the name then give it a value?



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: flatten(), [was Re: map/filter/reduce/lambda opinions ...]

2005-07-06 Thread Ron Adam
Stian Søiland wrote:

> Or what about a recursive generator?
> 
> a = [1,2,[[3,4],5,6],7,8,[9],[],]
> 
> def flatten(item):
> try:
> iterable = iter(item)
> except TypeError:
> yield item  # inner/final clause
> else:
> for elem in iterable:
> # yield_return flatten(elem)
> for x in flatten(elem):
> yield x
> 
> print list(flatten(a))


Ok, let's see...  I found a few problems with the testing (and corrected 
them), so the scores have changed.  My flatten-in-place routines were 
cheating because the lists weren't reset between runs, so after the first 
time through they had the advantage of flattening an already flattened 
list.  And Tom's recursive no-copy had a bug which kept a reference to one 
of its arguments, so its output was doubling the list.  And the hasattr 
function was slowing everyone down, so I inlined it for everyone who 
used it.

Using a 1000-item list and starting with a flat list, then increasing the 
depth (unflattening) to shallow, medium, and deep (but not so deep as to 
cause recursion errors).

And the winners are...



Python 2.3.5 (#62, Feb  8 2005, 16:23:02)
[MSC v.1200 32 bit (Intel)] on win32
IDLE 1.0.5

 >>>

Size: 1000   Depth: 0
georges_recursive_flatten:   0.00212513042834
rons_nonrecursive_flatten:   0.00394323859609
toms_recursive_zerocopy_flatten: 0.00254557492644
toms_iterative_zerocopy_flatten: 0.0024332701505
devans_smallest_recursive_flatten:   0.011406198274
rons_nonrecursive_inplace_flatten:   0.00149963193644
stians_flatten_generator:0.00798257879114

Size: 1000   Depth: 25
georges_recursive_flatten:   0.0639824335217
rons_nonrecursive_flatten:   0.0853463219487
toms_recursive_zerocopy_flatten: 0.0471856059917
toms_iterative_zerocopy_flatten: 0.188437915992
devans_smallest_recursive_flatten:   0.0844073757976
rons_nonrecursive_inplace_flatten:   0.0847048996452
stians_flatten_generator:0.0495694285169

Size: 1000   Depth: 50
georges_recursive_flatten:   0.134300309118
rons_nonrecursive_flatten:   0.183646245542
toms_recursive_zerocopy_flatten: 0.0886252303017
toms_iterative_zerocopy_flatten: 0.371141304272
devans_smallest_recursive_flatten:   0.185467985456
rons_nonrecursive_inplace_flatten:   0.188668392212
stians_flatten_generator:0.090114246364

Size: 1000   Depth: 100
georges_recursive_flatten:   0.248168133101
rons_nonrecursive_flatten:   0.380992276951
toms_recursive_zerocopy_flatten: 0.177362486014
toms_iterative_zerocopy_flatten: 0.741958265645
devans_smallest_recursive_flatten:   0.306604051632
rons_nonrecursive_inplace_flatten:   0.393641091256
stians_flatten_generator:0.177185368532
 >>>


Stian's flatten generator is nearly tied with Tom's recursive zero-copy. 
My non-recursive in-place is faster for very shallow lists, but Tom's 
quickly overtakes it.  I was able to improve my non-recursive copy 
flatten a bit, but not enough to matter.  So Tom's recursive zero-copy is 
the overall winner, with Stian's flatten generator close behind and just 
barely winning out in the very deep category.  But they're all 
respectable times, so everyone wins. ;-)



And here's the source code.

Cheers,  :-)
Ron



# ---

import sys
import time

TIMERS = {"win32": time.clock}
timer = TIMERS.get(sys.platform, time.time)

def timeit(fn,*arg):
 t0 = timer()
 r = fn(*arg)
 t1 = timer()
 return float(t1-t0), r

# 

def georges_recursive_flatten(seq):
 return reduce(_accum, seq, [])

def _accum(a, item):
 if hasattr(item, "__iter__"):
 a.extend(georges_recursive_flatten(item))
 else:
 a.append(item)
 return a


def rons_nonrecursive_flatten(seq):
 a = []
 while seq:
 if hasattr(seq[0], "__iter__"):
 seq[0:1] = seq[0]
 else:
 a.append(seq.pop(0))
 return a


def toms_recursive_zerocopy_flatten(seq, a=None):
 # a mutable default (a=[]) kept a reference to the same list between calls
 if a==None:
 a = []
 if hasattr(seq,"__iter__"):
 for item in seq:
 toms_recursive_zerocopy_flatten(item, a)
 else:
 a.append(seq)
 return a


def toms_iterative_zerocopy_flatten(seq):
 stack = [None]
 cur = iter(seq)
 a = []
 while (cur != None):
 try:
 item = cur.next()
 if hasattr(item,"__iter__"):
 stack.append(cur)
 cur = iter(item)
 else:
 a.append(item)
 except StopIteration:
 cur = stack.pop()
 return a


def devans_smallest_recursive_flatten(seq):
 if hasattr(seq,"__iter__"):

Re: Use cases for del

2005-07-06 Thread Ron Adam
Grant Edwards wrote:

> On 2005-07-07, Ron Adam <[EMAIL PROTECTED]> wrote:
> 
>>Grant Edwards wrote:
>>
>>
>>>On 2005-07-06, Ron Adam <[EMAIL PROTECTED]> wrote:
>>>
>>>
>>>
>>>>It would be a way to set an argument as being optional without
>>>>actually assigning a value to it.  The conflict would be if
>>>>there where a global with the name baz as well.  Probably it
>>>>would be better to use a valid null value for what ever baz if
>>>>for.  If it's a string then "", if its a number then 0, if it's
>>>>a list then [], etc...
>>>
>>>Except those aren't "null values" for those types.  0 is a
>>>perfectly good integer value, and I use it quite often. There's
>>>a big difference between an "invalid integer value" and an
>>>integer with value 0.
>>
>>Why would you want to use None as an integer value?
> 
> 
> 1) So I know whether an parameter was passed in or not. Perhaps
>it's not considered good Pythonic style, but I like to use a
>single method for both get and set operations.  With no
>parameters, it's a get.  With a parameter, it's a set:
> 
>class demo:
>   def foo(v=None):
>   if v is not None:
>   self.v = v
>   return self.v   

You are really checking if v exists, so having it undefined in the 
namespace as the default is consistent with what you are doing.
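Grant's demo, written out runnable (his snippet omits self; the _MISSING sentinel is my addition, one conventional way to mean "no argument passed" while leaving None usable as a real value):

```python
_MISSING = object()  # unique sentinel; no caller can pass it by accident

class Demo:
    def __init__(self):
        self.v = None
    def foo(self, v=_MISSING):
        # a set if an argument was given, otherwise a get
        if v is not _MISSING:
            self.v = v
        return self.v

d = Demo()
d.foo(42)
assert d.foo() == 42
assert d.foo(None) is None  # None is now a legal value to set
```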

As I said above...

 >>>>It would be a way to set an argument as being optional without
 >>>>actually assigning a value to it.

So it would still work like you expect even though v is not bound to 
anything.  Like I said the bigger problem is that globals will be 
visible and that would create a conflict.  Setting a value to None in a 
function hides globals of the same name.  That wouldn't happen if None 
unbound names as del does.

So you would need to use something else for that purpose I suppose, but 
that was why None became a literal in the first place, so maybe it's 
taking a step backwards.


> 2) So I can use it as sort of a NaN equivalent.
> 
>if self.fd is None:
>   self.fd = os.open('foo.bar','w')
>
>if self.fd is not None:
>   os.close(self.fd)
>   self.fd = None  

It would still work as you expect.  A while back I suggested an 'also' 
for if that would do exactly that with only one test.  Nobody liked it.

  if self.fd is None:
 self.fd = os.open('foo.bar','w')
 # do something with self.fd

  also:
 os.close(self.fd)
 self.fd = None
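In current Python the proposed if/also flow can be rendered as a plain if/else on the same single test; the `Toggler` class and the temp path are my stand-ins for the sketch:

```python
import os
import tempfile

class Toggler:
    def __init__(self, path):
        self.path = path
        self.fd = None                     # None flags "not open"
    def toggle(self):
        if self.fd is None:                # the 'if' branch
            self.fd = os.open(self.path, os.O_WRONLY | os.O_CREAT)
        else:                              # what the proposed 'also' would run
            os.close(self.fd)
            self.fd = None

t = Toggler(os.path.join(tempfile.mkdtemp(), 'foo.bar'))
t.toggle()
assert t.fd is not None
t.toggle()
assert t.fd is None
```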


>>If a value isn't established yet, then do you need the name
>>defined?
>  
> I find it more obvious to set the name to None during the
> periods that it isn't valid than to delete it and check for a
> NameError when I want to know if the value is usable or not.

You would still be able to do that explicitly and it probably would be a 
good idea when you aren't sure if a name is left over from something else.

If a name is None, it means it's available and unassigned, so you don't 
have to check for a NameError.


>>Wouldn't it be better to wait until you need the name then
>>give it a value?
> 
> "Better" is a value judgement.  I prefer setting it None and
> than deleting it and then checking for existance.

They would be the same in this case.  Setting it to None would delete it 
also.  And checking it for None will let you know it's available or if 
it has a value...

You are right that it's a value judgment.  I agree.  BTW, I'm sort of 
just exploring this a bit, not trying to argue it's any better than what 
we currently do.  I think it could work either way, but it would mean 
doing things in a different way also, not necessarily a good idea.

Cheers,
Ron


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: map/filter/reduce/lambda opinions and background unscientific mini-survey

2005-07-06 Thread Ron Adam
Mike Meyer wrote:

> Ron Adam <[EMAIL PROTECTED]> writes:
> 
>>So doing this would give an error for functions that require an argument.
>>
>> def foo(x):
>> return x
>>
>> a = None
>> b = foo(a)#  error because a disappears before foo gets it.
> 
> 
> So how do I pass None to a function?

You wouldn't.  What would a function do with it anyway?  If you wanted 
to pass a neutral value of some sort, then you'd just have to pick 
something else and test for it.  Maybe we could use Nil as an 
alternative to None for that purpose?


>>6.  x = foo()#  Would give an error if foo returns None
> 
> Why? Shouldn't it delete x?

That was my first thought as well.

It would cause all sorts of problems.  One way to avoid those is to 
insist that the only way to assign (delete) a name to None is by 
literally using "None" and only "None" on the right side of the = sign.

Any undefined name on the right side would give an error except for None 
itself.

>>This might be an improvement over current behavior.  Breaks a lot of
>>current code though.
>  
> I don't think so. I've programmed in langauges that gave undefined
> variables a value rather than an exception. It almost inevitabley led
> to hard-to-find bugs.

That is why the above should give an error I believe.  ;)


> FORTRAN used to give all variables a type, so that a typo in a
> variable name could well lead to a valid expression. The technic for
> disabling this was baroque ("IMPLICIT BOOLEAN*1 A-Z"), but so common
> they invented a syntax for it in later versions of FORTRAN ("IMPLICIT
> NONE").

It's been so long since I did anything in Fortran that I actually don't 
recall any of it.  :)  '83-'84

I went from that to pascal, which I thought was a big improvement at the 
time.

Cheers,
Ron

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: map/filter/reduce/lambda opinions and background unscientific mini-survey

2005-07-07 Thread Ron Adam
Reinhold Birkenfeld wrote:
> Ron Adam wrote:
> 
> 
>>Given the statement:
>>
>>a = None
>>
>>And the following are all true:
>>
>>  a == None
> 
> 
> Okay.
> 
> 
>>(a) == (None)
> 
> 
> Okay.
> 
> 
>>(a) == ()
> 
> 
> Whoops! a (which is None) is equal to the empty tuple (which is not None)?

It's not an empty tuple, it's an empty parenthesis.  Using tuples it 
would be.

(a,) == (,)

which would be the same as:

(,) == (,)

>>(None) == ()
>>
>>Then this "conceptual" comparison should also be true:
>>
>>if (None):  ==  if ():
>>if (): == if:
> 
> 
> I can't really see any coherent concept here.
> 
> Reinhold

It would work out to that.

if: == if:

Does that help?

Cheers,
Ron



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why anonymity? [was Re: map/filter/reduce/lambda opinions and background unscientific mini-survey]

2005-07-07 Thread Ron Adam
Steven D'Aprano wrote:

> On Thu, 07 Jul 2005 09:36:24 +, Duncan Booth wrote:
> 
> 
>>Steven D'Aprano wrote:
>>
>>>This is something I've never understood. Why is it bad 
>>>form to assign an "anonymous function" (an object) to a 
>>>name?
>>
>>Because it obfuscates your code for no benefit. You should avoid making it 
>>hard for others to read your code (and 'others' includes yourself in the 
>>future).

Use a descriptive name like this?

def get_the_cube_of_x_and_then_subtract_five_multiplied_by_x(x):
  x**3 - 5*x

I think I like the lambda version here.  ;-)

It would probably have a name which refers to the context in which it's 
used, but sometimes the math expression it self is also the most readable.



> Put it this way: whenever I see a two-line def as above, I can't help
> feeling that it is a waste of a def. ("Somebody went to all the trouble
> to define a function for *that*?") Yet I would never think the same about
> a lambda -- lambdas just feel like they should be light-weight. 

In the case of an interface module you might have a lot of two-line 
def's that simply change the name and argument format so several modules 
can use it and have a standard, consistent, or simplified interface.

The lambda may be perfectly fine for that.  But why not use def?

func_x = lambda x: (someother_func_x(x,'value'))

def func_x(x): return someother_func_x(x,'value')

They're both nearly identical, but the def is understandable to 
beginners and advanced Python programmers alike.
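The near-equivalence is easy to check; one practical difference is that def records a real __name__, which shows up in tracebacks (someother_func_x is a stand-in defined here for the sketch):

```python
def someother_func_x(x, flag):
    return (x, flag)

func_l = lambda x: someother_func_x(x, 'value')
def func_d(x): return someother_func_x(x, 'value')

# Identical behavior...
assert func_l(1) == func_d(1) == (1, 'value')
# ...but only def gives the function a useful name.
assert func_l.__name__ == '<lambda>'
assert func_d.__name__ == 'func_d'
```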


Cheers,
Ron


> Am I just weird?

Aren't we all?   ;-)

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Use cases for del

2005-07-07 Thread Ron Adam
Grant Edwards wrote:

> On 2005-07-07, Ron Adam <[EMAIL PROTECTED]> wrote:

>>>>>>It would be a way to set an argument as being optional without
>>>>>>actually assigning a value to it.
>>
>>So it would still work like you expect even though v is not
>>bound to anything.  Like I said the bigger problem is that
>>globals will be visible and that would create a conflict.
>>Setting a value to None in a function hides globals of the
>>same name.  That wouldn't happen if None unbound names as del
>>does.
> 
> Good point.  I hadn't thought of that.

The easiest way to fix that is to create a Nil or Null object as a 
replacement for the current None object.


>>>I find it more obvious to set the name to None during the
>>>periods that it isn't valid than to delete it and check for a
>>>NameError when I want to know if the value is usable or not.
>>
>>You would still be able to do that explicitly and it probably would be a 
>>good idea when you aren't sure if a name is left over from something else.
>>
>>If a name is None, it means it's available and unassigned, so
>>you don't have to check for a NameError.
> 
> 
> How would you spell "if a name is None"?

How about:

if name is None: name = something

Making None the same as undefined wouldn't change this.  I think the 
parser would need to be a little smarter where None is concerned in 
order to be able to handle it in bool expressions,  and catch improper 
name = undefined assignments.

But it might not change things as much as you think.
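For the narrower problem of telling "no argument passed" apart from "None passed", a private sentinel object works in today's Python without making None magical (my illustration, not from the thread):

```python
_MISSING = object()          # unique object; no caller can pass it by accident

def fetch(mapping, key, default=_MISSING):
    if default is _MISSING:
        return mapping[key]  # no default supplied: raise KeyError as usual
    return mapping.get(key, default)

assert fetch({'a': 1}, 'a') == 1
assert fetch({}, 'a', None) is None   # None stays a legitimate default value
```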


> Personally, I think the spellings
> 
>del name
>if 'name' in locals()
> 
> is much more explicit/obvious than
> 
>name = None
>name is None   

This would be
  name = None # remove it from locals()
  if name is None: dosomething# if it's not in locals()

No need to check locals or globals.  That's one of the plus's I think.


Since we can already assign a value to any name without first 
initializing it,  this just allows us to use it in a some expressions 
without also first initializing it.  It would still give an error if we 
tried to use it in "any" operation.  Just like the present None.


> I expect the "=" operator to bind a name to a value.  Having it
> do something completely different some of the time seems a bit
> unpythonic.

But you are assigning it a value. Instead of having a Name assigned to a 
None object, the name is just removed from name space to do the same 
thing and you save some memory in the process.



How about keeping None as it is as a place holder object, and having a 
keyword called "undefined" instead.

x = 3
x = undefined   # delete x from name space
if x is not undefined: print x

if an undefined name is used with any operator, it could give a 
'UndefinedName' error.

This would be more compatible with the current python language.

Cheers,
Ron

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Use cases for del

2005-07-07 Thread Ron Adam
Steven D'Aprano wrote:

> Ron Adam wrote:
> 
>> Why would you want to use None as an integer value?
>>
>> If a value isn't established yet, then do you need the name defined? 
>> Wouldn't it be better to wait until you need the name then give it a 
>> value?
> 
> 
> Er, maybe I'm misunderstanding something here, but surely the most 
> obvious case is for default and special function arguments:
> 
> def count_records(record_obj, start=0, end=None):
> if end == None:
> end = len(record_obj)
> if start == None:  # this is not the default!
> # start at the current position
> start = record_obj.current
> n = 0
> for rec in record_obj.data[start:end]:
> if not rec.isblank():
> n += 1
> return n
> 
> which you call with:
> 
> # current position to end
> count_records(myRecords, None)
> # start to end
> count_records(myRecords)
> # start to current position
> count_records(myRecords, end=myRecords.current)


You have several possible outcomes:
count all
count range
count using current index
count range from beginning to current
count range from current to end


The most consistent way to do this would be:


def count_records(record_obj, start=0, end=len(record_obj)):
 n = 0
 for rec in record_obj.data[start:end]:
 if not rec.isblank():
 n += 1
 return n


# current position to end
count_records(myRecords, myRecords.current)

# start to end
count_records(myRecords)

# start to current position
count_records(myRecords, end=myRecords.current)


Cheers,
Ron



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: map/filter/reduce/lambda opinions and background unscientific mini-survey

2005-07-07 Thread Ron Adam
Christopher Subich wrote:

> As others have mentioned, this looks too much like a list comprehension 
> to be elegant, which also rules out () and {}... but I really do like 
> the infix syntax.

Why would it rule out ()?

You need to put a lambda express in ()'s anyways if you want to use it 
right away.

  print (lambda x,y:x+y)(1,2)

If you don't use the ()'s it reads the y(1,2) as part of the lambda 
expression, might as well require the ()'s to start with rather than 
leave it open for a possible error.


You could even say () is to function as [] is to list.

a function :  name(args)  ->  returns a value

a list :  name[index] ->  returns a value



My choice:

 name = (let x,y return x+y)   # easy for beginners to understand
 value = name(a,b)

 value = (let x,y return x+y)(a,b)



I think the association of (lambda) to [list_comp] is a nice 
distinction.  Maybe a {dictionary_comp} would make it a complete set. ;-)

Cheers,
Ron













-- 
http://mail.python.org/mailman/listinfo/python-list


Re: map/filter/reduce/lambda opinions and background unscientific mini-survey

2005-07-07 Thread Ron Adam
Erik Max Francis wrote:

> Ron Adam wrote:
> 
>> It's not an empty tuple, it's an empty parenthesis.  Using tuples it 
>> would be.
>>
>> (a,) == (,)
>>
>> which would be the same as:
>>
>> (,) == (,)
> 
> 
>  >>> ()
> ()
>  >>> a = ()
>  >>> type(a)
> <type 'tuple'>
>  >>> (,)
>   File "<stdin>", line 1
> (,)
>  ^
> SyntaxError: invalid syntax
> 
> You've wandered way off into the woods now.

Yes,  ummm seems soo...  err.

This is one of those areas where Python isn't quite consistent, for 
practical reasons.  I don't create empty tuples that way very often, but 
[] is to () is to {} is pretty obvious, so I don't really have a good excuse.

 >>> (1)
1
 >>> ()
()

 >>> (1)
1
 >>> ((()))
()

Well, in my previous explanation I *meant* it to be an empty parenthesis.

Does that help?

Cheers,
Ron


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: map/filter/reduce/lambda opinions and background unscientific mini-survey

2005-07-07 Thread Ron Adam
Erik Max Francis wrote:
> Ron Adam wrote:
> 
>> Well in my previous explanation I *mean* it to be empty parenthesis.
>>
>> Does that help?
> 
> 
> Maybe it might be beneficial to learn a little more of the language 
> before proposing such wide-reaching (and un-Pythonic) reforms?

Hi Erik,

Getting more sleep is the answer to not making those kinds of oversights 
in this case.

It really wasn't my proposal, but a suggestion someone else made. 
It seemed like an interesting idea and I wanted to see what kind of 
problems and benefits it would have.  Discussing an idea with others 
before it's fully thought out is a good way to explore its possibilities, 
even though it may mean appearing silly at times, which I don't mind. :)

In the previous posts I was attempting to show a possible pattern or 
logic which doesn't currently correspond to the languages syntax using 
parenthesis.

 >>> (None)
 >>>

That's as close to an empty parenthesis as Python gets.  I was really 
trying to explain an underlying concept, not show actual python code.

And the conclusion (opinion) I've come to is that such a change might be 
made to work, but it would be very confusing to most people who have 
gotten used to the current None usage.  And difficult to implement in a 
way that's consistent overall.

An alternative is to use a different word such as 'undefined'.  Then 
None can be used as it is currently, and undefined, can be used to test 
for in a comparison for undefined names.  Assigning a name to undefined 
could be used as an alternative to delete a name but so far I don't see 
an advantage to doing that way over using del.



if name is undefined: do something.

Instead of:

try:
   name
except:
   do something


And maybe:

 name = undefined

can be used in expressions where del name can't?

But so far this doesn't seem useful enough to propose and it would 
probably cause more problems (or confusion) than it solves.

Cheers,
Ron
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Use cases for del

2005-07-07 Thread Ron Adam
Steven D'Aprano wrote:

> Ron Adam wrote:

>> def count_records(record_obj, start=0, end=len(record_obj)):
> 
> 
> That would work really well, except that it doesn't work at all.

Yep, and I have to stop trying to post on too little sleep.


Ok, how about... ?


def count_records(record_obj, start=0, end='to-end'):
 if end == 'to-end':
 end = len(record_obj)
 n = 0
 for rec in record_obj.data[start:end]:
 if not rec.isblank():
 n += 1
 return n


This isn't really different from using None. While it's possible to 
avoid None, it's probably not worth the trouble.  I use it myself.
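For completeness, here is the conventional None-sentinel version, with minimal stub classes (Rec and Records are my stand-ins) so the sketch runs:

```python
class Rec:
    def __init__(self, blank):
        self._blank = blank
    def isblank(self):
        return self._blank

class Records:
    def __init__(self, data):
        self.data = data
    def __len__(self):
        return len(self.data)

def count_records(record_obj, start=0, end=None):
    if end is None:               # None means "default to the full length"
        end = len(record_obj)
    return sum(1 for rec in record_obj.data[start:end] if not rec.isblank())

recs = Records([Rec(False), Rec(True), Rec(False)])
assert count_records(recs) == 2        # two non-blank records
assert count_records(recs, end=1) == 1
```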


Here's something interesting:

import time

x = None
t = time.time()
for i in range(100):
 if x==None:
 pass
print 'None:',time.time()-t

x = 'to-end'
t = time.time()
for i in range(100):
 if x=='to-end':
 pass
print 'String:',time.time()-t

 >>>
None: 0.4673515
String: 0.36133514
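The timeit module is the usual tool for micro-benchmarks like this; it handles the loop and the clock, and sidesteps the noise of hand-rolled time.time() loops. Absolute numbers vary by machine, so no particular ordering is asserted here:

```python
import timeit

t_eq_none = timeit.timeit("x == None", setup="x = None", number=100000)
t_is_none = timeit.timeit("x is None", setup="x = None", number=100000)
t_eq_str  = timeit.timeit("x == 'to-end'", setup="x = 'to-end'", number=100000)

for t in (t_eq_none, t_is_none, t_eq_str):
    assert t > 0.0            # each returns total seconds for the run
```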


Of course the difference this would make on a single call is practically 
nil.

Anyway, time to call it a night so tomorrow I don't make anymore silly 
mistakes on comp.lang.python. :)

Cheers,
Ron

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Use cases for del

2005-07-08 Thread Ron Adam
George Sakkis wrote:

> 
> How about using the right way of comparing with None ?
> 
> x = None
> t = time.time()
> for i in range(100):
>   if x is None:
>   pass
> print 'is None:',time.time()-t
> 
> I get:
> 
> None: 0.54952316
> String: 0.498000144958
> is None: 0.45047684

Yep, that occurred to me after I posted.  :-)



>>Anyway, time to call it a night so tomorrow I don't make anymore silly
>>mistakes on comp.lang.python. :)
> 
> 
> That would be a great idea ;-) An even greater idea would be to give up
> on this "None should mean undefined" babble; it looks like a solution
> looking for a problem and it raises more complications than it actually
> solves (assuming it does actually solve anything, which is not quite
> obvious).

Already gave up on it as a serious idea quite a while ago and said so in 
several posts.

From two days ago... in this same thread... I said.

 >>Yes, I agree using None as an alternative to delete currently is 
 >>unacceptable.

Cheers,
Ron

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Use cases for del

2005-07-08 Thread Ron Adam
George Sakkis wrote:

> I get:
> 
> None: 0.54952316
> String: 0.498000144958
> is None: 0.45047684

What do yo get for "name is 'string'" expressions?

Or is that a wrong way too?


On my system testing "if string is string" is slightly faster than "if 
True/ if False" expressions.

But the actual difference is so small as to be negligible.  I'm just kind 
of curious why an "is object" test isn't just as fast as an "is string" test.

Cheers,
Ron

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: removing list comprehensions in Python 3.0

2005-07-08 Thread Ron Adam
Kay Schluehr wrote:
> 
> Leif K-Brooks schrieb:
> 
>>Kay Schluehr wrote:
>>
>>>Well, I want to offer a more radical proposal: why not free squared
>>>braces from the burden of representing lists at all? It should be
>>>sufficient to write
>>>
>>>
>>>>>>list()
>>>
>>>list()
>>
>>So then what would the expression list('foo') mean? Would it be
>>equivalent to ['foo'] (if so, how would you convert a string or other
>>iterable to a list under Py3k?), or would it be equivalent to ['f', 'o',
>>'o'] as it is in now (and is so, what gives?)?
> 
> 
> Spiltting a string and putting the characters into a list could be done
> in method application style:
> 
> 
>>>>"abc".tolist()
> 
> list('a','b','c')

"abc".splitchrs()

There's already a str.split() to create a list of words,
and a str.splitlines() to get a list of lines, so it would group related 
methods together.

I don't think adding string methods to lists is a good idea.

Cheers,
Ron



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: removing list comprehensions in Python 3.0

2005-07-08 Thread Ron Adam
Leif K-Brooks wrote:
> Kay Schluehr wrote:
> 
>>>>>list.from_str("abc")
>>
>>list("a", "b", "c" )
> 
> 
> 
> I assume we'll also have list.from_list, list.from_tuple,
> list.from_genexp, list.from_xrange, etc.?

List from list isn't needed, nor list from tuple. That's what the * is 
for.  And for that matter neither is the str.splitchar() either.


class mylist(list):
 def __init__(self,*args):
  self[:]=args[:]


mylist(*[1,2,3]) -> [1,2,3]

mylist(*(1,2,3)) -> [1,2,3]

mylist(*"abc") -> ['a','b','c']

mylist(1,2,3) -> [1,2,3]

mylist([1],[2])  -> [[1],[2]]

mylist('hello','world')  -> ['hello','world']



Works for me.  ;-)

I always thought list([1,2,3]) -> [1,2,3] was kind of silly.
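A quick check confirms the behavior listed above:

```python
class mylist(list):
    def __init__(self, *args):
        self[:] = args        # take the arguments themselves as elements

assert mylist(*[1, 2, 3]) == [1, 2, 3]
assert mylist(*(1, 2, 3)) == [1, 2, 3]
assert mylist(*"abc") == ['a', 'b', 'c']
assert mylist(1, 2, 3) == [1, 2, 3]
assert mylist([1], [2]) == [[1], [2]]
assert mylist('hello', 'world') == ['hello', 'world']
```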


Cheers,
Ron


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Use cases for del

2005-07-09 Thread Ron Adam
Scott David Daniels wrote:
> Ron Adam wrote:
> 
>> George Sakkis wrote:
>>
>>> I get:
>>>
>>> None: 0.54952316
>>> String: 0.498000144958
>>> is None: 0.45047684
>>
>>
>>
>> What do yo get for "name is 'string'" expressions?
> 
> 
> >>> 'abc' is 'abcd'[:3]
> False

Well of course it will be false... you're testing two different strings! 
And the resulting slice creates a third.

Try:

ABC = 'abc'

value = ABC
if value is ABC:   # Test if it is the same object
pass

In the previous discussion I was comparing using a string as an 
alternative to using None as a "flag" object.  Not as a value to be 
calculated.

And just to be clear, I'm not disagreeing with you.

Yes, you can have problems with string comparisons if you create a copy 
instead of pointing a name to the object like you did above.  Something 
to be aware of.

To avoid that you either need to define the flag string as a global name 
or use it strictly in the local scope it's defined in.  Python will also 
sometimes reuse strings as an optimization instead of creating a second 
string if they are equal.  Something else to be aware of.
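The fragility is easy to demonstrate in CPython: an equal string built at run time is a distinct object, so an identity test on it fails even though equality holds:

```python
FLAG = 'to-end'
alias = FLAG                      # another name for the same object
built = ''.join(['to-', 'end'])   # equal value, constructed at run time

assert alias is FLAG              # identity works for the shared object
assert built == FLAG              # equal by value...
assert built is not FLAG          # ...but not the same object (CPython detail)
```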

So I'm not suggesting it is a good idea to use strings in place of None. 
   But I still wonder why bool and other object comparisons are slightly 
slower than string comparisons. (?)

Cheers,
Ron


> You need to test for equality (==), not identity (is) when
> equal things may be distinct.  This is true for floats, strings,
> and most things which are not identity-based (None, basic classes).
> This is also true for longs and most ints (an optimization that makes
> "small" ints use a single identity can lead you to a mistaken belief
> that equal integers are always identical.
> 
> >>> (12345 + 45678) is (12345 + 45678)
> False
> 
> 'is' tests for identity match.  "a is b" is roughly equivalent to
> "id(a) == id(b)".  In fact an optimization inside string comparisons
> is the C equivalent of "return id(a) == id(b) or (len(a) == len(b)
> and <contents match>)".
> 
> --Scott David Daniels
> [EMAIL PROTECTED]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: decorators as generalized pre-binding hooks

2005-07-09 Thread Ron Adam
Bengt Richter wrote:
> ;-)
> We have

Have we?

Looks like not a lot of interested takers so far.

But I'll bite. ;-)




> So why not
> 
> @deco
> foo = lambda:pass
> equivalent to
> foo = deco(lambda:pass)
 >
> and from there,
> @deco
> <target> = <expr>
> being equivalent to
> <target> = deco(<expr>)
> 
> e.g.,
> @range_check(1,5)
> a = 42
> for
> a = range_check(1,5)(42)
> 
> or
> @default_value(42) 
> b = c.e['f']('g')
> for
> b = default_value(42)(c.e['f']('g'))
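Spelled as the plain calls the post says the sugar would expand to; range_check is defined here for the sketch (with the bounds widened so the example passes), and default_value is given as a variant that takes a callable, so the failing lookup can actually be intercepted:

```python
def range_check(lo, hi):
    def check(value):
        if not lo <= value <= hi:
            raise ValueError('%r not in [%r, %r]' % (value, lo, hi))
        return value
    return check

a = range_check(1, 50)(42)    # expansion of '@range_check(1,50)' over 'a = 42'
assert a == 42

def default_value(default):
    def guard(thunk):
        try:
            return thunk()    # evaluate the guarded expression
        except Exception:
            return default
    return guard

b = default_value(42)(lambda: {}['missing'])   # lookup fails, default used
assert b == 42
```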

So far they are fairly equivalent.  So there's not really any advantage 
over the equivalent inline function.  But I think I see what you are 
going towards.  Decorators currently must be used when a function is 
defined.  This option attempts to makes them more dynamic so that they 
can be used where and when they are needed.

How about if you make it optional too?

if keep_log:
@log_of_interesting_values
b = get_value(c,d):

Or...

@@keeplog log_of_interesting_values # if keeplog decorate.
b = get_value(c,d)

Just a thought.

> Hm, binding-intercept-decoration could be sugar for catching exceptions too,
> and passing them to the decorator, e.g., the above could be sugar for
> 
> try:
> b = default_value(42)(c.e['f']('g'))
> except Exception, e:
> b = default_value(__exception__=e) # so decorator can check
># and either return a value or just re-raise 
> with raise [Note 1]

I'm not sure I follow this one.. Well I do a little. Looks like it might 
be going the direction of with statements, but only applied to a single 
expression instead of a block or suite.


> This might be useful for plain old function decorators too, if you wanted the 
> decorator
> to define the policy for substituting something if e.g. a default argument 
> evaluation
> throws and exception. Thus
> 
> @deco
> def foo(x=a/b): pass # e.g., what if b==0?
> as
> try:
> def foo(x=a/b): pass # e.g., what if b==0?
> foo = deco(foo)
> except Exception, e:
> if not deco.func_code.co_flags&0x08: raise #avoid mysterious 
> unexpected keyword TypeError
> foo = deco(__exception__=e)

Wouldn't this one work now?

> [Note 1:]
> Interestingly raise doesn't seem to have to be in the same frame or 
> down-stack, so a decorator
> called as above could re-raise:
> 
>  >>> def deco(**kw):
>  ... print kw
>  ... raise
>  ...
>  >>> try: 1/0
>  ... except Exception, e: deco(__exception__=e)
>  ...
>  {'__exception__': <exceptions.ZeroDivisionError instance>}
>  Traceback (most recent call last):
>File "<stdin>", line 2, in ?
>File "<stdin>", line 1, in ?
>  ZeroDivisionError: integer division or modulo by zero

Interesting.

When it comes to decorators, and now the with statements, I can't help 
but feel that there's some sort of underlying concept that would work 
better.  It has to do with generalizing flow control in a dynamic way 
relative to an associated block.

One thought is to be able to use a place holder in an expression list to 
tell a statement when to do the following block of code.

I'll use 'do' here... since it's currently unused and use @@@ as the 
place holder.

(These are *NOT* thought out that far yet, I know...  just trying to 
show a direction that might be possible.)


do f=open(filename); @@@; f.close():   # do the block where @@@ is.
for line in f:
   print line,
print


And an alternate version similar to a "with" statement.

try f=open(filename); @@@; finally f.close():
for line in f:
   print line,
print

Maybe the exception could be held until after the try line is complete?
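One reading of the placeholder idea in today's terms is contextlib.contextmanager (part of the 2.5-era PEP 343 work this thread orbits): the yield marks exactly the point where the caller's block runs, much like the @@@ marker:

```python
from contextlib import contextmanager
import os
import tempfile

@contextmanager
def opened(filename):
    f = open(filename)
    try:
        yield f          # the caller's block executes here (the '@@@' point)
    finally:
        f.close()        # runs whether or not the block raised

path = os.path.join(tempfile.mkdtemp(), 'demo.txt')
with open(path, 'w') as g:
    g.write('line one\nline two\n')

with opened(path) as f:
    lines = list(f)

assert lines == ['line one\n', 'line two\n']
assert f.closed
```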


The place holder idea might be useful for decorating as well.  But I'm 
not sure how the function name and arguemtns would get passed to it.

do deco(@@@):
def foo():
pass

or maybe it would need to be...

do f=@@@, deco(f):
def foo()
pass

As I said, it still needs some thinking.. ;-)


Maybe leaving off the colon would use the following line without 
indenting as per your example?

do deco(@@@)
b = 42

It doesn't quite work this way I think. Possibly having a couple of 
different types of place holder symbols which alter the behavior might work?

do deco($$$)   # $$$ intercept name binding operation?
b = 42


Well, it may need to be a bit (or a lot) of changing.  But the idea of 
controlling flow with a place holder symbol might be a way to generalize 
some of the concepts that have been floating around into one tool.

I like the place holders because I think they make the code much more 
explicit and they are more flexible because you can put them where you 
need them.


> orthogonal-musing-ly ;-)

"Orthogonal is an unusual computer language in which your program flow 
can go sideways. In actuality in can go in just about any direction you 
could want."

http://www.muppetlabs.com/~breadbox/orth/


;-)

Cheers,
Ron


> Regards,
> Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Use cases for del

2005-07-10 Thread Ron Adam
Reinhold Birkenfeld wrote:
> Ron Adam wrote:
> 
> 
>>>>>> 'abc' is 'abcd'[:3]
>>>False
>>
>>Well of course it will be false... your testing two different strings! 
>>And the resulting slice creates a third.
>>
>>Try:
>>
>>ABC = 'abc'
>>
>>value = ABC
>>if value is ABC:   # Test if it is the same object
>>pass
> 
> 
> That's not going to buy you any time above the "is None", because identity-
> testing has nothing to do with the type of the object.

Ok, I retested it... Not sure why I was coming up with a small 
difference before.  It seemed strange to me at the time, which 
is part of why I asked.

My point was that using strings "could" work in the same context as None 
(if None acted as del) without a performance penalty.  Anyway, this is all 
moot as it's a solution to a hypothetical situation that is not a good 
idea.

> Additionally, using "is" with immutable objects is horrible.

Why is it "horrible"?  Oh nevermind.  ;-)


Cheers,
Ron


> Reinhold
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Should I use "if" or "try" (as a matter of speed)?

2005-07-10 Thread Ron Adam
Roy Smith wrote:

> Thomas Lotze <[EMAIL PROTECTED]> wrote:
> 
>>Basically, I agree with the "make it run, make it right, make it fast"
>>attitude. However, FWIW, I sometimes can't resist optimizing routines that
>>probably don't strictly need it. Not only does the resulting code run
>>faster, but it is usually also shorter and more readable and expressive.
> 
> 
> Optimize for readability and maintainability first.  Worry about speed 
> later.

Yes, and then...

If it's an application that is to be used on a lot of computers, some of 
them may be fairly old.  It might be worth slowing your computer down 
and then optimizing the parts that need it.

When it's run on faster computers, those optimizations would be a bonus.

Cheers,
Ron

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: decorators as generalized pre-binding hooks

2005-07-10 Thread Ron Adam
Bengt Richter wrote:
> On Sun, 10 Jul 2005 05:35:01 GMT, Ron Adam <[EMAIL PROTECTED]> wrote:

>>So far they are fairly equivalent.  So there's not really any advantage 
>>over the equivalent inline function.  But I think I see what you are 
>>going towards.  Decorators currently must be used when a function is 
>>defined.  This option attempts to makes them more dynamic so that they 
>>can be used where and when they are needed.
> 
> IMO part of the decorator benefit is clearer code, and also IMO the
> @range_check and @default_value decorators succeed in that. The code
> generated would presumably be the same, unless the exception capture
> discussed further down were implemented.

If you take the decorator at face value, it's clear (in a sort of 
"because I said so" way).  But if you look inside the decorator, it may be 
quite unclear.  I.e., it sort of sweeps the dirt under the rug (IMO). 
The thing is, defining a decorator can be fairly complex compared to a 
regular function, depending on what one is trying to do.


>>How about if you make it optional too?

>>@@keeplog log_of_interesting_values # if keeplog decorate.
>>b = get_value(c,d)
>>
>>Just a thought.
> 
> Yes, but that is too easy to do another way. Plus I want to reserve
> '@@' for an AST-time decoration idea I have ;-)

The @@ could be whatever, but a single @ could probably be used just as 
well.

How about any line that begins with an @ is preparsed as sugar.  And 
then create a simple sugar language to go along with it?

But that would be compile time macros wouldn't it. ;-)


>>When it comes to decorators, and now the with statements, I can't help 
>>but feel that there's some sort of underlying concept that would work 
>>better.  It has to do with generalizing flow control in a dynamic way 
>>relative to an associated block.
>>
>>One thought is to be able to use a place holder in an expression list to 
>>tell a statement when to do the following block of code.
> 
> it depends on scope and control aspects of what you mean by "block".

By block I meant the indented following suite.  No special scope rules 
that don't already currently exist in any 'if', 'while', or 'for' suite.


> But I doubt if we have much chance of introducing something is one
> more bf in the storm of "with" ideas that have already been
> discussed.

I'd like to think until 2.5 is released that there's still a chance that 
something better could come along.  But it would have to be pretty darn 
good I expect.


> They strike me as a kind of macro idea where the only substitution argument
> is the block suite that follows, which IMO is a severe limitation on both
> the macro idea and the use of blocks ;-)

I'm not sure whether it's a macro or not.  Maybe it's a flow-control 
parser statement?

  Does that sound any better than macro?  ;-)


>>I like the place holders because I think they make the code much more 
>>explicit and they are more flexible because you can put them where you 
>>need them.
> 
> Yes, but if you want to go that way, I'd want to have named place holders
> and be able to refer to arbitrary things that make sense in the context.

 From what I've seen so far, there's a lot of resistance to real 
run-time macros, so I don't expect them any time soon.

The mechanism I suggested doesn't store code or name it, so it's not a 
macro; it's closer to a 'while' that conditionally runs the body, except 
the condition is when instead of if.  It's a different concept 
that I think can complement the language without being too complex.


Named macros make it even more useful.

Here I used 'this' as the keyword to indicate when the suite is to be 
done.  So it's a do-this-suite statement.


do f = opening(filename); try this; finally f.close():
    suite
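
For comparison, here is a rough expansion of that line with today's 
syntax; the suite runs where 'this' appears.  ('example.txt' and the 
write are made-up details for the sketch:)

```python
# What "do f = opening(filename); try this; finally f.close():" would
# roughly expand to today -- the suite runs between try and finally.
f = open("example.txt", "w")    # "example.txt" is a made-up name
try:
    f.write("hello")            # the suite
finally:
    f.close()                   # always runs, even if the suite raises
```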


Now using "Sugar" language!;-)

 # Create sugar
@with_opened = "opening(%s); try this; finally f.close()"


do f = $with_opened%('filename'):    # $ indicates sugar
    suite



I used Python's % operator as it already exists and works fine in this 
situation.  Easy to implement as well.

Hmm.. not sure how to apply this to a decorator. Lets see... Working it 
out...

 # function to use
 def check_range(x):
     if x in range(10,25):
         return
     raise RangeError    # or whatever is suitable

 # Make the decorator with sugar
 @checkrange = "%s %s check_range(%s)"

Parses on spaces as default?

 $checkrange%   # Use the following line here
 x = 24 # since nothing given after the %


Which will

Re: extend for loop syntax with if expr like listcomp&genexp ?

2005-07-11 Thread Ron Adam
Bengt Richter wrote:
> E.g., so we could write
> 
> for x in seq if x is not None:
> print repr(x), "isn't None ;-)"
> 
> instead of
> 
> for x in (x for x in seq if x is not None):
> print repr(x), "isn't None ;-)"
> 
> just a thought.
> 
> Regards,
> Bengt Richter

Is it new idea month?  :)



That would seem to follow the pattern of combining sequential lines that 
end in ':'.


if pay<10 if hours>10 if stressed:
    sys.exit()

That would be the same as using ands.



And this gives us an if-try pattern with a shared else clause.

if trapped try:
    exit = find('door')
except:
    yell_for_help()
else:  # works for both if and try!  ;-D
    leave()


Which would be the same as:

if trapped:
    try:
        exit = find('door')
    except:
        yell_for_help()
    else:
        leave()
else:
    leave()


Interesting idea, but I think it might make reading other peoples code 
more difficult.


Cheers,
Ron
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Ordering Products

2005-07-17 Thread Ron Adam
Kay Schluehr wrote:
> Here might be an interesting puzzle for people who like sorting
> algorithms ( and no I'm not a student anymore and the problem is not a
> students 'homework' but a particular question associated with a
> computer algebra system in Python I'm currently developing in my
> sparetime ).



>>>>x = 7*a*b*a*9
>>>>x.factors.sort()
>>>>x
> 
> a*a*b*7*9
> 
> -> (a**2)*b*63
> 
> Now lets drop the assumption that a and b commute. More general: let be
> M a set of expressions and X a subset of M where each element of X
> commutes with each element of M: how can a product with factors in M be
> evaluated/simplified under the condition of additional information X?
> 
> It would be interesting to examine some sorting algorithms on factor
> lists with constrained item transpositions. Any suggestions?
> 
> Regards,
> Kay

Looks interesting Kay.

I think that while the built-in sort works as a convenience, you will 
need to write your own more specialized methods: both an ordering 
(parser-sort) method and a simplify method, calling them alternately 
until no further changes are made.  (You might be able to combine them 
in the sort process as an optimization.)

A constrained sort would be a combination of splitting (parsing) the 
list into sortable sub lists and sorting each sub list, possibly in a 
different manner, then reassembling it back.  And doing that possibly 
recursively till no further improvements are made or can be made.


On a more general note, I think a constrained sort algorithm is a good 
idea and may have more general uses as well.

Something I was thinking of is a sort where instead of giving a 
function, you give it a sort key list.  Then you can possibly sort 
anything in any arbitrary order depending on the key list.

sort(alist, [0,1,2,3,4,5,6,7,8,9])   # Sort numbers forward
sort(alist, [9,8,7,6,5,4,3,2,1,0])   # Reverse sort
sort(alist, [1,3,5,7,9,0,2,4,6,8])   # Odd-Even sort
sort(alist, [int,str,float]) # sort types

These are just suggestions; I haven't worked out the details.  It could 
probably be done currently with Python's built-in sort by writing a 
custom compare function that takes a key list.  How fine-grained the key 
list is would also need to be worked out.  Could it handle words and 
whole numbers instead of letters and digits?  How does one specify 
which?  What about complex objects?
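
As a rough sketch of the key-list idea using the built-in sort 
(keylist_sort is a hypothetical helper, not an existing function): each 
item is ranked by its position in the key list, and unlisted items are 
pushed to the end.

```python
def keylist_sort(items, keylist):
    # Rank each item by its position in keylist; unknown items sort last.
    order = dict((k, i) for i, k in enumerate(keylist))
    return sorted(items, key=lambda x: order.get(x, len(order)))

keylist_sort([3, 1, 4, 1, 5], [9, 8, 7, 6, 5, 4, 3, 2, 1, 0])  # -> [5, 4, 3, 1, 1]
keylist_sort([3, 1, 4, 1, 5], [1, 3, 5, 7, 9, 0, 2, 4, 6, 8])  # odds first
```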


Here's a "quick sort" function that you might be able to play with.. 
There are shorter versions of this, but this has a few optimizations added.

Overall it's about 10 times slower than Python's built-in sort for large 
lists, but that's better than expected considering it's written in 
Python and not C.

Cheers,
Ron



# Quick Sort
def qsort(x):
    if len(x)<2:
        return x            # Nothing to sort.

    # Is it already sorted?
    j = min = max = x[0]
    for i in x:
        # Get min and max while checking it.
        if i<min: min=i
        if i>max: max=i
        if i<j:
            break           # Out of order, so sort it.
        j = i
    else:
        return x            # Already sorted.

    # Split into three lists around the middle of the
    # value range (assumes numeric items).
    mid = (min+max)/2.0
    lt, eq, gt = [], [], []
    for i in x:
        if i<mid:
            lt.append(i)
            continue
        if i>mid:
            gt.append(i)
            continue
        eq.append(i)

    # Recursively divide the lists then reassemble it
    # in order as the values are returned.
    return qsort(lt)+eq+qsort(gt)


Re: Efficiently Split A List of Tuples

2005-07-17 Thread Ron Adam
Raymond Hettinger wrote:
>>Variant of Paul's example:
>>
>>a = ((1,2), (3, 4), (5, 6), (7, 8), (9, 10))
>>zip(*a)
>>
>>or
>>
>>[list(t) for t in zip(*a)] if you need lists instead of tuples.
> 
> 
> 
> [Peter Hansen]
> 
>>(I believe this is something Guido considers an "abuse of *args", but I
>>just consider it an elegant use of zip() considering how the language
>>defines *args.  YMMV]
> 
> 
> It is somewhat elegant in terms of expressiveness; however, it is also
> a bit disconcerting in light of the underlying implementation.
> 
> All of the tuples are loaded one-by-one onto the argument stack.  For a
> few elements, this is no big deal.  For large datasets, it is a less
> than ideal way of transposing data.
> 
> Guido's reaction makes sense when you consider that most programmers
> would cringe at a function definition with thousands of parameters.
> There is a sense that this doesn't scale-up very well (with each Python
> implementation having its own limits on how far you can push this
> idiom).
> 
>  
> Raymond


Currently we can implicitly unpack a tuple or list by using an 
assignment.  How is that any different than passing arguments to a 
function?  Does it use a different mechanism?
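
The two forms can be put side by side; a small sketch:

```python
a = ((1, 2), (3, 4), (5, 6))

x, y, z = a               # implicit unpacking in an assignment
cols = list(zip(*a))      # *a feeds the same tuples to zip as arguments

# cols is the transpose: [(1, 3, 5), (2, 4, 6)]
```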



(Warning, going into what-if land.)

There's a question relating to the above also so it's not completely in 
outer space.  :-)


We can't use the * syntax anywhere but in function definitions and 
calls.  I was thinking the other day that using * in function calls is 
kind of inconsistent as it's not used anywhere else to unpack tuples. 
And it does the opposite of what it means in the function definitions.

So I was thinking, In order to have explicit packing and unpacking 
outside of function calls and function definitions, we would need 
different symbols because using * in other places would conflict with 
the multiply and exponent operators.  Also pack and unpack should not be 
the same symbols, for obvious reasons.  Using different symbols doesn't 
conflict with * and ** in function calls either.

So for the following examples, I'll use '~' as pack and '^' as unpack.

~ looks like a small 'N', for put stuff 'in'.
^ looks like an up arrow, as in take stuff out.

(Yes, I know they are already used elsewhere.  Currently those are 
binary operators.  The '^' is used with sets also.  I did say this is a 
"what-if" scenario.  Personally I think the binary operators could be 
made methods of a bit type; then they, including the '>>' '<<' pair, 
could be freed up and put to better use.  The '<<' would make a nice 
symbol for getting values from an iterator. The '>>' is already used in 
print as redirect.)


Simple explicit unpacking would be:

(This is a silly example, I know it's not needed here but it's just to 
show the basic pattern.)

x = (1,2,3)
a,b,c = ^x # explicit unpack,  take stuff out of x


So, then you could do the following.

zip(^a)# unpack 'a' and give it's items to zip.

Would that use the same underlying mechanism as using "*a" does?  Is it 
also the same implicit unpacking method used in an assignment using 
'='?.  Would it be any less "a bit disconcerting in light of the 
underlying implementation"?



Other possible ways to use them outside of function calls:

Sequential unpacking..

x = [(1,2,3)]
a,b,c = ^^x   ->  a=1, b=2, c=3

Or..

x = [(1,2,3),4]
a,b,c,d = ^x[0],x[1]   -> a=1, b=2, c=3, d=4

I'm not sure what it should do if you try to unpack an item not in a 
container.  I expect it should give an error because a tuple or list was 
expected.

a = 1
x = ^a# error!


Explicit packing would not be as useful, as we can put ()'s or []'s 
around things.  One example that comes to mind at the moment is using it 
to create single-item tuples.

 x = ~1   ->   (1,)

Possibly converting strings to tuples?

 a = 'abcd'
 b = ~^a   ->   ('a','b','c','d') # explicit unpack and repack

and:

 b = ~a   ->   ('abcd',)   # explicit pack whole string

for:

 b = a,   ->   ('abcd',)   # trailing comma is needed here.
# This is an error opportunity IMO


Choice of symbols aside, packing and unpacking are a very big part of 
Python; it just seems (to me) like having an explicit way to express it 
might be a good thing.

It doesn't do anything that can't already be done, of course.  I think 
it might make some code easier to read, and possibly avoid some errors.

Would there be any (other) advantages to it beside the syntax sugar?

Is it a horrible idea for some unknown reason I'm not seeing?  (Other 
than the symbol choices breaking current code.  Maybe other symbols 
would work just as well?)

Regards,
Ron



Re: Ordering Products

2005-07-18 Thread Ron Adam
Kay Schluehr wrote:

> 
> Ron Adam wrote:
> 
>>Kay Schluehr wrote:

>>On a more general note, I think a constrained sort algorithm is a good
>>idea and may have more general uses as well.
>>
>>Something I was thinking of is a sort where instead of giving a
>>function, you give it a sort key list.  Then you can possibly sort
>>anything in any arbitrary order depending on the key list.
>>
>>sort(alist, [0,1,2,3,4,5,6,7,8,9])   # Sort numbers forward
>>sort(alist, [9,8,7,6,5,4,3,2,1,0])   # Reverse sort
>>sort(alist, [1,3,5,7,9,0,2,4,6,8])   # Odd-Even sort
>>sort(alist, [int,str,float]) # sort types
> 
> 
> Seems like you want to establish a total order of elements statically.
> Don't believe that this is necessary.

I want to establish the sort order at the beginning of the sort process 
instead of using many external compares during the sort process.  Using 
a preprocessed sort key seems like the best way to do that.  How it's 
generated doesn't really matter.  And of course a set of standard 
defaults could be built in.

>>These are just suggestions, I haven't worked out the details.  It could
>>probably be done currently with pythons built in sort by writing a
>>custom compare function that takes a key list.
> 
> 
> Exactly.

The advantage of doing it as above is that the sort could be done 
entirely in C, without calling a Python compare function on each 
item.  It would be interesting to see if, and by how much, it would be 
faster.  I'm just not sure how to do it yet, as it's a little more 
complicated than using integer values.

>>How fine grained the key
>>list is is also something that would need to be worked out.  Could it
>>handle words and whole numbers instead of letters and digits?  How does
>>one specify which?  What about complex objects?
> 
> 
> In order to handle complex objects one needs more algebra ;)
> 
> Since the class M only provides one operation I made the problem as
> simple as possible ( complex expressions do not exist in M because
> __mul__ is associative  - this is already a reduction rule ).
> 
> Kay

I've played around with your example a little bit and think I see how it 
should work... (partly guessing).  You did some last-minute editing, so 
M and Expr were intermixed.

It looks to me that what you need to do is have the expressions stored 
as nested lists and those can be self sorting.  That can be done when 
init is called I think, and after any operation.

You should be able to add addition without too much trouble too.

a*b   ->  factors [a],[b] -> [a,b]    You got this part.

c+d   ->  sums [c],[d] -> [c,d]       Need a sums type for this.

Then...

a*b+c*d  ->  sums of factors ->  [[a,b],[c,d]]

This would be sorted from inner to outer.

(a+b)*(c+d)  ->  factors of sums ->  [[a,b],[c,d]]

Maybe you can subclass list to create the different types?  Each list 
needs to be associated with an operation.

The sort from inner to outer still works. Even though the lists 
represent different operations.

You can sort division and minus if you turn them into sums and factors 
first.

1-2   ->  sums [1,-2]

3/4   ->  factors [3,1/4]   ?  hmmm...  I don't like that.

Or that might be...

3/4   ->  factor [3], divisor [4]  ->  [3,[4]]


So you need a divisor type as a subtype of factor.  (I think)


You can then combine the divisors within factors and sort from inner to 
outer.

(a/b)*(c/e)  ->  [a,[b],c,[e]]  -> [a,c,[b,e]]

Displaying these might take a little more work.  The above could get 
represented as...

(a*c)/(b*e)

Which I think is what you want it to do.


Just a few thoughts.  ;-)


Cheers,
Ron


Re: Efficiently Split A List of Tuples

2005-07-18 Thread Ron Adam
Raymond Hettinger wrote:
> [Ron Adam]
> 
>>Currently we can implicitly unpack a tuple or list by using an
>>assignment.  How is that any different than passing arguments to a
>>function?  Does it use a different mechanism?
> 
> 
> It is the same mechanism, so it is also only appropriate for low
> volumes of data:
> 
> a, b, c = *args # three elements, no problem
> f(*xrange(100)) # too much data, not scalable, bad idea
> 
> Whenever you get the urge to write something like the latter, then take
> it as cue to be passing iterators instead of unpacking giant tuples.
> 
> 
> Raymond

Ah... that's what I expected.  So it's better to transfer a single 
reference or object than a huge list of separate items.  I suppose that 
would be easy to see in the byte code.

In examples like the above, the receiving function would probably be 
defined with *args as well, not with individual arguments.  So is it 
unpacked, transferred to the function, and then repacked, or unpacked, 
repacked, and then transferred to the function?

And if the * is used on both sides, couldn't it be optimized to skip the 
unpacking and repacking?  But then it would need to make a copy, wouldn't 
it?  That should still be better than passing individual references.
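
Whatever the optimization, the observable behavior today is that the 
callee gets its own tuple; a quick sketch:

```python
def f(*args):
    # args is a tuple built from the call's positional arguments
    return args

data = (1, 2, 3)
result = f(*data)    # unpacked at the call site, repacked into args
# result compares equal to data
```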

Cheers,
Ron



Re: Efficiently Split A List of Tuples

2005-07-18 Thread Ron Adam
Simon Dahlbacka wrote:

> Oooh.. you make my eyes bleed. IMO that proposal is butt ugly (and
> looks like the C++.NET perversions.)

I haven't had the displeasure of using C++.NET fortunately.


point = [5,(10,20,5)]

size,t = point
x,y,z = t

size,x,y,z = point[0], point[1][0], point[1][1], point[1][2]

size,x,y,z = point[0], ^point[1]   # Not uglier than the above.

size,(x,y,z) = point               # Not as nice as this.


I forget sometimes that this last one is allowed, so ()'s on the left of 
the assignment is an explicit unpack.  It seems I've tried to reinvent 
the wheel yet again.

Cheers,
Ron




Re: Ordering Products

2005-07-18 Thread Ron Adam
Kay Schluehr wrote:
> Here might be an interesting puzzle for people who like sorting
> algorithms ( and no I'm not a student anymore and the problem is not a
> students 'homework' but a particular question associated with a
> computer algebra system in Python I'm currently developing in my
> sparetime ).
> 
> For motivation lets define some expression class first:


This works for (simple) expressions with mixed multiplication and addition.


class F(list):
    def __init__(self,*x):
        #print '\nF:',x
        list.__init__(self,x)
    def __add__(self, other):
        return A(self,other)
    def __radd__(self, other):
        return A(other,self)
    def __mul__(self, other):
        return M(self,other)
    def __rmul__(self, other):
        return M(other,self)
    def __repr__(self):
        return str(self[0])
    def __order__(self):
        for i in self:
            if isinstance(i,A) \
               or isinstance(i,M):
                i.__order__()
        self.sort()

class A(F):
    def __init__(self, *x):
        #print '\nA:',x
        list.__init__(self, x)
    def __repr__(self):
        self.__order__()
        return "+".join([str(x) for x in self])

class M(F):
    def __init__(self,*x):
        #print '\nM:',x
        list.__init__(self,x)
    def __repr__(self):
        self.__order__()
        return "*".join([str(x) for x in self])


a = F('a')
b = F('b')
c = F('c')
d = F('d')

print '\n a =', a

print '\n b+a+2 =', b+a+2

print '\n c*b+d*a+2 =', c*b+d*a+2

print '\n 7*a*8*9+b =', 7*a*8*9+b



 >>>

  a = a

  b+a+2 = 2+a+b

  c*b+d*a+2 = 2+a*d+b*c

  7*a*8*9+b = 9*8*7*a+b  <--  reverse sorted digits?
 >>>


The digits sort in reverse for some strange reason I haven't figured out 
yet, but they are grouped together.  And expressions of the type a*(c+b) 
don't work in this example.

It probably needs some better logic to merge adjacent like groups.  I 
think the reverse sorting may be a side effect of the nesting that takes 
place when the expressions are built.

Having the digits first might be an advantage, as you can use a for loop 
to add or multiply them until you get to a non-digit.
Anyway, interesting stuff. ;-)

Cheers,
Ron


Re: Ordering Products

2005-07-19 Thread Ron Adam
Kay Schluehr wrote:


> Hi Ron,
> 
> I really don't want to discourage you in doing your own CAS but the
> stuff I'm working on is already a bit more advanced than my
> mono-operational multiplicative algebra ;)

I figured it was, but you offered a puzzle:

   "Here might be an interesting puzzle for people who like sorting
algorithms ..."

And asked for suggestions:

   "It would be interesting to examine some sorting algorithms on factor
lists with constrained item transpositions. Any suggestions?"

So I took you up on it.   ;-)


BTW, usually when people say "I don't want to discourage...", they 
really want or mean the exact opposite.


This is an organizational problem in my opinion, so the challenge is to 
organize the expressions in a way that can be easily manipulated 
further.  Groupings by operation is one way.  As far as inheritance 
goes, it's just another way to organize things.  And different algebras 
and sub-algebras are just possible properties of a group.  The groups 
can easily be customized to have their own behaviors or be created to 
represent custom unique operations.

The sort method I'm suggesting here, with examples, is constrained by 
the associative properties of the group being sorted: basically, 
whether or not it's an associative operation.  So when a group is asked 
to sort, it first asks all its subgroups to sort, then it sorts itself 
if it is an associative group.  I.e., from the innermost group to the 
outermost group, but only the associative ones.

Playing with it further I get the following outputs.

(The parentheses surround a group that is associated with the operation. 
  This is the same idea/suggestion I first proposed; it's just been 
developed a little further along.)


  b+a+2 = (2+a+b)<- addition group

  a*(b+45+23) =  ((68+b)*a)  <- addition group within multiply group

  a-4-3-7+b =  ((a-14)+b)<- sub group within add group

  c*b-d*a+2 = (2+((b*c)-(a*d)))  <- mults within subs within adds

  7*a*8*9+b = ((504*a)+b)

  a*(b+c) = ((b+c)*a)

  c*3*a*d*c*b*7*c*d*a = (21*a*a*b*c*c*c*d*d)

  d*b/c*a = (((b*d)/c)*a)

  (d*b)/(c*a) = ((b*d)/(a*c))

  d*b-a/e+d+c = (((b*d)-(a/e))+c+d)

  a/24/2/b = (a/48/b)

  c**b**(4-5) = (c**(b**-1))

  (d**a)**(2*b) = ((d**a)**(2*b))

The next step is to be able to convert groups to other groups; an 
exponent group to a multiply group; a subtract group to an addition 
group with negative prefix's.. and so on.

That would be how expansion and simplifying is done as well as testing 
equivalence of equations.

if m*c**2 == m*c*c:
   print "Eureka!"


> Mixing operators is not really a problem, but one has to make initial
> decisions ( e.g about associativity i.e. flattening the parse-tree )
> and sub-algebra generation by means of inheritance:

What do you mean by 'sub-algebra generation'?


>>>>a,b = seq(2,Expr)
>>>>type(a+b)
> 
> 
> 
>>>>class X(Expr):pass
>>>>x,y = seq(2,X)
>>>>type(x+y)
> 
> 
> 
> This is not particular hard. It is harder to determine correspondence
> rules between operations on different levels. On subalgebras the
> operations of the parent algebra are induced. But what happens if one
> mixes objects of different algebras that interoperate with each other?
> It would be wise to find a unified approach to make distinctive
> operations visually distinctive too. Infix operators may be
> re-introduced just for convenience ( e.g. if we can assume that all
> algebras supporting __mul__ that are relevant in some computation have
> certain properties e.g. being associative ).

Different algebras would need to be able to convert themselves to some 
common representation.  Then they would be able to be mixed with each 
other with no problem.

Or an operation on an algebra group could just accept it as a unique 
term, and during an expansion process it could convert it self (and it's 
members) to the parents type.  That would take a little more work, but I 
don't see any reason why it would be especially difficult.

Using that methodology, an equation with mixed algebra types could be 
expanded as much as possible, then reduced back down again using a 
chosen algebra or the one that results in the most concise representation.

> ##
> 
> After thinking about M ( or Expr ;) a little more I come up with a
> solution of the problem of central elements of an algebra ( at least
> the identity element e is always central ) that commute with all other
> elements.

What is a "central" element?  I can see it involves a set, but the 
context isn't clear.


> Here is my approach:
> 
> # Define a subclass of list, that 

Re: Ordering Products

2005-07-20 Thread Ron Adam
Kay Schluehr wrote:
> Ron Adam wrote:
> 
>> Kay Schluehr wrote:

>> BTW.. Usually when people say "I don't want to discourage...", They
>>  really want or mean the exact opposite.
> 
> Yes, but taken some renitence into account they will provoke the 
> opposite. Old game theoretic wisdoms ;)

True..  but I think it's not predictable which response you will get
from an individual you aren't familiar with.  I prefer positive
reinforcement over negative provocation myself. :-)


> But you seem to fix behaviour together with an operation i.e.
> declaring that __mul__ is commutative. But in a general case you
> might have elements that commute, others that anti-commute ( i.e. a*b
> = -b*a ) and again others where no special rule is provided i.e. they
> simply don't commute.
> 
> But much worse than this the definition of the operations __add__, 
> __mul__ etc. use names of subclasses A,D explicitely(!) what means
> that the framework can't be extended by inheritance of A,D,M etc.
> This is not only bad OO style but customizing operations ( i.e.
> making __mul__ right associative ) for certain classes is prevented
> this way. One really has to assume a global behaviour fixed once as a
> class attribute.

I don't know if it's bad OO style because I chose a flatter model.
Your original question wasn't "what would be the best class structure to
use where different algebra's may be used".  It was how can sorting be
done to an expression with constraints. And you gave an example which 
set __mul__ as associative as well.

So this is a different problem.  No use trying to point out that what I 
did doesn't fit this new problem; it wasn't supposed to.  ;-)

I'm not sure what the best class structure would be.  With the current
example, I would need to copy and edit F and its associated subclasses 
to create a second algebra type: F2, A2, M2, etc.  Not the best
solution to this additional problem, which is what you are pointing out 
I believe.

So...  We have factors (objects), groups (expressions), and algebras
(rules), that need to be organized into a class structure that can
be extended easily.

Does that describe this new problem adequately?  I'm not sure what the
best, or possible good solutions would be at the moment.  I'll have to 
think about it a bit.


>> c*3*a*d*c*b*7*c*d*a = (21*a*a*b*c*c*c*d*d)
> 
> 
> I still don't see how you distinguish between factors that might 
> commute and others that don't. I don't want a and b commute but c and
> d with all other elements.

In my example factors don't commute.  They are just units; however,
factors within a group may commute, because a group is allowed to 
commute its factors if the operation the group is associated with is 
commutative.

> If you have fun with those identities you might like to find 
> simplifications for those expressions too:
> 
> a*0   -> 0
> a*1   -> a
> 1/a/b -> b/a
> a+b+a -> 2*a+b
> a/a   -> 1
> a**1  -> a
> 
> etc.

Already did a few of those.  Some of these involve changing a group into 
a different group which was a bit of a challenge since an instance can't 
magically change itself into another type of instance, so the parent 
group has to request the sub-group to return a simplified or expanded 
instance, then the parent can replace the group with the new returned 
instance.

a*a*a -> a**3    change from a M group to a P group.
a*0   -> 0       change from a M group to an integer.
a*1   -> a       change from a M group to a F unit.
a+b+a -> 2*a+b   change a A subgroup to a M group.
a/a   -> 1       change a D group to an integer.
a**1  -> a       change a P group to a M group to a F unit.

Some of those would be done in the simplify method of the group.  I've 
added an expand method and gotten it to work on some things also.

   a*b**3  ->  a*b*b*b
   c*4 ->  c+c+c+c
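
A couple of those rewrites can be sketched on a tiny ('op', left, 
right) tuple representation; both the representation and the simplify 
helper are assumptions of this sketch, not Kay's actual classes:

```python
def simplify(expr):
    # Apply a few of the identities discussed above, bottom-up.
    if not isinstance(expr, tuple):
        return expr                      # a bare factor or number
    op, a, b = expr
    a, b = simplify(a), simplify(b)      # simplify subgroups first
    if op == '*' and b == 0: return 0    # a*0  -> 0
    if op == '*' and b == 1: return a    # a*1  -> a
    if op == '**' and b == 1: return a   # a**1 -> a
    if op == '/' and a == b: return 1    # a/a  -> 1
    return (op, a, b)

simplify(('*', ('**', 'a', 1), 1))   # -> 'a'
```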


>> What do you mean by 'sub-algebra generation'?
>  
> Partially what I described in the subsequent example: the target of
> the addition of two elements x,y of X is again in X. This is not
> obvious if one takes an arbitrary nonempty subset X of Expr.

Would that be similar to the simultaneous equation below?

z = x+y<-  term x+y is z
x = a*z+b  <-  z is in term x
x = a(x+y)+b   <-  x is again in x  (?)

I think this would be...

 >>> x, y = F('x'), F('y')
 >>> z = x+y
 >>> x = a*z+b
 >>> x
(((x+y)*a)+b)

This wouldn't actually solve for x, since it doesn't take into account 
the left side of the '=' in the equation.  And it would need an eval 
method to actually evaluate it.  eval(str(expr)) does work if all the 
factors are given values first.


Cheers,
Ron



Re: [path-PEP] Path inherits from basestring again

2005-07-25 Thread Ron Adam
Peter Hansen wrote:
> Michael Hoffman wrote:
> 
>> Reinhold Birkenfeld wrote:
>>
>>> Tony Meyer wrote:
>>>
>>>> Do people really like using __div__ to mean join?  
>>>
>>>
>>> I'm not too happy with it, too, but do we have alternatives? ...
>>> Of course, one can use joinwith() if he doesn't like '/'.
>>
>>
>> I've used the path module pretty extensively and always use 
>> joinpath(). Personally, I'm -0 on __div__, but I suppose if anyone 
>> here claimed to have used in the past, rather than it just being some 
>> novelty that might be a good idea, that would be good enough for 
>> keeping it.
> 
> 
> I've tried it both ways, and ended up accepting / as a useful and clean 
> approach, though as a general rule I find operator-overloading to be 
> fairly hideous and to lead to Perlish code.  This one I resisted for a 
> while, then found it fairly pleasant, making it perhaps the exception to 
> the rule...
> 
> Perhaps it's just that in code that builds paths out of several 
> components, it seemed quite straightforward to read when it used / 
> instead of method calls.
> 
> For example, from one program:
> 
>scripts = userfolder / scriptfolder
>scriptpath = scripts / self.config['system']['commandfile']
> 
> instead of what used to be:
> 
>scripts = userfolder.joinpath(scriptfolder)
>scriptpath = scripts.joinpath(self.config['system']['commandfile'])
> 
> Even so I'm only +0 on it.
> 
> -Peter

I think the '+' is used as a join for both strings and lists, so it 
would probably be the better choice as far as consistency with the 
language is concerned.

Cheers,
Ron


As for features we don't have yet, you could use an "inerator". 
Something that gets stuff a little bit at a time.

If you combine an iterator with a inerator, you would have a itinerator 
which would be quite useful for managing 'flight-paths'.   ;-)


Re: [path-PEP] Path inherits from basestring again

2005-07-27 Thread Ron Adam
Toby Dickenson wrote:
> On Wednesday 27 July 2005 05:37, Meyer, Tony wrote:
> 
> 
>>I can see that this would make sense in some situations, but ISTM that it
>>would make a great deal more sense (and be much more intuitive) to have
>>concatenation include the separator character (i.e. be join).  
> 
> 
> def functions_which_modifies_some_file_in_place(path):
>  output = open(path+'.tmp', 'w')
>  .
> 
> I dont want a seperator inserted between path and the new extension.

My impression of '+' is that it always joins like objects...

str+str -> str
list+list -> list
tuple+tuple -> tuple

So ...

path+path -> path

In all current cases, (that I know of), of differing types, '+' raises 
an error.

Question:  Is a path object mutable?

Would the += operator create an new object or modify the original?

p = path('C://somedir//somefile')

p += '.zip'       # What would this do?

p[-1] += '.zip'   # Or this?


Cheer's Ron.


Re: [path-PEP] Path inherits from basestring again

2005-07-27 Thread Ron Adam
Michael Hoffman wrote:
> Ron Adam wrote:
> 
>> In all current cases, (that I know of), of differing types, '+' raises 
>> an error.
> 
> 
> Not quite:
> 
>  >>> "hello " + u"world"
> u'hello world'
>  >>> 4.5 + 5
> 9.5
> 
>> Question:  Is a path object mutable?
> 
> 
> No.
> 
> This should answer the rest of your questions.

Yes it does, thanks.

In the case of numeric types it's an addition, not a join.  I should 
have said 'all cases (that I know of) where + is used to join objects', 
but I thought that was clear from the context of the discussion.  I 
haven't needed to use unicode yet, so it didn't come to mind.


Although it raises other questions...  ;-)

Could a string prefix be used with paths as well?

path = p"C://somedir//somefile"


Would that clash with the 'u' prefix?  Or would we need a 'p' and a 'up' 
prefix to differentiate the two?

(Just a thought. I'm +0 on this, but this seems to be a new string type, 
and file operations are frequent and common.)



You could have both behaviors with the '+'.

 path_a + path_b  ->  join path_b to path_a using separator.

 path + string ->  append string to path (no separator)

 string + path ->  prepends string to path (no separator)


This would be similar, (but not exactly like), how u'text'+'text' and 
'text'+u'text' work.  They both return unicode strings given a standard 
string.  It would allow the following to work.


 path = path('C:')+path('somedir')+path('somefile.txt')+'.zip'

 ->>   'C://somedir//somefile.txt.zip'



So I guess the question here is which of these is preferable with '+'?

1.  Strings act like paths when one is a path.  They will be joined with 
a separator.

2.  Paths are joined with separators *and* a string is prepended or 
postpended "as is" with no separator.

3.  All path objects (and strings) act like strings.  No separator is 
used when joining path objects with '+'.


(Seems like #3 defeats the purpose of path a bit to me.)


I'm +1 on #2 after thinking about it.
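A minimal sketch of what option #2's '+' semantics might look like (a hypothetical Path class, not the proposed PEP API; the class name and separator are assumptions):

```python
class Path(str):
    """Hypothetical path type: '+' joins two Paths with a separator,
    but a plain string is appended/prepended with no separator."""
    sep = '/'

    def __add__(self, other):
        if isinstance(other, Path):
            # path + path -> join with separator
            return Path(str(self) + self.sep + str(other))
        # path + string -> append as-is, no separator
        return Path(str(self) + other)

    def __radd__(self, other):
        # string + path -> prepend as-is, no separator
        return Path(other + str(self))

p = Path('somedir') + Path('somefile.txt') + '.zip'
assert p == 'somedir/somefile.txt.zip'
assert 'C:/' + Path('somedir') == 'C:/somedir'
```

Because Path subclasses str and overrides __radd__, the reflected method is tried first for `string + Path`, so both mixed orders behave as option #2 describes.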

Cheers,
Ron

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [path-PEP] Path inherits from basestring again

2005-07-27 Thread Ron Adam
Peter Hansen wrote:
> Ron Adam wrote:
> 
>> Michael Hoffman wrote:
>>
>>> Ron Adam wrote:
>>>
>>>> In all current cases, (that I know of), of differing types, '+' 
>>>> raises an error.
>>>
>>>
>>> Not quite:
>>>  >>> "hello " + u"world"
>>> u'hello world'
>>>  >>> 4.5 + 5
>>> 9.5
>>>
>> In the case of numeric types, it's an addition and not a join.  I 
>> should have specified in 'all cases, (I know of), where '+' is used to 
>> join objects, but I thought that was clear from the context of the 
>> discussion.  I haven't needed to use unicode yet, so it didn't come to 
>> mind.
> 
> 
> I believe Michael intended to show that "4.5 + 5" actually represents 
> using + with two different types, specifically a float and an int, thus 
> giving at least two common cases where errors are not raised.

Yes, I got that distinction.

Yet, concatenation isn't addition.  Yes, they have some conceptual 
similarities, but they are two different operations that happen to use 
the same symbol for convenience.  Three if you count list and tuple 
joining as being different from string concatenation.

> (While the issue of "addition" vs. "join" is merely a (human) language 
> issue... one could just as well say that those two numbers are being 
> "joined" by the "+".)
>
> -Peter

It's also a Python language issue.  ;-)

Cheers,
Ron
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Friend wants to learn python

2005-07-29 Thread Ron Stephens
EnderLocke wrote:
> I have a friend who wants to learn python programming. I learned off
> the internet and have never used a book to learn it. What books do you
> recommend?
>
> Any suggestions would be appreciated.

I have just uploaded a podcast specifically about which tutorials and
books might be best for newbies to Python, depending on their
background. It can be reached at
http://www.awaretek.com/python/index.html

Ron

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP on path module for standard library

2005-07-30 Thread Ron Adam
Bengt Richter wrote:


>  >



> 
> Say, how about
> 
> if Pathobject('gui://message_box/yn/continue 
> processing?').open().read().lower()!='y':
> raise SystemExit, "Ok, really not continuing ;-)"
> 
> An appropriate registered subclass for the given platform, returned when the
> Pathobject base class instantiates and looks at the first element on open() 
> and delegates
> would make that possible, and spelled platform-independently as in the code 
> above.

I like it.  ;-)

No reason why a path can't be associated to any tree based object.

> 
> 
> Regards,
> Bengt Richter



I wasn't sure what to comment on, but it does point out some interesting 
possibilities I think.

A path could be associated to any file-like object in an external (or 
internal) tree structure.  I don't see any reason why not.

In the case of an internal file-like object, it could be a series of 
keys in nested dictionaries.  Why not use a path as a dictionary 
interface?

So it sort of raises the question of how tightly a path object should be 
associated to a data structure?  When and where should the path object 
determine what the final path form should be?  And how smart should it 
be as a potential file-like object?

Currently the device name is considered part of the path, but if instead 
you treat the device as an object, it could open up more options.

(Which would extend the pattern of your example above. I think.)

(also a sketch..  so something like...)

# Initiate a device path object.
apath = device('C:').path(initial_path)

# Use it to get and put data
afile = apath.open(mode,position,units)  # defaults ('r','line',next)
aline = afile.read().next()# read next unit, or as an iterator.
afile.write(line)
afile.close()

# Manually manipulate the path
apath.append('something') # add to end of path
apath.remove()# remove end of path
alist = apath.split()  # convert to a list
apath.join(alist) # convert list to a path
astring = str(apath()) # get string from path
apath('astring')  # set path to string
apath.validate()   # make sure it's valid

# Iterate and navigate the path
apath.next()# iterate path objects
apath.next(search_string)# iterate with search string
apath.previous()# go back
apath.enter()   # enter directory
apath.exit()# exit directory

# Close it when done.
apath.close()

etc...


With this you can iterate a file system as well as its files.  ;-) 

 (Add more or less methods as needed of course.)


apath = device(dev_obj).path(some_path_string)
    apath.open().write(data).close()


or if you like...

device(dev_obj).append(path_string).open().write(data).close()



Just a few thoughts,

Cheers,
Ron






-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [path-PEP] Path inherits from basestring again

2005-08-01 Thread Ron Adam
Ivan Van Laningham wrote:


>>People can subclass Path and add it if they really want it.  They can use
>>Jason's original module.  My position is that the PEP without this use of
>>__div__ is (a) better as a standard module, and (b) improves the chance of
>>the PEP being accepted.
>>
> 
> 
> I disagree.  Using __div__ to mean path concatenation is no worse than
> using __add__ to mean string concatenation, 

But '+' doesn't mean string concatenation; it calls the __add__ method, 
which when used with objects other than numeric types is usually an 
object join method.  It only means string concatenation when used 
with string objects.

In this case a path object is a string object that behaves somewhat like 
a list and somewhat like a string depending on how it's being accessed. 
  Maybe a string class isn't the best way to store a path.  It seems to 
me a list with string elements would work better.  I'm not sure why it 
was ruled out.  Maybe someone could clarify that for me.

I think it's a mistake to define '+' literally to mean string 
concatenation.  I believe the '/' will be considered a wart by many and 
a boon by others.  If I'm right, there will be endless discussions over 
it if it's implemented.  I'd rather not see that, so I'm still -1 
concerning '/' for that reason among others.


Cheers,
Ron

PS. Could someone repost the links to the current pre-pep and the most 
recent module version so I can take a closer look?




-- 
http://mail.python.org/mailman/listinfo/python-list


Path as a dictionary tree key? (was Re: PEP on path module for standard library)

2005-08-01 Thread Ron Adam


I'm wondering if a class that acts as an interface to a tree data 
structure stored in a dictionary could also be useful as a base class 
for accessing filesystems, urls, and zip (or rar) files.

A path object could then be used as a dictionary_tree key.

This idea seems much more useful to me than the specific file path 
object being proposed.  It could be used for all sorts of things and 
extends Python in a more general way.

So given a toy example and a not yet written Tree class which is just a 
dictionary with alternate methods for managing tree data.

D = {'a': {'b': 'data1', 'c': 'data2'}}
D_tree = Tree(D)

As:
'a'
'b'
'data1'
'c'
'data2'

A path object to get 'data1' would be.

path = Path('a','b')

item = D_tree[path]

or

item = D_tree[Path('a','b')]


That would be in place of..

item = D[path[0]][path[1]] -> item = D['a']['b']


This gives a more general purpose for path objects.  Working out ways to 
retrieve path objects from a dictionary_tree would also be useful, I 
think.  A Tree class would be a useful addition as well.
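As a rough sketch of the idea (a tuple of keys stands in for a real Path object here, and the class name is made up):

```python
class Tree(dict):
    """Hypothetical dict subclass that accepts a tuple of keys
    as a single path-style key."""

    def __getitem__(self, path):
        if not isinstance(path, tuple):
            return dict.__getitem__(self, path)
        node = self
        for key in path:
            node = node[key]  # ordinary lookup at each level
        return node

D_tree = Tree({'a': {'b': 'data1', 'c': 'data2'}})
assert D_tree[('a', 'b')] == 'data1'
assert D_tree[('a', 'c')] == 'data2'
```

This replaces the chained `D[path[0]][path[1]]` form with a single indexing operation.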

Any thoughts on this?


Cheers,
Ron

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Path as a dictionary tree key? (was Re: PEP on path module for standard library)

2005-08-01 Thread Ron Adam
Brian Beck wrote:
> Ron Adam wrote:
> 
>> This give a more general purpose for path objects.  Working out ways 
>> to retrieve path objects from a dictionary_tree also would be useful I 
>> think.  I think a Tree class would also be a useful addition as well.
>>
>> Any thoughts on this?
> 
> 
> I don't think this would be as useful as you think, for Path objects at 
> least.  Path objects represent *a* path, and building a tree as you have 
> proposed involves storing much more information than necessary.  For 
> instance, if I have the path /A/B/X, a tree-structured object would 
> logically also store the siblings of X and B (subpaths of B and A).


But you misunderstand...  I'm not suggesting a path object be a tree, 
but that it should/could be used as a key for accessing data stored in a 
tree structure.

The path object it self would be much the same thing as what is already 
being discussed.  Using it as a key to a tree data structure is an 
additional use for it.

Regards,
Ron



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Path as a dictionary tree key? (was Re: PEP on path module for standard library)

2005-08-01 Thread Ron Adam


Here's an example of how path objects could possibly be used to store 
and retrieve data from a tree_dictionary class.

I used lists here in place of path objects, but I think path objects 
would be better.  I think paths used this way create a consistent way 
to access data stored in both internal and external tree structures.

A more realistic example would be to store formatted (html) strings in 
the tree and then use embedded paths to jump from page to page.

The tree class defined here is not complete, it's just the minimum to 
get this example to work.  I wouldn't mind at all if anyone posted 
improvements.  (hint hint) ;-)

Cheers,
Ron Adam


+  output --

Define paths:
path1 = ['hello', 'world']
path2 = ['hello', 'there', 'world']
path3 = ['hello', 'there', 'wide', 'world']

Store path keys in tree:
  hello
world
  None
there
  world
None
  wide
world
  None

Get path list from tree:
['hello', 'world']
['hello', 'there', 'world']
['hello', 'there', 'wide', 'world']

Put items in tree using path:
  hello
world
  1
there
  world
2
  wide
world
  3

Get items from tree using path:
path1: 1
path2: 2
path3: 3


+ source code ---

# Dictionary tree class  (not finished)
class Tree(dict):
    #TODO - remove_item method,
    #       remove_path method,
    #       __init__ method if needed.

    def get_item(self, path):
        d = self
        for key in path[:-1]:
            d = d[key]
        return d[path[-1]]

    def put_item(self, path, item):
        d = self
        for key in path[:-1]:
            d = d[key]
        d[path[-1]] = item

    def store_path(self, path_):
        # store item
        key = path_[0]
        if len(path_) == 1:
            self[key] = None
            return
        if key not in self:
            self[key] = Tree()
        self[key].store_path(path_[1:])

    def get_paths(self, key=None, path=None, pathlist=None):
        # Mutable default arguments are shared between calls, so
        # create fresh lists here instead of in the signature.
        if path is None:
            path = []
        if pathlist is None:
            pathlist = []
        if key is None:
            key = self
        for k in key.keys():
            if type(self[k]) is Tree:
                self[k].get_paths(key[k], path + [k], pathlist)
            else:
                pathlist.append(path + [k])
        return pathlist


def pretty_print_tree(t, tab=0):
    for k in t.keys():
        print '  ' * tab, k
        if type(t[k]) is Tree:
            pretty_print_tree(t[k], tab + 1)
        else:
            print '  ' * (tab + 1), t[k]

# Store data in a dictionary.

print 'Define paths:'
path1 = ['hello','world']
print 'path1 =', path1
path2 = ['hello','there','world']
print 'path2 =', path2
path3 = ['hello','there','wide','world']
print 'path3 =', path3

print '\nStore path keys in tree:'
tree = Tree()
tree.store_path(path1)
tree.store_path(path2)
tree.store_path(path3)
pretty_print_tree(tree)

print '\nGet path list from tree:'
path_list = tree.get_paths()
for path in path_list:
 print path

print '\nPut items in tree using path:'
tree.put_item(path1, '1')
tree.put_item(path2, '2')
tree.put_item(path3, '3')
pretty_print_tree(tree)

print '\nGet items from tree using path:'
print 'path1:', tree.get_item(path1)
print 'path2:', tree.get_item(path2)
print 'path3:', tree.get_item(path3)


+ end ---



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: finding sublist

2005-08-02 Thread Ron Adam
giampiero mu wrote:
> hi everyone

Hi, you appear to be fairly new to Python, so you might want to take a 
look at the tutorial at Python.org

Kent's suggestion to use RE is good.


This should be faster than your example by quite a bit if you don't want 
to use RE.

def controlla(test, size=4):
    # Return substring if a second substring is found.
    # Return None if no match found.

    for i in range(len(test) - size):
        match = test[i:i + size]
        left = test[:i]
        right = test[i + size:]
        if match in left or match in right:
            return match

print controlla('qqqabcdrrabcd')

-->  'abcd'
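For reference, the RE approach could look something like this (a sketch; the backreference makes the pattern require a second occurrence of the captured substring):

```python
import re

def controlla_re(test, size=4):
    # (.{N}) captures an N-character window; \1 demands that the same
    # text appear again somewhere later in the string.
    m = re.search(r'(.{%d}).*?\1' % size, test)
    if m:
        return m.group(1)
    return None

assert controlla_re('qqqabcdrrabcd') == 'abcd'
assert controlla_re('abcdefg') is None
```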


Here's a few notes on your example for future reference:

Multiple breaks don't work.  The first break will jump out of the loop 
before the other breaks are reached.

Any function that ends without a return will return None.  So you don't 
need to return 'no'.

Cheers,
Ron


-- 
http://mail.python.org/mailman/listinfo/python-list


RE: how to improve this simple block of code

2006-01-11 Thread Ron Griswold
How 'bout:

X = "132.00";
Y = int(float(X));

Ron Griswold
Character TD
R!OT Pictures
[EMAIL PROTECTED]

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf
Of Mel Wilson
Sent: Wednesday, January 11, 2006 1:08 PM
To: python-list@python.org
Subject: Re: how to improve this simple block of code

py wrote:
> Say I have...
> x = "132.00"
> 
> but I'd like to display it to be "132" ...dropping the trailing
> zeros...

print '%g' % (float(x),)

might work.

Mel.

-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Converting a string to an array?

2006-01-12 Thread Ron Griswold
Does this do what you are looking for?

>>> s = 'abcdefg';
>>> a = [];
>>> a += s;
>>> a;
['a', 'b', 'c', 'd', 'e', 'f', 'g']

Ron Griswold
Character TD
R!OT Pictures
[EMAIL PROTECTED]


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf
Of Tim Chase
Sent: Thursday, January 12, 2006 12:20 PM
To: python-list@python.org
Subject: Converting a string to an array?

While working on a Jumble-esque program, I was trying to get a 
string into a character array.  Unfortunately, it seems to choke 
on the following

import random
s = "abcefg"
random.shuffle(s)

returning

   File "/usr/lib/python2.3/random.py", line 250, in shuffle
 x[i], x[j] = x[j], x[i]
   TypeError: object doesn't support item assignment

The closest hack I could come up with was

import random
s = "abcdefg"
a = []
a.extend(s)
random.shuffle(a)
s = "".join(a)

This lacks the beauty of most python code, and clearly feels like 
there's somethign I'm missing.  Is there some method or function 
I've overlooked that would convert a string to an array with less 
song-and-dance?  Thanks,

-tim
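For reference, the overlooked piece here is that the built-in list() constructor accepts any iterable, including a string:

```python
import random

s = "abcdefg"
a = list(s)            # ['a', 'b', 'c', 'd', 'e', 'f', 'g']
random.shuffle(a)
shuffled = "".join(a)  # back to a string
assert sorted(shuffled) == list("abcdefg")
```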





-- 
http://mail.python.org/mailman/listinfo/python-list


Creating shortcuts?

2006-01-12 Thread Ron Griswold
Hi Folks,

Is it possible to create a shortcut to a file in Python? I need to do
this in both win32 and OSX. I've already got it covered in Linux by
system(ln...).

Thanks,

Ron Griswold
Character TD
R!OT Pictures
[EMAIL PROTECTED]

-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Creating shortcuts?

2006-01-13 Thread Ron Griswold
Hi Dennis,

Yes, I am equating a unix soft link to a windows shortcut. Both act as
links to a file or directory. 

I have found that windows shortcuts do appear in linux dir listings with
a .lnk extension, however the file is meaningless to linux. On the other
hand, a linux soft link does not appear in a windows directory listing,
not that I really want it to.

As for os.link and os.symlink, these appear to be unix specific. It
would be nice if os.symlink, when run on windows, would create a
shortcut.

Thanks,

Ron Griswold
Character TD
R!OT Pictures
[EMAIL PROTECTED]


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf
Of Dennis Lee Bieber
Sent: Friday, January 13, 2006 12:26 AM
To: python-list@python.org
Subject: Re: Creating shortcuts?

On Thu, 12 Jan 2006 22:53:42 -0800, "Ron Griswold"
<[EMAIL PROTECTED]> declaimed the following in comp.lang.python:

> 
> Is it possible to create a shortcut to a file in Python? I need to do
> this in both win32 and OSX. I've already got it covered in Linux by
> system(ln...).
>

Are you equating a Windows "shortcut" to a Unix "link"? Soft link, at
that, I suspect -- as a hard link can be done using os.link(), though
a soft link can be done with os.symlink(). Let's see if my terminology is
correct: a "hard link" is an additional directory entry pointing to a
pre-existing file (with a count of how many entries exist for the file);
a "soft link" is basically a special file that contains the full path to
the actual file (and hence, could cross file system boundaries).

I don't think Windows "shortcuts" are the same thing (as my memory
struggles, I have vague inklings that NTFS actually supports Unix-like
links, but practically nothing uses them). At best, they may be similar
to a soft link, being a particular type of file, being that they are
files with a ".lnk" extension (and hidden by the OS normally).

 
-- 
 > == <
 >   [EMAIL PROTECTED]  | Wulfraed  Dennis Lee Bieber  KD6MOG <
 >  [EMAIL PROTECTED] |   Bestiaria Support Staff   <
 > == <
 >   Home Page: <http://www.dm.net/~wulfraed/><
 >Overflow Page: <http://wlfraed.home.netcom.com/><
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Creating shortcuts?

2006-01-13 Thread Ron Griswold
Hi Roger,

Thank you, I will look into this.

Ron Griswold
Character TD
R!OT Pictures
[EMAIL PROTECTED]



-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf
Of Roger Upole
Sent: Friday, January 13, 2006 4:59 AM
To: python-list@python.org
Subject: Re: Creating shortcuts?

On Windows, Pywin32 allows you to create and manipulate
shortcuts.  See \win32comext\shell\test\link.py for a small
class that wraps the required interfaces.
hth
Roger

"Ron Griswold" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
Hi Folks,

Is it possible to create a shortcut to a file in Python? I need to do
this in both win32 and OSX. I've already got it covered in Linux by
system(ln...).

Thanks,

Ron Griswold
Character TD
R!OT Pictures
[EMAIL PROTECTED]



-- 
http://mail.python.org/mailman/listinfo/python-list


Restarting scripts

2006-01-16 Thread Ron Griswold

This may be a little OT, but it does involve python scripts. I’ve
written a server that hosts my company's asset management system. Of
course something like this cannot be down for very long without data
being lost (I have taken measures to inform the user of the down time
and ask them nicely to please try again later… but in the real world
people will just bypass the system and save manually). So what I need
is some facility to start the server, and if it goes down,
automatically start it back up again.

If anyone has any ideas on how I might accomplish this, either with
python code, or third party software, I’d love to hear about it.

Thanks,

Ron Griswold
Character TD
R!OT Pictures
[EMAIL PROTECTED]






-- 
http://mail.python.org/mailman/listinfo/python-list

RE: Restarting scripts

2006-01-16 Thread Ron Griswold
I gave this a shot but haven't had much luck with it. When I try
os.execlp I get an error saying unknown file or directory. Whether this
is in reference to Python or my script I don't know, however python is
in my system's path variable and I'm running the script from the dir it
is located in.

At someone else's suggestion I tried using a loader script with
os.spawnv using the os.P_WAIT flag. Unfortunately this exits right away
(presumably never starting my server). 

I also tried subprocess.Popen and then calling child.wait(). This sent
me into a loop of exceptions saying that the socket the server is
opening is already in use (even on the first try).

Does anyone know of a nice reliable app that already does this for you?
Open source preferably.

Ron Griswold
Character TD
R!OT Pictures
[EMAIL PROTECTED]

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf
Of Peter Hansen
Sent: Monday, January 16, 2006 5:40 PM
To: python-list@python.org
Subject: Re: Restarting scripts

Brian Cole wrote:
> If everything dies by a Python exception...
> 
> if __name__ == '__main__':
> try:
> main(sys.argv)
> except:
> os.execlp("python", "python", sys.argv)
> 
> Pretty nasty peice of code, but it works well for restarting your
script.

Noting that sys.exit() and possibly several other things can "raise 
SystemExit" and while it is a subclass of Exception (and gets caught by 
the above), it is usually considered a clean and intentional exit and 
changing the above to do this is probably best:

try:
main(sys.argv)
except SystemExit:
raise
except:
os.execlp( etc etc

-Peter
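A hedged sketch of the simplest supervisor approach with subprocess: the wait-and-restart loop that the loader-script attempts above were missing. The function name, restart delay, and "exit status 0 means clean shutdown" convention are all assumptions, not anyone's actual code:

```python
import subprocess
import sys
import time

def supervise(cmd, delay=1.0, max_restarts=None):
    """Run cmd; restart it whenever it dies with a nonzero status."""
    restarts = 0
    while True:
        rc = subprocess.call(cmd)   # blocks until the child exits
        if rc == 0:
            return rc               # clean exit: stop supervising
        restarts += 1
        if max_restarts is not None and restarts > max_restarts:
            return rc               # give up after too many crashes
        time.sleep(delay)           # brief pause before restarting

# A trivial "server" that exits cleanly on its first run:
assert supervise([sys.executable, '-c', 'import sys; sys.exit(0)']) == 0
```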

-- 
http://mail.python.org/mailman/listinfo/python-list


HTML library

2006-01-17 Thread Ron Griswold

Hi Folks,

 

Can someone point me in the direction of an html library
that generates html text for you. For example, if I
had a tuple of tuples, I’d like a function that would create a table for
me. I’ve looked through the standard library and it only seems to have
html parsers. I need to go the other way ;)
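For the specific tuple-of-tuples case, a minimal hand-rolled sketch (no library) might look like:

```python
def html_table(rows):
    # Naive sketch: no escaping, no attributes -- a real library
    # such as XIST or Stan handles those details.
    lines = ['<table>']
    for row in rows:
        cells = ''.join('<td>%s</td>' % c for c in row)
        lines.append('  <tr>%s</tr>' % cells)
    lines.append('</table>')
    return '\n'.join(lines)

assert html_table(((1, 2), (3, 4))) == (
    '<table>\n'
    '  <tr><td>1</td><td>2</td></tr>\n'
    '  <tr><td>3</td><td>4</td></tr>\n'
    '</table>')
```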

 

Thanks,

 

Ron Griswold

Character TD

R!OT Pictures

[EMAIL PROTECTED]

 






-- 
http://mail.python.org/mailman/listinfo/python-list

RE: HTML library

2006-01-17 Thread Ron Griswold
Hi Cliff,

Looks like xist is exactly what I'm looking for. 

Thank you,

Ron Griswold
Character TD
R!OT Pictures
[EMAIL PROTECTED]


-Original Message-
From: Cliff Wells [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, January 17, 2006 9:33 AM
To: Ron Griswold
Cc: python-list@python.org
Subject: Re: HTML library

On Tue, 2006-01-17 at 09:28 -0800, Ron Griswold wrote:
> Hi Folks,
> 
>  
> 
> Can someone point me in the direction of an html library that
> generates html text for you. For example, if I had a tuple of tuples,
> I'd like a function that would create a table for me. I've looked
> through the standard library and it only seems to have html parsers. I
> need to go the other way ;)
> 

The two best things I know of are Nevow's Stan and XIST.  Since I
started using Stan, I've vowed to never write another line of XML.


http://divmod.org/trac/wiki/DivmodNevow

http://www.livinglogic.de/Python/xist/


Regards,
Cliff


-- 
http://mail.python.org/mailman/listinfo/python-list


cgi script error

2006-01-19 Thread Ron Griswold

Hi Folks,

 

I’m getting the following error from my web server
(httpd on linux): “malformed header from script. Bad header=:
htmllib.cgi

 

The script I’m running is a Python script as follows:

 

#!/usr/bin/python

def openDocument( ):
    print """""";   # (markup stripped in transit; presumably a DOCTYPE)

def openHTML( ):
    print "<html>";

def closeHTML( ):
    print "</html>";

def openHead( ):
    print """<head>\n\t""";

def closeHead( ):
    print "</head>";

def openBody( ):
    print "<body>"

def closeBody( ):
    print "</body>";

def title(string=""):
    print "\t<title>%s</title>" % string;

def p(string="", align="left", fontSize=3):
    print "<p align=\"%s\"><font size=\"%s\">\n%s\n</font></p>" % (align, fontSize, string);

def br( ):
    print "<br>";


if(__name__ == "__main__"):

    #openDocument();
    openHTML();
    #openHead();
    #title("Test Page");
    #closeHead();

    openBody();

    p("this is a test. this is a test. this is a test. this is a test. "
      "this is a test. this is a test. this is a test. this is a test. "
      "this is a test. this is a test.", "center", 4);

    closeBody();

    closeHTML();

It gives the same complaint if I’ve got openDocument and/or openHead
uncommented. The script is executing, otherwise the error wouldn’t show
up in the error_log with specific text from the script. Also, I’ve run
the script and redirected its output to an .html file and the server
loads it with no problems. 

 

Any ideas are appreciated.
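The usual cause of a "malformed header from script" error is that the script prints HTML before emitting the CGI header block. A minimal sketch of the required preamble (the helper name is made up):

```python
import sys

def cgi_header(content_type='text/html'):
    # CGI output must begin with "Header: value" lines followed by one
    # blank line; only after that may the HTML body start.
    return 'Content-Type: %s\r\n\r\n' % content_type

sys.stdout.write(cgi_header())
sys.stdout.write('<html><body>test</body></html>\n')
```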

 

Thanks,

 

Ron Griswold

Character TD

R!OT Pictures

[EMAIL PROTECTED]

 






-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Replacement for keyword 'global' good idea? (e.g. 'modulescope' or 'module' better?)

2005-08-09 Thread Ron Adam
[EMAIL PROTECTED] wrote:
> I've heard 2 people complain that word 'global' is confusing.
> 
> Perhaps 'modulescope' or 'module' would be better?
> 
> Am I the first peope to have thought of this and suggested it?
> 
> Is this a candidate for Python 3000 yet?
> 
> Chris


After reading though some of the suggestions in this thread, (but not 
all of them), how about something a bit more flexible but not too different.

For python 3000 if at all...

Have the ability to define names as shared that only live while the 
function that declared them has not exited.

The new statements could be called *share* and *shared*.

 def boo():
 shared x,y,z   # Used names predefined in shared name space.
 return x+1,y+2,z+3 

 def foo():
 x,y,z = 1,2,3
 share x,y,z # These would be visible to sub functions
 # but not visible to parent scopes once the
 # function ends. [*1]

 boo()   # modify shared x,y and z in foo.


[*1.]  Unless they have also declared the same names as share. (See below.)

'Share' is used to define names to be visible in child scopes, and 
'shared' allows access to shared names declared in parent scopes.

Having two keywords is more explicit, although this may work with a 
single keyword pretty much as it does now.

A single shared name space would still be used where 'share' adds names 
to be 'shared' and those names are deleted when the function that 
declared them exits.  They don't need to live past the life of the 
function they were first declared in.

In recursive functions, (or when a name is reshared), declaring a name 
as shared could just increment a reference counter, and it wouldn't be 
removed from shared until it reaches zero again.

Using 'share' twice with the same name in the same function should cause 
an error.  Using 'shared' with a name that is not in shared name space 
would cause an error.


Just a few thoughts.

Cheers,
Ron

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] implementation of copy standard lib

2005-08-16 Thread Ron Adam
Simon Brunning wrote:

> On 8/14/05, Martijn Brouwer <[EMAIL PROTECTED]> wrote:
> 
>>After profiling a small python script I found that approximately 50% of
>>the runtime of my script was consumed by one line: "import copy".
>>Another 15% was the startup of the interpreter, but that is OK for an
>>interpreted language. The copy library is used by another library I am
>>using for my scripts. Importing copy takes 5-10 times more time that
>>import os, string and re together!
>>I noticed that this lib is implemented in python, not in C. As I can
>>imagine that *a lot* of libs/scripts use the copy library, I think it
>>worthwhile to implement this lib in C.
>>
>>What are your opinions?
> 
> 
> I think that copy is very rarely used. I don't think I've ever imported it.
> 
> Or is it just me?

I use copy.deepcopy() sometimes, and more often [:] with lists. 
Dictionary objects have a copy method, and non-mutable objects behave 
as copies on assignment.

I too have wondered why copy isn't a builtin, and yet some builtin 
objects do make copies.
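The difference between the [:] shallow copy and copy.deepcopy() shows up as soon as the copied object nests mutables:

```python
import copy

nested = [[1, 2], [3, 4]]
shallow = nested[:]              # new outer list, same inner lists
deep = copy.deepcopy(nested)     # fully independent structure

nested[0].append(99)
assert shallow[0] == [1, 2, 99]  # shallow copy sees the mutation
assert deep[0] == [1, 2]         # deep copy does not
```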

Cheers,
Ron
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Replacement for keyword 'global' good idea? (e.g. 'modulescope' or 'module' better?)

2005-08-16 Thread Ron Adam
Antoon Pardon wrote:

> I disagree here. The problem with "global", at least how it is
> implemented in python, is that you only have access to module
> scope and not to intermediate scopes.
> 
> I also think there is another possibility. Use a symbol to mark
> the previous scope. e.g. x would be the variable in local scope.
> @.x would be the variable one scope up. @.@.x would be the
> variable two scopes up etc.

Looks like what you want is easier introspection and the ability to get 
the parent scope from it in a simple way.  Maybe something like a 
builtin '__self__' name that contains the information, then a possible 
short 'sugar' method to access it.   '__self__.__parent__' would become 
@ in your example, and '__self__.__parent__.__self__.__parent__' could 
become @.@.

Something other than '@' would be better I think.  A bare leading '.' is 
another possibility.  Then '..x' would be the x two scopes up.

This isn't the same as globals. Globals work the way they do because if 
they weren't automatically visible to all objects in a module you 
wouldn't be able to access any builtin functions or class's without 
declaring them as global (or importing them) in every function or class 
that uses them.

Cheers,
Ron

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] implementation of copy standard lib

2005-08-17 Thread Ron Adam
Michael Hudson wrote:
> Simon Brunning <[EMAIL PROTECTED]> writes:
> 
> 
>>I think that copy is very rarely used. I don't think I've ever imported it.
>>
>>Or is it just me?
> 
> 
> Not really.  I've used it once that I can recall, to copy a kind of
> generic "default value", something like:
> 
> def value(self, v, default):
> if hasattr(source, v): return getattr(source, v)
> else: return copy.copy(default)
> 
> (except not quite, there would probably be better ways to write
> exactly that).
> 
> Cheers,
> mwh

My most recent use of copy.deepcopy() was to save the state of a 
recursively built object, so that it could be restored before returning 
a result that involved making nested changes (with multiple methods) to 
the object's subparts as part of the result calculation.

The alternative would be to use a flag and shallow copies in all the 
methods that altered the object.  copy.deepcopy() was a lot easier as 
it's only needed in the method that initiates the result calculation.
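The save-and-restore pattern described above, in miniature (the class and method names are made up for illustration):

```python
import copy

class Calculator(object):
    def __init__(self):
        self.parts = {'values': [1, 2, 3]}

    def result(self):
        saved = copy.deepcopy(self.parts)   # snapshot before nested changes
        self.parts['values'].append(4)      # mutate subparts during the calc
        answer = sum(self.parts['values'])
        self.parts = saved                  # restore the original state
        return answer

c = Calculator()
assert c.result() == 10
assert c.parts == {'values': [1, 2, 3]}
```

One deepcopy at the top of the initiating method replaces per-method bookkeeping of every mutation.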

Cheers,
Ron
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pythonXX.dll size: please split CJK codecs out

2005-08-21 Thread Ron Adam
Martin v. Löwis wrote:

>>Can we at least undo this unfortunate move in time for 2.5? I would be 
>>grateful
>>if *at least* the CJK codecs (which are like 1Mb big) are splitted out of
>>python25.dll. IMHO, I would prefer having *more* granularity, rather than
>>*less*.
> 
> If somebody would formulate a policy (i.e. conditions under which
> modules go into python2x.dll, vs. going into separate files), I'm
> willing to implement it. This policy should best be formulated in
> a PEP.

+1  Yes, I think this needs to be addressed.

> The policy should be flexible wrt. to future changes. I.e. it should
> *not* say "do everything as in Python 2.3", because this means I
> would have to rip off the modules added after 2.3 entirely (i.e.
> not ship them at all). Instead, the policy should give clear guidance
> even for modules that are not yet developed.

Agree.

> It should be a PEP, so that people can comment. For example,
> I think I would be -1 on a policy "make python2x.dll as minimal
> as possible, containing only modules that are absolutely
> needed for startup".

Also agree,  Both the minimal and maximal dll size possible are ideals 
that are not the most optimal choices.

I would put the starting minimum boundary as:

1. "The minimum required to start the python interpreter with no 
additional required files."

Currently python 2.4 (on windows) does not yet meet that guideline, so 
it seems some modules still need to be added, while other modules (I 
haven't checked which) are probably not needed to meet that guideline.

This could be extended to:

2. "The minimum required to run an agreed upon set of simple Python 
programs."

I expect there may be a lot of differing opinions on just what those 
minimum Python programs should be.  But that is where the PEP process 
comes in.


Regards,
Ron


> Regards,
> Martin



Re: pythonXX.dll size: please split CJK codecs out

2005-08-21 Thread Ron Adam
Martin v. Löwis wrote:
> Ron Adam wrote:
> 
>>I would put the starting minimum boundary as:
>>
>>   1. "The minimum required to start the python interpreter with no
>>additional required files."
>>
>>Currently python 2.4 (on windows) does not yet meet that guideline, so
>>it seems some modules still need to be added while other modules, (I
>>haven't checked which), are probably not needed to meet that guideline.
> 
> 
> I'm not sure, either, but I *think* python24 won't load any .pyd file
> on interactive startup.
> 
> 
>>This could be extended to:
>>
>>   2. "The minimum required to run an agreed upon set of simple Python
>>programs."
>>
>>I expect there may be a lot of differing opinions on just what those
>>minimum Python programs should be.  But that is where the PEP process
>>comes in.
> 
> 
> As I mentioned earlier, there also should be a negative list: modules
> that depend on external libraries should not be incorporated into
> python24.dll. 

This fits under the above, rule #1, of not needing additional files.


> Most notably, this rules out zlib.pyd, _bsddb.pyd,
> and _ssl.pyd, all of which people may consider to be useful into these
> simple programs.

I would not consider those as being part of "simple" programs.  But 
that's only an opinion and we need something more objective than opinion.

Now that I think of it, rule 2 above should be...

 2. "The minimum (modules) required to run an agreed upon set of 
"common simple" programs.

Frequency of use is also an important consideration.

Maybe there's a way to classify a program's complexity based on a set of 
attributes.

So...  program simplicity could consider:

 1.  Complete program is a single .py file.
 2.  Not larger than 'n' lines.  (some reasonable limit)
 3.  Limited number of import statements.
 (less than 'n' modules imported)
 4.  Uses only stdio and/or basic file operations for input
 and output. (runs in interactive console or command line.)

Then ranking the frequency of imported modules from this set of programs 
could give a good hint as to what might be included and those less 
frequently used that may be excluded.

Setting a pythonxx.dll minimum file size goal could further help.  For 
example, if excluding modules results in less than the minimum goal, then 
a few extra, more frequently used modules could be included as a bonus.

This is obviously a "practical beats purity" exercise. ;-)

Cheers,
Ron




> Regards,
> Martin


Re: variable hell

2005-08-25 Thread Ron Garret
In article <[EMAIL PROTECTED]>,
 Benji York <[EMAIL PROTECTED]> wrote:

> Peter Maas wrote:
> >  >>> suffix = 'var'
> >  >>> vars()['a%s' % suffix] = 45
> >  >>> avar
> > 45
> 
> Quoting from http://docs.python.org/lib/built-in-funcs.html#l2h-76 about 
> the "vars" built in:
> 
> The returned dictionary should not be modified: the effects on the 
> corresponding symbol table are undefined.

If you really want to make something like this work you can define a 
class that would work like this:

vars = funkyclass()
varname = 'x'
vars[varname] = value
vars.x

But this is clearly a design mistake.  Either you know the names of the 
variables when you write the code or you do not.  If you know them you 
can simply assign them directly.  If you do not know them then you can't 
put them in the code to read their values anyway, and what you need is 
just a regular dictionary.
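A sketch of that last point (the names and values here are invented): when the variable name is only known at run time, a plain dictionary is the right container, not the symbol table.

```python
# Build names dynamically into a dict instead of into vars()/locals().
values = {}
for count, value in enumerate([45, 46, 47]):
    values['a%d' % count] = value   # 'a0', 'a1', 'a2'

print(values['a0'])  # look up by the computed name
```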

rg


Re: variable hell

2005-08-25 Thread Ron Garret
In article <[EMAIL PROTECTED]>,
 Robert Kern <[EMAIL PROTECTED]> wrote:

>  In the
> bowels of my modules, I may not know what the contents are at code-time,

Then how do you write your code?

rg


Re: variable hell

2005-08-25 Thread Ron Garret
In article <[EMAIL PROTECTED]>,
 Steve Holden <[EMAIL PROTECTED]> wrote:

> rafi wrote:
> > Reinhold Birkenfeld wrote:
> > 
> > 
>  exec(eval("'a%s=%s' % (count, value)"))
> >>>
> >>>why using the eval?
> >>>
> >>>exec ('a%s=%s' % (count, value))
> >>>
> >>>should be fine
> >>
> >>And this demonstrates why exec as a statement was a mistake ;)
> >>
> >>It actually is
> >>
> >>exec 'a%s=%s' % (count, value)
> > 
> > 
> > Noted.
> > 
> > In the meantime another question I cannot find an answer to: any idea 
> > why does eval() consider '=' as a syntax error?
> > 
> >  >>> eval ('a=1')
> > Traceback (most recent call last):
> >File "<stdin>", line 1, in ?
> >File "<string>", line 1
> >  a=1
> >   ^
> > SyntaxError: invalid syntax
> > 
> > Thanks
> > 
> Because eval() takes an expression as an argument, and assignment is a 
> statement.

And if you find this distinction annoying, try Lisp.
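The expression/statement split can be demonstrated directly (a minimal sketch):

```python
# eval() accepts only expressions; assignment is a statement, so it needs exec.
namespace = {}
exec('a = 1', namespace)            # statement: fine with exec
result = eval('a + 1', namespace)   # expression: fine with eval

try:
    eval('b = 2')                   # assignment inside eval: SyntaxError
    rejected = False
except SyntaxError:
    rejected = True

print(result, rejected)
```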

rg


Re: py to exe: suggestions?

2005-08-28 Thread Ron Adam
chris patton wrote:
> I need to convert a python file to an '.exe'. I've tried py2exe, and I
> don't like it because you have to include that huge dll and libraries.
> 
> Thanks for the Help!!

Do you want to create an exe to give to others,  or so that you can use 
it in windows easier just like any other exe?

If the later you may be able to tell py2exe to exclude dll's that are in 
your search path.

If you want something you can send to other windows users, then it's far 
easier to include the dll's and extra files than it is to try and make 
sure all the files the programs needs are installed correctly on the 
recipients computer.

I use:

py2exe - to create the main exe and gather all the needed files together.

Inno Setup - to create a windows installer complete with license, docs, 
Internet support links, and an uninstaller.

And a few other programs such as an icon editor to create icons, and 
Resource Hacker to change the tk window icons to my own. Py2exe will 
insert the exe icon, but not change the little tk window icons which 
reside in the tk dll files.

When it's all done you have an installable application from a single exe 
file that's very professional.  Most users won't need to know or care 
what language you developed your application with as long as it works.

Hope that helps,
Cheers,
Ron


Re: Bug in string.find; was: Re: Proposed PEP: New style indexing,was Re: Bug in slice type

2005-08-31 Thread Ron Adam
Antoon Pardon wrote:

> Op 2005-08-31, Bengt Richter schreef <[EMAIL PROTECTED]>:
> 
>>On 31 Aug 2005 07:26:48 GMT, Antoon Pardon <[EMAIL PROTECTED]> wrote:
>>
>>
>>>Op 2005-08-30, Bengt Richter schreef <[EMAIL PROTECTED]>:
>>>
>>>>On 30 Aug 2005 10:07:06 GMT, Antoon Pardon <[EMAIL PROTECTED]> wrote:
>>>>
>>>>
>>>>>Op 2005-08-30, Terry Reedy schreef <[EMAIL PROTECTED]>:
>>>>>
>>>>>>"Paul Rubin" <"http://phr.cx"@NOSPAM.invalid> wrote in message 
>>>>>>news:[EMAIL PROTECTED]
>>>>>>
>>>>>>
>>>>>>>Really it's x[-1]'s behavior that should go, not find/rfind.
>>>>>>
>>>>>>I complete disagree, x[-1] as an abbreviation of x[len(x)-1] is extremely 
>>>>>>useful, especially when 'x' is an expression instead of a name.
>>>>>
>>>>>I don't think the ability to easily index sequences from the right is
>>>>>in dispute. Just the fact that negative numbers on their own provide
>>>>>this functionality.
>>>>>
>>>>>Because I sometimes find it usefull to have a sequence start and
>>>>>end at arbitrary indexes, I have written a table class. So I
>>>>>can have a table that is indexed from e.g. -4 to +6. So how am
>>>>>I supposed to easily get at that last value?
>>>>
>>>>Give it a handy property? E.g.,
>>>>
>>>>table.as_python_list[-1]
>>>
>>>Your missing the point, I probably didn't make it clear.
>>>
>>>It is not about the possibilty of doing such a thing. It is
>>>about python providing a frame for such things that work
>>>in general without the need of extra properties in 'special'
>>>cases.
>>>
>>
>>How about interpreting seq[i] as an abbreviation of seq[i%len(seq)] ?
>>That would give a consitent interpretation of seq[-1] and no errors
>>for any value ;-)
> 
> 
> But the question was not about having a consistent interpretation for
> -1, but about an easy way to get the last value.
> 
> But I like your idea. I just think there should be two differnt ways
> to index. maybe use braces in one case.
> 
>   seq{i} would be pure indexing, that throws exceptions if you
>   are out of bound
> 
>   seq[i] would then be seq{i%len(seq)}

The problem with negative indices is that positive indices are zero 
based, but negative indices are 1 based.  Which leads to a non 
symmetrical situation.

Note that you can insert an item before the first item using slices. But 
not after the last item without using len(list) or some value larger 
than len(list).

 >>> a = list('abcde')
 >>> a[len(a):len(a)] = ['end']
 >>> a
['a', 'b', 'c', 'd', 'e', 'end']

 >>> a[-1:-1] = ['last']
 >>> a
['a', 'b', 'c', 'd', 'e', 'last', 'end'] # Second to last.

 >>> a[100:100] = ['final']
 >>> a
['a', 'b', 'c', 'd', 'e', 'last', 'end', 'final']


Cheers,
Ron



Re: Bug in string.find

2005-09-01 Thread Ron Adam
Fredrik Lundh wrote:
> Ron Adam wrote:
> 
> 
>>The problem with negative index's are that positive index's are zero
>>based, but negative index's are 1 based.  Which leads to a non
>>symmetrical situations.
> 
> 
> indices point to the "gap" between items, not to the items themselves.

So how do I express a -0, which should point to the gap after the last 
item?


> straight indexing returns the item just to the right of the given gap (this is
> what gives you the perceived assymmetry), slices return all items between
> the given gaps.


If this were symmetrical, then positive indices would return the value 
to the right and negative indices would return the value to the left.

Have you looked at negative steps?  They also are not symmetrical.

All of the following get the center 'd' from the string.

a = 'abcdefg'
print a[3] # d   4 gaps from beginning
print a[-4]# d   5 gaps from end
print a[3:4]   # d
print a[-4:-3] # d
print a[-4:4]  # d
print a[3:-3]  # d
print a[3:2:-1]# d   These are symmetric?!
print a[-4:-5:-1]  # d
print a[3:-5:-1]   # d
print a[-4:2:-1]   # d

This is why it confuses so many people.  It's a shame too, because slice 
objects could be so much more useful for indirectly accessing list 
ranges. But I think this needs to be fixed first.

Cheers,
Ron



Re: Bug in string.find

2005-09-02 Thread Ron Adam
Terry Reedy wrote:
> "Ron Adam" <[EMAIL PROTECTED]> wrote in message 
> news:[EMAIL PROTECTED]
> 
>>Fredrik Lundh wrote:
>>
>>>Ron Adam wrote:
>>>
>>>>The problem with negative index's are that positive index's are zero
>>>>based, but negative index's are 1 based.  Which leads to a non
>>>>symmetrical situations.
>>>
>>>indices point to the "gap" between items, not to the items themselves.
>>
>>So how do I express a -0?
> 
> 
> You just did ;-) but I probably do not know what you mean.

 b[-1:] = ['Z']# replaces last item
 b[-1:-0] = ['Z']  # this doesn't work

If you are using negative index slices, you need to check for end 
conditions because you can't address the end of the slice in a 
sequential/numerical way.

b = list('abcdefg')
for x in range(-len(b),-1):
 print b[x:x+2]

['a', 'b']
['b', 'c']
['c', 'd']
['d', 'e']
['e', 'f']
[]


b = list('abcdefg')
for x in range(-len(b),-1):
 if x<-2:
 print b[x:x+2]
 else:
 print b[x:]

['a', 'b']
['b', 'c']
['c', 'd']
['d', 'e']
['e', 'f']
['f', 'g']
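One way to sidestep the missing -0 without the special case above is to convert to positive indices once and then slice normally (a sketch of the same loop):

```python
# Convert negative indices to their positive equivalents up front,
# so the final pair needs no end-condition check.
b = list('abcdefg')
n = len(b)
pairs = [b[x + n : x + n + 2] for x in range(-n, -1)]
for pair in pairs:
    print(pair)   # last pair is ['f', 'g'], no special case needed
```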


>> Which should point to the gap after the last  item.
> 
> The slice index of the gap after the last item is len(seq).
> 
> 
>>>straight indexing returns the item just to the right of the given gap 
>>>(this is
>>>what gives you the perceived assymmetry), slices return all items 
>>>between
>>>the given gaps.
>>
>>If this were symmetrical, then positive index's would return the value
>>to the right and negative index's would return the value to the left.
> 
> As I posted before (but perhaps it arrived after you sent this), one number 
> indexing rounds down, introducing a slight asymmetry.

I didn't see that one, but I agree.  Single indices are asymmetric, 
positive slices with two indices are again symmetric, negative slices 
with negative strides or steps are again asymmetric.


>>Have you looked at negative steps?  They also are not symmetrical.
> 
> ???

print a[4:1:-1]#  6|g| 5|f| 4|e| 3|d| 2|c| 1|b| 0|a| ?
-> edc

print a[-3:-6:-1]  # -1|g|-2|f|-3|e|-4|d|-5|c|-6|b|-7|a|-8|
-> edc

# special case '::'
print a[6::-1] #  6|g| 5|f| 4|e| 3|d| 2|c| 1|b| 0|a| ?
-> gfedcba

print a[-1:-8:-1]  # -1|g|-2|f|-3|e|-4|d|-5|c|-6|b|-7|a|-8
-> gfedcba



>>All of the following get the center 'd' from the string.
>>
>>a = 'abcdefg'
>>print a[3] # d   4 gaps from beginning
>>print a[-4]# d   5 gaps from end
>  
> It is 3 and 4 gaps *from* the left and right end to the left side of the 
> 'd'.  You can also see the asymmetry as coming from rounding 3.5 and -3.5 
> down to 3 and down to -4.

Since single indexing only refers to existing items and isn't used to 
insert between items, this still works even with the slight asymmetry.


>>print a[3:4]   # d
>>print a[-4:-3] # d
> 
> These are is symmetric, as we claimed.

Yes, no problem here except for addressing the -0th (end gap) position 
without special casing to either a positive index or a[-n:].


>>print a[3:2:-1]# d   These are symetric?!
>>print a[-4:-5:-1]  # d
>>print a[3:-5:-1]   # d
>>print a[-4:2:-1]   # d
> 
> The pattern seems to be: left-gap-index : farther-to-left-index : -1 is 
> somehow equivalent to left:right, but I never paid much attention to 
> strides and don't know the full rule.

a[start:stop:-1]
a[stop:start]# exchange index's
a.reverse()  # reverse string

a[4:1:-1]#  6 |g| 5 |f| 4 |e| 3 |d| 2 |c| 1 |b| 0 |a| ?
a[1:4]   #  ? |a| 0 |b| 1 |c| 2 |d| 3 |e| 4 |f| 5 |g| 6
a.reverse()  # -> edc

Notice the indices are 1 less than with positive strides.


> Stride slices are really a different subject from two-gap slicing.  They 
> were introduced in the early years of Python specificly and only for 
> Numerical Python.  The rules were those needed specificly for Numerical 
> Python arrays.  They was made valid for general sequence use only a few 
> years ago.  I would say that they are only for careful mid-level to expert 
> use by those who actually need them for their code.

I'd like to see those use cases.  I have a feeling there are probably 
better ways to do it now.

Doing a quick search in python24/lib, there are only two places that use 
a negative step or stride value to reverse a sequence.

-- PICKLE.PY
 return binary[::-1]
 ashex = _binascii.hexlify(data[::-1])

I don't think people would miss negative strides much if they were 
removed.  Replacing these cases with reverse() methods shouldn't be that 
difficult.
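Both stdlib uses above are plain reversals, so reversed() expresses the same thing without a negative stride (a sketch of the equivalence):

```python
# A [::-1] slice and reversed() produce the same reversal.
data = b'\x01\x02\x03'
flipped = bytes(reversed(data))       # same as data[::-1]

s = 'abcdefg'
backwards = ''.join(reversed(s))      # same as s[::-1]

print(flipped, backwards)
```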

Cheers,
Ron


> Terry J. Reedy



Re: Bug in string.find

2005-09-03 Thread Ron Adam
Terry Reedy wrote:

>>b[-1:] = ['Z']# replaces last item
>>b[-1:-0] = ['Z']  # this doesn't work
>>
>>If you are using negative index slices, you need to check for end
>>conditions because you can't address the end of the slice in a
>>sequential/numerical way.
> 
> OK, now I understand your question, the answer 'get a one's complement 
> machine', and your point that without a -0 separate from +0, there is a 
> subtle asymmetry that is easy to overlook.

Yes, I don't think it falls in the category of bugs; it's probably closer 
to a minor wart.

>>>As I posted before (but perhaps it arrived after you sent this), one 
>>>number
>>>indexing rounds down, introducing a slight asymmetry.
>>
>>I didn't see that one,
> 
> Perhaps you did not see my other long post, where I drew ascii pictures?

Yes, I saw it.  I think we are expressing the same things in different ways.


> In the US (and UK?), the ground level floor of a multifloor building is the 
> first floor.  In continental Europe (all or just some?), the ground floor 
> is the ground (effectively zeroth) floor while the first floor up is the 
> first stage (resting place on the stairway).

In the VA Hospital here in Tampa, the ground floor in the front elevator 
is on the same level as the lobby, while the ground floor in the 
elevator on the other side of the building is on the floor below. ;-)


>>I don't think people would miss negative strides much if they were
>>removed. Replacing these case's with reverse() methods shouldn't be that
>>difficult.
> 
> Yes, the introduction of reversed partly obsoleted the use of negative 
> strides, at least outside of its numerical array origin.
> 
> Terry J. Reedy





Re: Bug in string.find; was: Re: Proposed PEP: New style indexing,was Re: Bug in slice type

2005-09-03 Thread Ron Adam
Bengt Richter wrote:

> IMO the problem is that the index sign is doing two jobs, which for zero-based
> reverse indexing have to be separate: i.e., to show direction _and_ a _signed_
> offset which needs to be realtive to the direction and base position.

Yes, that's definitely part of it.


> A list-like class, and an option to use a zero-based reverse index will 
> illustrate:
> 
class Zbrx(object):
> 
>  ... def __init__(self, value=0):
>  ... self.value = value
>  ... def __repr__(self): return 'Zbrx(%r)'%self.value
>  ... def __sub__(self, other): return Zbrx(self.value - other)
>  ... def __add__(self, other): return Zbrx(self.value + other)
>  ...
>  >>> class Zbrxlist(object):
>  ... def normslc(self, slc):
>  ... sss = [slc.start, slc.stop, slc.step]
>  ... for i,s in enumerate(sss):
>  ... if isinstance(s, Zbrx): sss[i] = len(self.value)-1-s.value
>  ... return tuple(sss), slice(*sss)
>  ... def __init__(self, value):
>  ... self.value = value
>  ... def __getitem__(self, i):
>  ... if isinstance(i, int):
>  ... return '[%r]: %r'%(i, self.value[i])
>  ... elif isinstance(i, Zbrx):
>  ... return '[%r]: %r'%(i, self.value[len(self.value)-1-i.value])
>  ... elif isinstance(i, slice):
>  ... sss, slc = self.normslc(i)
>  ... return '[%r:%r:%r]: %r'%(sss+ (list.__getitem__(self.value, 
> slc),))
>  ... def __setitem__(self, i, v):
>  ... if isinstance(i, int):
>  ... list.__setitem__(self, i, v)
>  ... elif isinstance(i, slice):
>  ... sss, slc = self.normslc(i)
>  ... list.__setitem__(self.value, slc, v)
>  ... def __repr__(self): return 'Zbrxlist(%r)'%self.value
>  ...
>  >>> zlast = Zbrx(0)
>  >>> zbr10 = Zbrxlist(range(10))
>  >>> zbr10[zlast]
>  '[Zbrx(0)]: 9'
>  >>> zbr10[zlast:]
>  '[9:None:None]: [9]'
>  >>> zbr10[zlast:zlast] = ['end']
>  >>> zbr10
>  Zbrxlist([0, 1, 2, 3, 4, 5, 6, 7, 8, 'end', 9])
>  >>> ztop = Zbrx(-1)
>  >>> zbr10[ztop:ztop] = ['final']
>  >>> zbr10
>  Zbrxlist([0, 1, 2, 3, 4, 5, 6, 7, 8, 'end', 9, 'final'])
>  >>> zbr10[zlast:]
>  "[11:None:None]: ['final']"
>  >>> zbr10[zlast]
>  "[Zbrx(0)]: 'final'"
>  >>> zbr10[zlast+1]
>  '[Zbrx(1)]: 9'
>  >>> zbr10[zlast+2]
>  "[Zbrx(2)]: 'end'"
> 
>  >>> a = Zbrxlist(list('abcde'))
>  >>> a
>  Zbrxlist(['a', 'b', 'c', 'd', 'e'])
> 
> Forgot to provide a __len__ method ;-)
>  >>> a[len(a.value):len(a.value)] = ['end']
>  >>> a
>  Zbrxlist(['a', 'b', 'c', 'd', 'e', 'end'])
> 
> lastx refers to the last items by zero-based reverse indexing
>  >>> a[lastx]
>  "[Zbrx(0)]: 'end'"
>  >>> a[lastx:lastx] = ['last']
>  >>> a
>  Zbrxlist(['a', 'b', 'c', 'd', 'e', 'last', 'end'])
> 
> As expected, or do you want to define different semantics?
> You still need to spell len(a) in the slice somehow to indicate
> beond the top. E.g.,
> 
>  >>> a[lastx-1:lastx-1] = ['final']
>  >>> a
>  Zbrxlist(['a', 'b', 'c', 'd', 'e', 'last', 'end', 'final'])
> 
> Perhaps you can take the above toy and make something that works
> they way you had in mind? Nothing like implementation to give
> your ideas reality ;-)

Thanks, I'll play around with it.  ;-)

As you stated before the index is doing two jobs, so limiting it in some 
way may be what is needed.  Here's a few possible (or impossible) options.

(Some of these aren't pretty.)


* Disallow *all* negative values, use values of start/stop to determine 
direction. Indexing from far end needs to be explicit (len(n)-x).

a[len(a):0]reverse order
a[len(a):0:2]  reverse order, even items

(I was wondering why lists couldn't have len, min, and max attributes 
that are updated whenever the list is modified, in place of using the 
len, min, and max functions? Would the overhead be that much?)

   a[len.a:0]


* Disallow negative index's,  use negative steps to determine indexing 
direction. Order of index's to determine output order.

a[len(a):0:-1] forward order, zero based indexing from end.
a[0:len(a):-1] reverse order, zero based from end.
a[0:1:-1]  last item

It works, but a single a[-1] is used extremely often.  I don't think having 
to do a[0:1:-1] would be very popular.


* A reverse index symbol/operator could be used ...

a[~0]  ->   last item,  This works BTW. :-)  ~0 == -1
a[~1]  ->   next to last item

(Could this be related to the original intended use?)
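The ~ trick works because of two's complement: ~n is -(n + 1), so a[~n] counts zero based from the right (a small demonstration):

```python
# ~n == -(n + 1), giving a zero-based index from the right end.
a = 'abcdefg'
print(~0, ~1, ~2)     # -1 -2 -3
print(a[~0], a[~1])   # last item, next to last
```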


a[~0:~0]   slice after end ?.  Doesn't work correctly.

What is needed here is to index from the left instead of the right.

a[~0] -> item to left of end gap.

*IF* this could be done (I'm sure there's some reason why this won't 
work ;-), then all indexing operations with '~' could be symmetric with 
all positive indexing operations. Then in Python 3k true negative 
indices could cause an exception... fewer bugs, I bet.  And then negative 
steps could reverse lists with a lot less co
