Re: Fast recursive generators?

2011-10-28 Thread Gabriel Genellina
On Fri, 28 Oct 2011 15:10:14 -0300, Michael McGlothlin wrote:



> I'm trying to generate a list of values where each value is dependent
> on the previous value in the list and this bit of code needs to be run
> repeatedly so I'd like it to be fast. It doesn't seem that
> comprehensions will work, as each pass needs to take the result of the
> previous pass as its argument. map() doesn't seem likely. filter() or
> reduce() seem workable but not very clean. Is there a good way to do
> this? About the best I can get is this:
>
> l = [ func ( start ) ]
> f = lambda a: func ( l[-1] ) or a
> filter ( f, range ( big_number, -1, -1 ) )
>
> I guess I'm looking for something more like:
>
> l = do ( lambda a: func ( a ), big_number, start )


What about a generator function?

import itertools

def my_generator():
    prev = 1
    yield prev
    while True:
        this = 2*prev
        yield this
        prev = this

print list(itertools.islice(my_generator(), 10))

--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: Fast recursive generators?

2011-10-28 Thread Terry Reedy

On 10/28/2011 8:49 PM, Michael McGlothlin wrote:


>> Better to think of a sequence of values, whether materialized as a
>> 'list' or not.

> The final value will actually be a string but it seems it is usually
> faster to join a list of strings than to concat them one by one.

.join() takes an iterable of strings as its argument.
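For instance (a trivial illustration), it can consume a generator
directly, with no intermediate list:

def pieces():
    s = 'a'
    for _ in range(3):
        yield s
        s += 'a'

print ''.join(pieces())   # prints 'aaaaaa'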


>> Comprehensions combine map and filter, both of which conceptually work on
>> each item of a pre-existing list independently. (I am aware that the
>> function passed can stash away values to create dependence.)

> The problem is I don't really have a pre-existing list.


So, as I said, map, filter, and comprehensions are not applicable to 
your problem.



>> def do(func, N, value):
>>     yield value
>>     for i in range(1,N):
>>         value = func(value)
>>         yield value
>>
>> For more generality, make func a function of both value and i.
>> If you need a list, "l = list(do(f,N,x))", but if you do not, you can do
>> "for item in do(f,N,x):" and skip making the list.

> Generators seem considerably slower than using comprehension or
> map/filter


So what? A saw may cut faster than a hammer builds, but I hope you don't 
grab a saw when you need a hammer.


> /reduce.

Do you actually have an example where ''.join(reduce(update, N, start)) 
is faster than ''.join(update_gen(N, start))? Resuming a generator n 
times should be *faster* than n calls to the update function of reduce 
(which will actually have to build a list).
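(One way to actually measure that, a sketch with a made-up update
function; the sizes are arbitrary:)

import timeit

setup = """
def update(s, _):
    return s + 'x'

def update_gen(n, start):
    value = start
    for _ in range(n):
        value = value + 'x'
        yield value
"""

print timeit.timeit("reduce(update, range(1000), '')", setup=setup, number=100)
print timeit.timeit("list(update_gen(1000, ''))", setup=setup, number=100)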


---
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: How to mix-in __getattr__ after the fact?

2011-10-28 Thread Lie Ryan

On 10/29/2011 05:20 AM, Ethan Furman wrote:


> Python only looks up __xxx__ methods in new-style classes on the class
> itself, not on the instances.
>
> So this works:
>
> 8<
> class Cow(object):
>     pass
>
> def attrgetter(self, a):
>     print "CAUGHT: Attempting to get attribute", a
>
> bessie = Cow()
>
> Cow.__getattr__ = attrgetter
>
> print bessie.milk
> 8<


a minor modification might be useful:

bessie = Cow()
bessie.__class__.__getattr__ = attrgetter
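(Worth noting, with an illustrative second instance: assigning through
__class__ still patches the class itself, so every instance is affected.)

daisy = Cow()
print daisy.milk   # also caught: the class, not just bessie, was patched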


--
http://mail.python.org/mailman/listinfo/python-list


Re: Assigning generator expressions to ctype arrays

2011-10-28 Thread Patrick Maupin
On Oct 28, 8:01 pm, Steven D'Aprano wrote:
> > ALREADY LOSES DATA if the iterator isn't the right size and it raises an
> > exception.
>
> Yes. What's your point? This fact doesn't support your proposal in the
> slightest.

You earlier made the argument that "If the slice has too few elements,
you've just blown away the entire iterator for no good reason.  If the
slice is the right length, but the iterator doesn't next raise
StopIteration, you've just thrown away one perfectly good value. Hope
it
wasn't something important."

In other words, you WERE arguing that it's very important not to lose
information.  Now that I show that information is ALREADY LOST in a
similar scenario, you are arguing that's irrelevant.  Whatever.

> You have argued against using a temporary array. I quote:
>
> [Aside: how do you know this is not just inefficient but *incredibly* so?]

By profiling my app, looking for places to save time without having to
resort to C.

> But that's exactly what happens in tuple unpacking:
> the generator on the right hand side is unpacked into
> a temporary tuple, and then assigned to the targets
> on the left hand side.

Agreed.  But Terry was making the (correct) argument that people using
ctypes are often looking for efficiency.

> If the number of elements on both sides aren't the same,
> the assignment fails. That is, tuple unpacking behaves
> like (snip)

Yes, and that loses information from the right hand side if it fails
and the right hand side happens to be an iterator.  For some reason
you're OK with that in this case, but it was the worst thing in the
world a few messages ago.

> This has the important property that the
> assignment is atomic: it either succeeds in full,
> or it doesn't occur.

Not right at all.

>The only side-effect is to exhaust the generator,
> which is unavoidable given that generators don't have a
> length.

Yes.  But earlier you were acting like it would be problematic for me
to lose information out of the generator if there were a failure, and
now the sanctity of the information on the LHS is so much more than on
the RHS.  Frankly, that's all a crock.  In your earlier email, you
argued that my proposal loses information, when in fact, in some cases
it PRESERVES information (the very information you are trying to
transfer) that ISN'T preserved when this temp array is tossed, and the
only information it LOSES is information the programmer declared his
clear intent to kill.  But that's an edge case anyway.

> Without temporary storage for the right hand side,
> you lose the property of atomicism. That would be
> unacceptable.

As if the temporary storage workaround exhibited the "necessary"
atomicity, or as if you have even showed a valid argument why the
atomicity is important for this case.

> In the case of the ctypes array, the array slice assignment is also
> treated as atomic: it either succeeds in full, or it fails in full.
> This is an important property. Unlike tuple unpacking, the array is even more
> conservative about what is on the right hand size: it doesn't accept
> iterators at all, only sequences. This is a sensible default,

How it works is not a bad start, but it's arguably unfinished.

> because it is *easy* to work around: if you want to unpack the iterator, just 
> make a temporary list: (snip)

I already said I know the workaround.  I don't know why you can't
believe that.  But one of the purposes of the iterator protocol is to
reduce the number of cases where you have to create huge lists.

> Assignment remains atomic, and the generator will be unpacked into a
> temporary list at full C speed.

Which can be very slow if your list has several million items in it.

> If you don't care about assignment being atomic -- and it's your right to
> weaken the array contract in your own code

Currently, there IS no contract between ctype arrays and generators.

I'm suggesting that it would be useful to create one, and further
suggesting that if a programmer attempts to load a ctypes array from a
generator of the wrong size, it is certainly important to let the
programmer know he screwed up, but it is not at all important that
some elements of the ctypes array, that the programmer was in fact
trying to replace, were in fact correctly replaced before the size
error was noticed.

> -- feel free to write your own
> helper function based on your earlier suggestion: (snip)

Doing this in Python is also slow and painful, and the tradeoff of
this vs the temp list depends on the system, the caching, the amount
of memory available, and the size of the data to be transferred.  I
could do it in C, but that defeats my whole purpose of using ctypes to
avoid having to ship C code to customers.

> But this non-atomic behaviour would be entirely inappropriate as the
> default behaviour for a ctypes array.

That's obviously your opinion, but your supporting arguments are quite
weak.

Regards,
Pat
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Need Windows user / developer to help with Pynguin

2011-10-28 Thread Andrew Berg
On 10/27/2011 5:36 PM, Lee Harr wrote:
> What message do you get when trying to download?
It said something like "You're trying to download from a forbidden
country. That's all we know." Anyway, I was able to get the files.

Once everything is set up, it seems to work. I haven't done any serious
testing, but it opens and I can move the penguin and use the examples.

BTW, you may want to look into py2exe or cx_Freeze to make Windows
binaries. Also, the Python installer associates .py and .pyw files with
python.exe and pythonw.exe respectively, so if you add the extension as
Terry mentioned, it should work.

-- 
CPython 3.2.2 | Windows NT 6.1.7601.17640 | Thunderbird 7.0
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Assigning generator expressions to ctype arrays

2011-10-28 Thread Steven D'Aprano
On Fri, 28 Oct 2011 16:27:37 -0700, Patrick Maupin wrote:

> And, BTW, the example you give of, e.g.
> 
> a,b,c = (some generator expression)
> 
> ALREADY LOSES DATA if the iterator isn't the right size and it raises an
> exception.

Yes. What's your point? This fact doesn't support your proposal in the 
slightest. You have argued against using a temporary array. I quote:

"It is incredibly inefficient to have to create a temp array."

[Aside: how do you know this is not just inefficient but *incredibly* so?]

But that's exactly what happens in tuple unpacking: the generator on the 
right hand side is unpacked into a temporary tuple, and then assigned to 
the targets on the left hand side. If the number of elements on both 
sides aren't the same, the assignment fails. That is, tuple unpacking 
behaves like this pseudo-code:

targets = left hand side
values = tuple(right hand side)
if len(targets) != len(values):
  fail
otherwise:
  for each pair target, value:
    target = value

This has the important property that the assignment is atomic: it either 
succeeds in full, or it doesn't occur. The only side-effect is to exhaust 
the generator, which is unavoidable given that generators don't have a 
length.

Without temporary storage for the right hand side, you lose the property 
of atomicism. That would be unacceptable.

In the case of the ctypes array, the array slice assignment is also 
treated as atomic: it either succeeds in full, or it fails in full. This 
is an important property. Unlike tuple unpacking, the array is even more 
conservative about what is on the right hand size: it doesn't accept 
iterators at all, only sequences. This is a sensible default, because it 
is *easy* to work around: if you want to unpack the iterator, just make a 
temporary list:

array[:] = list(x+1 for x in range(32))

Assignment remains atomic, and the generator will be unpacked into a 
temporary list at full C speed.

If you don't care about assignment being atomic -- and it's your right to 
weaken the array contract in your own code -- feel free to write your own 
helper function based on your earlier suggestion:

"It merely needs to fill the slice and then ask for one more and check 
that StopIteration is raised."

def array_assign(array, values):
    try:
        n = len(values)
    except TypeError:  # no len(): values is an iterator
        try:
            for i in xrange(len(array)):
                array[i] = next(values)  # or values.next() in Python 2.5
        except StopIteration:
            raise TypeError('too few items')
        try:
            next(values)
        except StopIteration:
            pass
        else:
            raise TypeError('too many values')
    else:
        if n != len(array):
            raise TypeError('wrong number of items')
        array[:] = values  # runs at full C speed
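(A usage sketch for the helper above; the sizes are illustrative:)

from ctypes import c_uint

arr = (10 * c_uint)()
array_assign(arr, (i * i for i in xrange(10)))   # fills from a generator
print list(arr)   # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]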


But this non-atomic behaviour would be entirely inappropriate as the 
default behaviour for a ctypes array.



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: save tuple of simple data types to disk (low memory foot print)

2011-10-28 Thread Steven D'Aprano
On Fri, 28 Oct 2011 22:47:42 +0200, Gelonida N wrote:

> Hi,
> 
> I would like to save many dicts with a fixed amount of keys tuples to a
> file  in a memory efficient manner (no random, but only sequential
> access is required)

What do you call "many"? Fifty? A thousand? A thousand million? How many 
items in each dict? Ten? A million?

What do you mean "keys tuples"?


> As the keys are the same for each entry  I considered converting them to
> tuples.

I don't even understand what that means. You're going to convert the keys 
to tuples? What will that accomplish?


> The tuples contain only strings, ints (long ints) and floats (double)
> and the data types for each position within the tuple are fixed.
> 
> The fastest and simplest way is to pickle the data or to use json. Both
> formats however are not that optimal.

How big are your JSON files? 10KB? 10MB? 10GB?

Have you tried using pickle's space-efficient binary format instead of 
text format? Try using protocol=2 when you call pickle.Pickler.

Or have you considered simply compressing the files?
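For instance (a sketch; the file name and sample data are invented):

import cPickle as pickle
import gzip

data = [('abc', 12345678901234, 3.14)] * 1000

with gzip.open('data.pkl.gz', 'wb') as f:
    pickle.dump(data, f, protocol=2)   # compact binary pickle, gzipped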


> I could store ints and floats with pack. As strings have variable length
> I'm not sure how to save them efficiently (except adding a length first
> and then the string).

This isn't 1980 and you're very unlikely to be using 720KB floppies. 
Premature optimization is the root of all evil. Keep in mind that when 
you save a file to disk, even if it contains only a single bit of data, 
the actual space used will be an entire block, which on modern hard 
drives is very likely to be 4KB. Trying to compress files smaller than a 
single block doesn't actually save you any space.


> Is there already some 'standard' way or standard library to store such
> data efficiently?

Yes. Pickle and JSON plus zip or gzip.


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Fast recursive generators?

2011-10-28 Thread Michael McGlothlin
>> I'm trying to generate a list of values
>
> Better to think of a sequence of values, whether materialized as a 'list' or
> not.

The final value will actually be a string but it seems it is usually
faster to join a list of strings than to concat them one by one.

>> where each value is dependent
>> on the previous value in the list and this bit of code needs to be run
>> repeatedly so I'd like it to be fast. It doesn't seem that
>> comprehensions will work, as each pass needs to take the result of the
>> previous pass as its argument. map() doesn't seem likely. filter() or
>
> Comprehensions combine map and filter, both of which conceptually work on
> each item of a pre-existing list independently. (I am aware that the
> function passed can stash away values to create dependence.)

The problem is I don't really have a pre-existing list. I just want a
tight loop to generate the list from the initial value. Probably I
could do something similar to my filter() method below but it seems
really convoluted. I thought maybe there was a better way I just
didn't know about. Python is full of neat tricks but it can be hard to
remember them all.

>> reduce() seem workable but not very clean. Is there a good way to do
>> this? About the best I can get is this:
>>
>> l = [ func ( start ) ]
>> f = lambda a: func ( l[-1] ) or a
>> filter ( f, range ( big_number, -1, -1 ) )
>>
>>
>> I guess I'm looking for something more like:
>>
>> l = do ( lambda a: func ( a ), big_number, start )
>
> Something like
>
> def do(func, N, value):
>    yield value
>    for i in range(1,N):
>        value = func(value)
>        yield value
>
> ?
> For more generality, make func a function of both value and i.
> If you need a list, "l = list(do(f,N,x))", but if you do not, you can do
> "for item in do(f,N,x):" and skip making the list.

Generators seem considerably slower than using comprehension or
map/filter/reduce. I have tons of memory to work in (~100GB actual
RAM) and the resulting data isn't that big so I'm more concerned about
CPU.


Thanks.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Assigning generator expressions to ctype arrays

2011-10-28 Thread Patrick Maupin
On Oct 28, 4:51 pm, Patrick Maupin  wrote:
> On Oct 28, 3:19 am, Terry Reedy  wrote:
>
> > On 10/28/2011 3:21 AM, Steven D'Aprano wrote:
>
> > > If the slice has too few elements, you've just blown away the entire
> > > iterator for no good reason.
> > > If the slice is the right length, but the iterator doesn't next raise
> > > StopIteration, you've just thrown away one perfectly good value. Hope it
> > > wasn't something important.
>
> > You have also over-written values that should be set back to what they
> > were, before the exception is raised, which is why I said the test needs
> > to be done with a temporary array.
>
> Sometimes when exceptions happen, data is lost. You both make a big
> deal out of simultaneously (a) not placing burden on the normal case
> and (b) defining the normal case by way of what happens during an
> exception.  Iterators are powerful and efficient, and ctypes are
> powerful and efficient, and the only reason you've managed to give why
> I shouldn't be able to fill a ctype array slice from an iterator is
> that, IF I SCREW UP and the iterator doesn't produce the right amount
> of data, I will have lost some data.
>
> Regards,
> Pat

And, BTW, the example you give of, e.g.

a,b,c = (some generator expression)

ALREADY LOSES DATA if the iterator isn't the right size and it raises
an exception.

It doesn't overwrite a or b or c, but you're deluding yourself if you
think that means it hasn't altered the system state.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: save tuple of simple data types to disk (low memory foot print)

2011-10-28 Thread Roy Smith
In article ,
 Gelonida N  wrote:

> I would like to save many dicts with a fixed amount of keys
> tuples to a file  in a memory efficient manner (no random, but only
> sequential access is required)

There's two possible scenarios here.  One, which you seem to be 
exploring, is to carefully study your data and figure out the best way 
to externalize it which reduces volume.

The other is to just write it out in whatever form is most convenient 
(JSON is a reasonable thing to try first), and compress the output.  Let 
the compression algorithms worry about extracting the entropy.  You may 
be surprised at how well it works.  It's also an easy experiment to try, 
so if it doesn't work well, at least it didn't cost you much to find out.
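A sketch of that second approach (the names and record layout are
invented):

import gzip
import json

records = [{'name': 'abc', 'count': 42, 'score': 3.14}] * 1000

with gzip.open('records.json.gz', 'wb') as f:
    for rec in records:
        f.write(json.dumps(rec) + '\n')   # one JSON object per line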
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Forking simplejson

2011-10-28 Thread Nathan Rice
I've found that in a lot of cases getting a patch submitted is only
half about good engineering.  The other half is politics.  I like one
of those things, I don't like the other, and I don't want to take time
out of my coding schedule to write something if in the end a reviewer
shoots down my patch for contrived reasons.  I don't know what the
python committers are like but I guess you could say I'm once bitten
twice shy.

Nathan

On Fri, Oct 28, 2011 at 4:52 PM, Terry Reedy  wrote:
> On 10/28/2011 1:20 PM, Nathan Rice wrote:
>>
>> Just a random note, I actually set about the task of re-implementing a
>> json encoder which can be subclassed, is highly extensible, and uses
>> (mostly) sane coding techniques (those of you who've looked at
>> simplejson's code will know why this is a good thing).  So far
>> preliminary tests show the json only subclass of the main encoder
>> basically tied in performance with the python implementation of
>> simplejson.  The C version of simplejson does turn in a performance
>> about 12x faster, but that's apples to oranges.  The design of the
>> encoder would also make a XML serializer pretty straight forward to
>> implement as well (not that I care about XML, *blech*).
>>
>> I'm torn between just moving on to some of my other coding tasks and
>> putting some time into this to make it pass the simplejson/std lib
>> json tests.  I really do think the standard lib json encoder is bad
>
> Python is the result of people who thought *something* was 'bad'
>
>> and I would like to see an alternative in there
>
> and volunteered the effort to make something better.
>
>> but I'm hesitant to get involved.
>
> As someone who is involved and tries to encourage others, I am curious why.
>
> --
> Terry Jan Reedy
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Assigning generator expressions to ctype arrays

2011-10-28 Thread Patrick Maupin
On Oct 28, 3:19 am, Terry Reedy  wrote:
> On 10/28/2011 3:21 AM, Steven D'Aprano wrote:
>
> > If the slice has too few elements, you've just blown away the entire
> > iterator for no good reason.
> > If the slice is the right length, but the iterator doesn't next raise
> > StopIteration, you've just thrown away one perfectly good value. Hope it
> > wasn't something important.
>
> You have also over-written values that should be set back to what they
> were, before the exception is raised, which is why I said the test needs
> to be done with a temporary array.
>

Sometimes when exceptions happen, data is lost. You both make a big
deal out of simultaneously (a) not placing burden on the normal case
and (b) defining the normal case by way of what happens during an
exception.  Iterators are powerful and efficient, and ctypes are
powerful and efficient, and the only reason you've managed to give why
I shouldn't be able to fill a ctype array slice from an iterator is
that, IF I SCREW UP and the iterator doesn't produce the right amount
of data, I will have lost some data.

Regards,
Pat
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Forking simplejson

2011-10-28 Thread Terry Reedy

On 10/28/2011 1:20 PM, Nathan Rice wrote:

> Just a random note, I actually set about the task of re-implementing a
> json encoder which can be subclassed, is highly extensible, and uses
> (mostly) sane coding techniques (those of you who've looked at
> simplejson's code will know why this is a good thing).  So far
> preliminary tests show the json only subclass of the main encoder
> basically tied in performance with the python implementation of
> simplejson.  The C version of simplejson does turn in a performance
> about 12x faster, but that's apples to oranges.  The design of the
> encoder would also make a XML serializer pretty straight forward to
> implement as well (not that I care about XML, *blech*).
>
> I'm torn between just moving on to some of my other coding tasks and
> putting some time into this to make it pass the simplejson/std lib
> json tests.  I really do think the standard lib json encoder is bad

Python is the result of people who thought *something* was 'bad'

> and I would like to see an alternative in there


and volunteered the effort to make something better.

> but I'm hesitant to get involved.

As someone who is involved and tries to encourage others, I am curious why.

--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


save tuple of simple data types to disk (low memory foot print)

2011-10-28 Thread Gelonida N
Hi,

I would like to save many dicts with a fixed amount of keys
tuples to a file  in a memory efficient manner (no random, but only
sequential access is required)

As the keys are the same for each entry  I considered converting them to
tuples.

The tuples contain only strings, ints (long ints) and floats (double)
and the data types for each position within the tuple are fixed.

The fastest and simplest way is to pickle the data or to use json.
Both formats however are not that optimal.


I could store ints and floats with pack. As strings have variable length
I'm not sure how to save them efficiently
(except adding a length first and then the string).
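That length-prefix idea looks something like this with struct (the
format codes here are one arbitrary choice, not a standard):

import struct

def pack_record(s, i, f):
    data = s.encode('utf-8')
    # 4-byte length prefix, the string bytes, then a long long and a double
    return struct.pack('<I', len(data)) + data + struct.pack('<qd', i, f)

def unpack_record(buf):
    (n,) = struct.unpack_from('<I', buf, 0)
    s = buf[4:4 + n].decode('utf-8')
    i, f = struct.unpack_from('<qd', buf, 4 + n)
    return s, i, f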

Is there already some 'standard' way or standard library to store
such data efficiently?

Thanks in advance for any suggestion.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Fast recursive generators?

2011-10-28 Thread Terry Reedy

On 10/28/2011 2:10 PM, Michael McGlothlin wrote:

> I'm trying to generate a list of values

Better to think of a sequence of values, whether materialized as a
'list' or not.

> where each value is dependent
> on the previous value in the list and this bit of code needs to be run
> repeatedly so I'd like it to be fast. It doesn't seem that
> comprehensions will work, as each pass needs to take the result of the
> previous pass as its argument. map() doesn't seem likely. filter() or

Comprehensions combine map and filter, both of which conceptually work
on each item of a pre-existing list independently. (I am aware that the
function passed can stash away values to create dependence.)

> reduce() seem workable but not very clean. Is there a good way to do
> this? About the best I can get is this:
>
> l = [ func ( start ) ]
> f = lambda a: func ( l[-1] ) or a
> filter ( f, range ( big_number, -1, -1 ) )
>
> I guess I'm looking for something more like:
>
> l = do ( lambda a: func ( a ), big_number, start )


Something like

def do(func, N, value):
    yield value
    for i in range(1,N):
        value = func(value)
        yield value

?
For more generality, make func a function of both value and i.
If you need a list, "l = list(do(f,N,x))", but if you do not, you can do 
"for item in do(f,N,x):" and skip making the list.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Assigning generator expressions to ctype arrays

2011-10-28 Thread Terry Reedy

On 10/28/2011 2:05 PM, Patrick Maupin wrote:


> On Oct 27, 10:23 pm, Terry Reedy wrote:
>
>> I do not think everyone else should suffer substantial increase in space
>> and run time to avoid surprising you.
>
> What substantial increase?

of time and space, as I said, for the temporary array that I think would
be needed and which I also described in the previous paragraph that you
clipped.

> There's already a check that winds up
> raising an exception.  Just make it empty an iterator instead.

It? I have no idea what you intend that to refer to.

>>> It violates the principle of least surprise
>>
>> for ctypes to do what is most efficient in 99.9% of uses?
>
> It doesn't work at all with an iterator, so it's most efficient 100%
> of the time now.  How do you know how many people would use iterators
> if it worked?

I doubt it would be very many because it is *impossible* to make it work
in the way that I think people would want it to.

>> It could, but at some cost. Remember, people use ctypes for efficiency,
>
> yes, you just made my argument for me.  Thank you.  It is incredibly
> inefficient to have to create a temp array.

But necessary to work with black box iterators. Now you are agreeing
with my argument.

>> so the temp array path would have to be conditional.
>
> I don't understand this at all.  Right now, it just throws up its
> hands and says "I don't work with iterators."

If ctype_array slice assignment were to be augmented to work with
iterators, that would, in my opinion (and see below), require use of
temporary arrays. Since slice assignment does not use temporary arrays
now (see below), that augmentation should be conditional on the source
type being a non-sequence iterator.

> Why would it be a problem to change this?


CPython comes with immutable fixed-length arrays (tuples) that do not 
allow slice assignment and mutable variable-length arrays (lists) that 
do. The definition is 'replace the indicated slice with a new slice 
built from all values from an iterable'. Point 1: This works for any 
properly functioning iterable that produces any finite number of items. 
Iterators are always exhausted.


Replace can be thought of as delete followed by add, but the
implementation is not that naive. Point 2: If anything goes wrong and an 
exception is raised, the list is unchanged. This means that there must 
be temporary internal storage of either old or new references. An 
example that uses an improperly functioning generator.


>>> a
[0, 1, 2, 3, 4, 5, 6, 7]
>>> def g():
...     yield None
...     raise ValueError
...
>>> a[3:6]=g()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    a[3:6]=g()
  File "<stdin>", line 3, in g
    raise ValueError
ValueError
>>> a
[0, 1, 2, 3, 4, 5, 6, 7]

A c_uint array is a new kind of beast: a fixed-length mutable array. So
it has to have a different definition of slice assignment than lists.
Thomas Heller, the ctypes author, apparently chose 'replacement by a
sequence with exactly the same number of items, else raise an
exception', though I do not know what the doc actually says.


An alternative definition would have been to replace as much of the 
slice as possible, from the beginning, while ignoring any items in 
excess of the slice length. This would work with any iterable. However, 
partial replacement of a slice would be a surprising innovation to most.
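(A sketch of that alternative definition; the helper name is invented:)

from itertools import islice

def fill_from(arr, iterable):
    # replace as much of arr as possible from the start; ignore excess items
    for i, v in enumerate(islice(iterable, len(arr))):
        arr[i] = v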


The current implementation assumes that the reported length of a 
sequence matches the valid indexes and dispenses with temporary storage. 
This is shown by the following:


from ctypes import c_uint
n = 20

class Liar:
    def __len__(self): return n
    def __getitem__(self, index):
        if index < 10:
            return 1
        else:
            raise ValueError()

x = (n * c_uint)()
print(list(x))
x[:] = range(n)
print(list(x))
try:
    x[:] = Liar()
except:
    pass
print(list(x))
>>>
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]

I consider such unintended partial replacement to be a glitch. An 
exception could be raised, but without adding temp storage, the array 
could not be restored. And making a change *and* raising an exception 
would be a different sort of glitch. (One possible with augmented 
assignment involving a mutable member of a tuple.) So I would leave this 
as undefined behavior for an input outside the proper domain of the 
function.


Anyway, as I said before, you are free to propose a specific change 
('work with iterators' is too vague) and provide a corresponding patch.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: How to mix-in __getattr__ after the fact?

2011-10-28 Thread Ethan Furman

dhyams wrote:

Python 2.7.2

I'm having trouble in a situation where I need to mix-in the
functionality of __getattr__ after the object has already been
created.  Here is a small sample script of the situation:

=snip

import types

class Cow(object):
    pass
    # this __getattr__ works as advertised.
    #def __getattr__(self,a):
    #    print "CAUGHT INCLASS: Attempting to get attribute ",a


def attrgetter(self,a):
    print "CAUGHT: Attempting to get attribute ",a

bessie = Cow()

bessie.__getattr__ = types.MethodType(attrgetter,bessie,Cow)

# here, I should see my printout "attempting to get attribute"
# but I get an AttributeError
print bessie.milk
==snip

If I call __getattr__ directly, as in bessie.__getattr__('foo'), it
works as it should obviously; so the method is bound and ready to be
called.  But Python does not seem to want to call __getattr__
appropriately if I mix it in after the object is already created.  Is
there a workaround, or am I doing something wrongly?

Thanks,


Python only looks up __xxx__ methods in new-style classes on the class 
itself, not on the instances.


So this works:

8<
class Cow(object):
    pass

def attrgetter(self, a):
    print "CAUGHT: Attempting to get attribute", a

bessie = Cow()

Cow.__getattr__ = attrgetter

print bessie.milk
8<

~Ethan~
--
http://mail.python.org/mailman/listinfo/python-list


Re: How to mix-in __getattr__ after the fact?

2011-10-28 Thread Jerry Hill
On Fri, Oct 28, 2011 at 1:34 PM, dhyams  wrote:
> If I call __getattr__ directly, as in bessie.__getattr__('foo'), it
> works as it should obviously; so the method is bound and ready to be
> called.  But Python does not seem to want to call __getattr__
> appropriately if I mix it in after the object is already created.  Is
> there a workaround, or am I doing something wrongly?

Python looks up special methods on the class, not the instance (see
http://docs.python.org/reference/datamodel.html#special-method-lookup-for-new-style-classes).

It seems like you ought to be able to delegate the special attribute
access from the class to the instance somehow, but I couldn't figure
out how to do it in a few minutes of fiddling at the interpreter.
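One way to do that delegation (a sketch; the per-instance handler name
_getattr is invented for illustration):

class Cow(object):
    def __getattr__(self, name):
        # go through __dict__ directly to avoid recursing into __getattr__
        handler = self.__dict__.get('_getattr')
        if handler is None:
            raise AttributeError(name)
        return handler(name)

def attrgetter(name):
    print "CAUGHT: Attempting to get attribute", name

bessie = Cow()
bessie._getattr = attrgetter   # mixed in after instance creation
bessie.milk                    # prints the CAUGHT message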

-- 
Jerry
-- 
http://mail.python.org/mailman/listinfo/python-list


Fast recursive generators?

2011-10-28 Thread Michael McGlothlin
I'm trying to generate a list of values where each value is dependent
on the previous value in the list and this bit of code needs to be run
repeatedly so I'd like it to be fast. It doesn't seem that
comprehensions will work, as each pass needs to take the result of the
previous pass as its argument. map() doesn't seem likely. filter() or
reduce() seem workable but not very clean. Is there a good way to do
this? About the best I can get is this:

l = [ func ( start ) ]
f = lambda a: func ( l[-1] ) or a
filter ( f, range ( big_number, -1, -1 ) )


I guess I'm looking for something more like:

l = do ( lambda a: func ( a ), big_number, start )


Thanks,
Michael McGlothlin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Assigning generator expressions to ctype arrays

2011-10-28 Thread Patrick Maupin
On Oct 27, 10:23 pm, Terry Reedy  wrote:


> I do not think everyone else should suffer substantial increase in space
> and run time to avoid surprising you.

What substantial increase?  There's already a check that winds up
raising an exception.  Just make it empty an iterator instead.

> > It violates the principle of least surprise
>
> for ctypes to do what is most efficient in 99.9% of uses?

It doesn't work at all with an iterator, so it's most efficient 100%
of the time now.  How do you know how many people would use iterators
if it worked?

>
> It could, but at some cost. Remember, people use ctypes for efficiency,

yes, you just made my argument for me.  Thank you.  It is incredibly
inefficient to have to create a temp array.

> so the temp array path would have to be conditional.

I don't understand this at all.  Right now, it just throws up its
hands and says "I don't work with iterators."  Why would it be a
problem to change this?
-- 
http://mail.python.org/mailman/listinfo/python-list


How to mix-in __getattr__ after the fact?

2011-10-28 Thread dhyams
Python 2.7.2

I'm having trouble in a situation where I need to mix-in the
functionality of __getattr__ after the object has already been
created.  Here is a small sample script of the situation:

=snip

import types

class Cow(object):
    pass
    # this __getattr__ works as advertised.
    #def __getattr__(self,a):
    #    print "CAUGHT INCLASS: Attempting to get attribute ",a


def attrgetter(self,a):
    print "CAUGHT: Attempting to get attribute ",a

bessie = Cow()

bessie.__getattr__ = types.MethodType(attrgetter,bessie,Cow)

# here, I should see my printout "attempting to get attribute"
# but I get an AttributeError
print bessie.milk
==snip

If I call __getattr__ directly, as in bessie.__getattr__('foo'), it
works as it should obviously; so the method is bound and ready to be
called.  But Python does not seem to want to call __getattr__
appropriately if I mix it in after the object is already created.  Is
there a workaround, or am I doing something wrongly?

Thanks,
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Forking simplejson

2011-10-28 Thread Nathan Rice
Just a random note, I actually set about the task of re-implementing a
json encoder which can be subclassed, is highly extensible, and uses
(mostly) sane coding techniques (those of you who've looked at
simplejson's code will know why this is a good thing).  So far
preliminary tests show the json only subclass of the main encoder
basically tied in performance with the python implementation of
simplejson.  The C version of simplejson does turn in a performance
about 12x faster, but that's apples to oranges.  The design of the
encoder would also make a XML serializer pretty straight forward to
implement as well (not that I care about XML, *blech*).

I'm torn between just moving on to some of my other coding tasks and
putting some time into this to make it pass the simplejson/std lib
json tests.  I really do think the standard lib json encoder is bad
and I would like to see an alternative in there but I'm hesitant to
get involved.

Nathan


On Thu, Oct 27, 2011 at 11:24 AM, Amirouche Boubekki
 wrote:
>
>
> 2011/10/27 Chris Rebert 
>>
>> On Wed, Oct 26, 2011 at 2:14 AM, Amirouche Boubekki
>>  wrote:
>> > Héllo,
>> >
>> > I would like to fork simplejson [1] and implement serialization rules
>> > based
>> > on protocols instead of types [2], plus special cases for protocol free
>> > objects, that breaks compatibility. The benefit will be a better API for
>> > json serialization of custom classes and in the case of iterables it will
>> > avoid calls like:
>> >
>>  simplejson.dumps(list(my_iterable))
>> >
>> > The serialization of custom objects is documented in the class instead
>> > of
>> > the ``default`` function of current simplejson implementation [3].
>> >
>> > The encoding algorithm works with a priority list that is summarized in
>> > the
>> > next table:
>> >
>> >     +---+---+
>> >     | Python protocol   | JSON          |
>> >     |  or special case  |               |
>> >     +===+===+
>> 
>> >     | (§) unicode       | see (§)       |
>> 
>> > (§) if the algorithm arrives here, call unicode (with proper encoding
>> > rule)
>> > on the object and use the result as json serialization
>>
>> I would prefer a TypeError in such cases, for the same reason
>> str.join() doesn't do an implicit str() on its operands:
>> - Explicit is better than implicit.
>> - (Likely) errors should never pass silently.
>> - In the face of ambiguity, refuse the temptation to guess.
>
> granted it's better.
>
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>
>
-- 
http://mail.python.org/mailman/listinfo/python-list


getting columns attributes in declarative style with sqlalchemy

2011-10-28 Thread Gabriele
Hi,

   I'm trying to write my first application using SQLAlchemy. I'm using
declarative style. I need to get the attributes of the columns of my
table. This is an example of my very simple model class:

class Country(base):
    __tablename__ = "bookings_countries"

    id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True)
    canceled = sqlalchemy.Column(sqlalchemy.Boolean, nullable=False,
                                 default=False)
    description = sqlalchemy.Column(Description)

    def __init__(self, canceled, description):
        self.canceled = canceled
        self.description = description

    def __repr__(self):
        return "<Country %s>" % (self.description)

I want to populate a wx grid using the name of the fields as labels of
the columns.

I found three different solutions to my problem, but none satisfied
me:

a) using column_descriptions in query object.

col = (model.Country.canceled, model.Country.description)
q = self.parent.session.query(*col).order_by(model.Country.description)
l = [column["name"] for column in q.column_descriptions]

in this way, l contains exactly what I need, but I don't like it because
I must execute a useless query only to get information about the
structure of my table. OK, it's not a big problem, but IMO it is not a
very good solution

b) reflecting the table
c) using inspector lib from sqlachemy.engine

I don't like these because I have to use the name of the table in the
database directly...

It sounds better and more logical to me to use my class Country... but I
can't find a simple way of doing that... maybe it's very simple, but...

Can someone help me?

Thanks

Gabriele
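(For what it's worth, the class-level introspection being asked for looks
something like this in SQLAlchemy's declarative system; a sketch, not
verified against the poster's version:)

labels = Country.__table__.columns.keys()   # column names, no query needed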




-- 
http://mail.python.org/mailman/listinfo/python-list


Re: __dict__ attribute for built-in types

2011-10-28 Thread Peter Pearson
On Fri, 28 Oct 2011 00:52:40 +0200, candide  wrote:
[snip]
> >>> hasattr(42, '__dict__')
> False
[snip]
>
> Let'have a try :
>
> >>> hasattr(43, '__dict__')
> False
> >>>
>
> so we have proved by induction that no integer instance has a 
> dictionnary attribute ;)

You left out an important step in this proof by induction.  Observe:

>>> n = 0
>>> hasattr(n, "__dict__")
False
>>> if hasattr(n, "__dict__") is False:
...   hasattr(n+1, "__dict__") is False
... 
True

There, now it's proven by induction.

-- 
To email me, substitute nowhere->spamcop, invalid->net.
-- 
http://mail.python.org/mailman/listinfo/python-list


ANN: python-ldap 2.4.4

2011-10-28 Thread Michael Ströder
Find a new release of python-ldap:

  http://pypi.python.org/pypi/python-ldap/2.4.4

python-ldap provides an object-oriented API to access LDAP directory
servers from Python programs. It mainly wraps the OpenLDAP 2.x libs for
that purpose. Additionally it contains modules for other LDAP-related
stuff (e.g. processing LDIF, LDAPURLs and LDAPv3 schema).

Project's web site:

  http://www.python-ldap.org/

Ciao, Michael.


Released 2.4.4 2011-10-26

Changes since 2.4.3:

Modules/
* Format intermediate messages as 3-tuples instead of
  4-tuples to match the format of other response messages.
  (thanks to Chris Mikkelson)
* Fixes for memory leaks (thanks to Chris Mikkelson)

Lib/
* New experimental(!) sub-module ldap.syncrepl implementing syncrepl
  consumer (see RFC 4533, thanks to Chris Mikkelson)

Doc/
* Cleaned up rst files
* Added missing classes
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is Python in the browser a dead-end ?

2011-10-28 Thread Billy Earney
I believe that Python may have missed an opportunity to get in early and
take over a large market share from JavaScript.  But that doesn't mean
Python is dead in the browser; it just means it will have more
competition if it wants to replace JavaScript for Rich Internet
Applications.

There's also Emscripten, which takes C code and compiles it into
JavaScript.  The C Python interpreter is one of its primary examples, as
you can see here:

http://syntensity.com/static/python.html

I've also contributed some code to Skulpt so that a person can do
something like

<script type="text/python">
  put python code here.
</script>

Hopefully the code I submitted will be of use someday! :)

On Fri, Oct 28, 2011 at 7:07 AM, Amirouche Boubekki <
amirouche.boube...@gmail.com> wrote:

> Héllo,
>
> There was a thread recently about the missed opportunity for Python to be a
> language that could replace Javascript in the browser.
>
> There are several attempts at doing something in this spirit; here are the
> ones I'm aware of:
>
> - pyjamas aka. pyjs it is to Python what GWT is to Java : http://pyjs.org/
> - py2js which compiles Python to Javascript without high level widget like
> pyjamas does https://github.com/qsnake/py2js
> - skulpt unlike py2js and pyjs there is no compilation phase
> http://www.skulpt.org/
>
> And SubScript [1], of which I am the creator, a Python-like language [2],
> and Sissi [3], which aims at being the stdlib of SubScript; currently it's
> implemented with py2js. It relies heavily on Google Closure tools [4]. I'm
> planning to add streamer [5] support if I can wrap my head around it.
>
> I'm done with self advertisement.
>
> Before starting SubScript I gave it some thought, and I think that it can
> bring value to web development.
>
> Here are articles that back up my vision:
>
> - Javascript gurus claim that Javascript is indeed the "assembly" of the
> web [6], which means that Javascript is a viable target for a compiler
>
> - Similar projects exist with CoffeeScript and ClojureScript, which seem
> to have the lead and are gaining traction in the web development
> community. Dart is another example.
>
> - The thing that makes Javascript development difficult is that it's hard
> to debug, but map JS [7] is making its way into Firefox, which will help
> greatly
>
> Why do people think that these projects are doomed?
>
> I ask this question to understand other points of view before committing
> myself to finishing SubScript & Sissi; I don't want to code something that
> will be useless.
>
> Thanks in advance,
>
> Amirouche
>
> [1] https://bitbucket.org/abki/subscript
> [2] currently SubScript code is parsed by Python's parsing library; the
> exact same syntax should be supported
> [3] https://bitbucket.org/abki/sissi/overview
> [4] http://code.google.com/intl/fr/closure/
> [5] https://github.com/Gozala/streamer
> [6]
> http://www.hanselman.com/blog/JavaScriptIsAssemblyLanguageForTheWebSematicMarkupIsDeadCleanVsMachinecodedHTML.aspx
> [7] https://bugzilla.mozilla.org/show_bug.cgi?id=618650
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Is Python in the browser a dead-end ?

2011-10-28 Thread Amirouche Boubekki
Héllo,

There was a thread recently about the missed opportunity for Python to be a
language that could replace Javascript in the browser.

There are several attempts at doing something in this spirit; here are
the ones I'm aware of:

- pyjamas aka. pyjs it is to Python what GWT is to Java : http://pyjs.org/
- py2js which compiles Python to Javascript without high level widget like
pyjamas does https://github.com/qsnake/py2js
- skulpt unlike py2js and pyjs there is no compilation phase
http://www.skulpt.org/

And SubScript [1], of which I am the creator, a Python-like language [2],
and Sissi [3], which aims at being the stdlib of SubScript; currently it's
implemented with py2js. It relies heavily on Google Closure tools [4]. I'm
planning to add streamer [5] support if I can wrap my head around it.

I'm done with self advertisement.

Before starting SubScript I gave it some thought, and I think that it can
bring value to web development.

Here are articles that back up my vision:

- Javascript gurus claim that Javascript is indeed the "assembly" of the
web [6], which means that Javascript is a viable target for a compiler

- Similar projects exist with CoffeeScript and ClojureScript, which seem
to have the lead and are gaining traction in the web development
community. Dart is another example.

- The thing that makes Javascript development difficult is that it's hard to
debug, but map JS [7] is making its way into Firefox, which will help greatly


Why do people think that these projects are doomed?

I ask this question to understand other points of view before committing
myself to finishing SubScript & Sissi; I don't want to code something
that will be useless.

Thanks in advance,

Amirouche

[1] https://bitbucket.org/abki/subscript
[2] currently SubScript code is parsed by Python's parsing library; the
exact same syntax should be supported
[3] https://bitbucket.org/abki/sissi/overview
[4] http://code.google.com/intl/fr/closure/
[5] https://github.com/Gozala/streamer
[6]
http://www.hanselman.com/blog/JavaScriptIsAssemblyLanguageForTheWebSematicMarkupIsDeadCleanVsMachinecodedHTML.aspx
[7] https://bugzilla.mozilla.org/show_bug.cgi?id=618650
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: __dict__ attribute for built-in types

2011-10-28 Thread Christian Heimes
On 28.10.2011 10:01, Steven D'Aprano wrote:
> >>> hasattr(int, '__dict__')  # Does the int class/type have a __dict__?
> True
> >>> hasattr(42, '__dict__')  # Does the int instance have a __dict__?
> False

Also, __dict__ doesn't have to be an instance of dict. Builtin types
usually have a dictproxy instance as their __dict__.

>>> type(int.__dict__)
<type 'dictproxy'>
>>> int.__dict__["egg"] = "spam"
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'dictproxy' object does not support item assignment

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Entre local et global

2011-10-28 Thread Laurent Claessens

Whoops. This was aimed at the French-speaking Python usenet group. Sorry.

Laurent

On 28/10/2011 11:29, Laurent wrote:

On 28/10/2011 10:43, ll.snark wrote:

 On 27 Oct, 17:06, Laurent Claessens wrote:

  > So I'd like to be able to indicate in fonca that the variable lst is
  > the one defined in fonc1.
  > I don't want a variable local to fonca, nor a variable global
  > to my whole file (see example below)

  > Any ideas?


[snip]
http://mail.python.org/mailman/listinfo/python-list


Re: __dict__ attribute for built-in types

2011-10-28 Thread candide

On 28/10/2011 11:08, Hrvoje Niksic wrote:

> longer be allowed for the interpreter to transparently cache them.  The
> same goes for integers and other immutable built-in objects.

On the other hand, immutability and optimization don't explain the whole
thing, because you can't do something like [].bar = 42.

> If you really need to attach state to strings, subclass them as Steven
> explained.  All code that accepts strings (including all built-ins) will
> work just fine, transparent caching will not happen, and attributes are
> writable.

OK, thanks, I'll consider it seriously. Actually I have a directed graph
whose nodes are given as strings, but it's oversimplistic to identify
node and string (some nodes require specific methods).

--
http://mail.python.org/mailman/listinfo/python-list


Re: __dict__ attribute for built-in types

2011-10-28 Thread candide

On 28/10/2011 05:02, Patrick Maupin wrote:

> You can easily do that by subclassing a string:
>
> class AnnotatedStr(str):
>     pass
>
> x = AnnotatedStr('Node1')
> x.title = 'Title for node 1'

More or less what I did. But it requires transporting the string graph
structure to the AnnotatedStr one.


--
http://mail.python.org/mailman/listinfo/python-list


Re: __dict__ attribute for built-in types

2011-10-28 Thread candide

On 28/10/2011 10:01, Steven D'Aprano wrote:

> didn't think of it. This is hardly a surprise. Wanting to add arbitrary
> attributes to built-ins is not exactly an everyday occurrence.

It depends. Experienced programmers don't even think of it, but less
advanced programmers may consider it. It is not uncommon to use a Python
class like a C structure, for instance:

class C: pass

C.member1 = foo
C.member2 = bar

Why not with a built-in type instead of a custom class?

> the natural logarithm of a googol (10**100). But it's a safe bet that
> nothing so arbitrary will happen.

Betting when programming? How curious! ;)

> Also, keep in mind the difference between a *class* __dict__ and an
> *instance* __dict__.

You mean this distinction:

>>> hasattr('', '__dict__')
False
>>> hasattr(''.__class__, '__dict__')
True

?
--
http://mail.python.org/mailman/listinfo/python-list


Re: Entre local et global

2011-10-28 Thread Laurent

On 28/10/2011 10:43, ll.snark wrote:

On 27 Oct, 17:06, Laurent Claessens wrote:

 > So I'd like to be able to indicate in fonca that the variable lst is
 > the one defined in fonc1.
 > I don't want a variable local to fonca, nor a variable global
 > to my whole file (see example below)

 > Any ideas?

 I'm not quite sure this answers the question, but one approach would
 be to make fonca a class with a __call__ method and a class
 variable:

 class fonc2(object):
     lst = []
     def __call__(self, v):
         if len(fonc2.lst) < 3:
             fonc2.lst = fonc2.lst + [v]
         print fonc2.lst




Indeed, that partly answers the question.
I'll have to see whether, doing it that way, I can solve my original
problem with decorators. But I suppose that between a function and a
class that has a __call__ method, there isn't that much difference...



I'd say there is: a huge difference, actually. In a class you can put
far more things ;)


class Exemple(object):
    def __init__(self, a):
        self.a = a
    def __call__(self, x):
        return self.a + x

f = Exemple(2)
g = Exemple(10)

f(1) # 3
g(1) # 11
f.a = 6
f(1) # 7

Doing that with a function would seem hard to me, short of doing what
you were doing: a function that contains a function and returns it.
(But in my opinion that's less readable.)



Thanks for the answer, in any case. So I suppose there is no keyword,
a bit like global, that designates a smaller scope?


I don't know.



I would probably have seen it in the docs or in books if that had been
the case, but I find it odd.


A priori, it doesn't seem that odd to me. I would use a function only
for something with an input and an output that depends solely on the
input. For something whose behaviour depends on context (which is the
case for what you're asking), I would use a class, and put the elements
of the context in its attributes.
Well, I say that, but I'm not strong enough to claim to be certain
about what is "odd" or not :)


Have a good day
Laurent
--
http://mail.python.org/mailman/listinfo/python-list


Re: __dict__ attribute for built-in types

2011-10-28 Thread Hrvoje Niksic
candide  writes:

> Le 28/10/2011 00:57, Hrvoje Niksic a écrit :
>
>> was used at class definition time to suppress it.  Built-in and
>> extension types can choose whether to implement __dict__.
>>
>
> Is it possible in the CPython implementation to write something like this :
>
> "foo".bar = 42
>
> without raising an attribute error ?

No, and for good reason.  Strings are immutable, so that you needn't
care which particular instance of "foo" you're looking at, they're all
equivalent.  The interpreter uses that fact to cache instances of short
strings such as Python identifiers, so that most places that look at a
string like "foo" are in fact dealing with the same instance.  If one
could change an attribute of a particular instance of "foo", it would no
longer be allowed for the interpreter to transparently cache them.  The
same goes for integers and other immutable built-in objects.

If you really need to attach state to strings, subclass them as Steven
explained.  All code that accepts strings (including all built-ins) will
work just fine, transparent caching will not happen, and attributes are
writable.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: spawnl issues with Win 7 access rights

2011-10-28 Thread Tim Golden

On 27/10/2011 20:53, Terry Reedy wrote:

> On 10/27/2011 6:36 AM, Tim Golden wrote:
>> On 27/10/2011 11:27, Propad wrote:
>>> the suggestion to add the optional second parameter fixed the problem,
>>> spawnl now works on the Win 7 computer I'm responsible for (with
>>> Python 2.2). So the suggested cause seems to be right.
>>
>> FWIW, although it's not obvious, the args parameter to spawnl
>> is intended to become the sys.argv (in Python terms) of the
>> newly-spawned process. Which is why the first element is expected
>> to be the name of the process. It took me some time to realise
>> this myself :)
>>
>> Anyway, glad we could be of help.
>
> Can we make this fix automatic for Win7 to fix #8036?



It's tempting, but I think not. In principle, the caller can
pass any value as the first arg of spawn: it'll simply end up
as sys.argv[0] in Python terms. If spawnl were the way of the
future, I'd be inclined to argue for the change. As it it, though,
I'd simply apply the patch and, possibly, add a line to the docs
indicating that the args must be non-empty.

(I started to import the patch yesterday but something got in the
way; I'll see if I can get it done today).
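(To illustrate the quoted point about spawnl's args, a sketch with
hypothetical paths:)

import os

# 'python' becomes sys.argv[0] in the child; the executable is the 2nd arg
os.spawnl(os.P_WAIT, r'C:\Python27\python.exe', 'python', '-c', 'print 42')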

TJG
--
http://mail.python.org/mailman/listinfo/python-list


Re: Assigning generator expressions to ctype arrays

2011-10-28 Thread Terry Reedy

On 10/28/2011 3:21 AM, Steven D'Aprano wrote:


> If the slice has too few elements, you've just blown away the entire
> iterator for no good reason.
>
> If the slice is the right length, but the iterator doesn't next raise
> StopIteration, you've just thrown away one perfectly good value. Hope it
> wasn't something important.


You have also over-written values that should be set back to what they 
were, before the exception is raised, which is why I said the test needs 
to be done with a temporary array.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: __dict__ attribute for built-in types

2011-10-28 Thread Steven D'Aprano
On Fri, 28 Oct 2011 00:52:40 +0200, candide wrote:

> Le 28/10/2011 00:19, Steven D'Aprano a écrit :
>>
>> What, you think it goes against the laws of physics that nobody thought
>> to mention it in the docs?
> 
> 
> No but I'm expecting from Python documentation to mention the laws of
> Python ...

You seem to have missed my point. You said "I can't imagine" that the 
Python docs fail to mention that built-ins don't allow the addition of 
new attributes. I can, easily. The people writing the documentation are 
only human, and if they failed to mention it, oh well, perhaps they 
didn't think of it. This is hardly a surprise. Wanting to add arbitrary 
attributes to built-ins is not exactly an everyday occurrence.


>>> But beside this, how to recognise classes whose object doesn't have a
>>> __dict__ attribute ?
>>
>> The same way as you would test for any other attribute.
>>
>> >>> hasattr(42, '__dict__')
>> False
> 
> OK but I'm talking about classes, not instances  : 42 has no __dict__
> attribute but, may be, 43 _has_ such attribute, who knows in advance ?
> ;)

True, it is theoretically possible that (say) only odd numbers get a 
__dict__, or primes, or the smallest multiple of seventeen larger than 
the natural logarithm of a googol (10**100). But it's a safe bet that 
nothing so arbitrary will happen.

Dunder attributes ("Double UNDERscore") like __dict__ are reserved for 
use by Python, and __dict__ has known semantics. You can safely assume 
that either *all* instances of a type will have a __dict__, or *no* 
instances will have one. If some class violates that, oh well, your code 
can't be expected to support every badly-designed stupid class in the 
world.

Also, keep in mind the difference between a *class* __dict__ and an 
*instance* __dict__. 

>>> hasattr(int, '__dict__')  # Does the int class/type have a __dict__?
True
>>> hasattr(42, '__dict__')  # Does the int instance have a __dict__?
False



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Dynamically creating properties?

2011-10-28 Thread Steven D'Aprano
On Thu, 27 Oct 2011 16:00:57 -0700, DevPlayer wrote:

> def isvalid_named_reference( astring ):
> # "varible name" is really a named_reference 
> # import string   # would be cleaner

I don't understand the comment about "variable name".

> valid_first_char =
> '_abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
> valid_rest =
> '_0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'

This would be better:

import string
valid_first_char = '_' + string.ascii_letters
valid_rest = string.digits + valid_first_char



> # I think it's ok here for the rare type-check 
> # as unicode named-references are not allowed 
> if type(astring) is not str: return False

In Python 3 they are:

http://www.python.org/dev/peps/pep-3131/


> if len(astring) == 0: return False
> if astring[0] not in valid_first_char: return False
> for c in astring[1:]:
> if c not in valid_rest: return False
> 
> # Python keywords not allowed as named references (variable names)
> for astr in ['and', 'assert', 'break', 'class', 'continue',
> 'def', 'del', 'elif', 'else', 'except', 'exec',
> 'finally', 'for', 'from', 'global', 'if', 'import',
> 'in', 'is', 'lambda', 'not', 'or', 'pass', 'print',
> 'raise', 'return', 'try', 'while', 'yield',]:
> if astring == astr: return False

You missed 'as' and 'with'. And 'nonlocal' in Python 3. Possibly others.

Try this instead:

from keyword import iskeyword
if iskeyword(astring): return False




-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Assigning generator expressions to ctype arrays

2011-10-28 Thread Steven D'Aprano
On Thu, 27 Oct 2011 17:09:34 -0700, Patrick Maupin wrote:

> On Oct 27, 5:31 pm, Steven D'Aprano wrote:
>> From the outside, you can't tell how big a generator expression is. It
>> has no length:
> 
> I understand that.
> 
>> Since the array object has no way of telling whether the generator will
>> have the correct size, it refuses to guess.
> 
> It doesn't have to guess.  It can assume that I, the programmer, know
> what the heck I am doing, and then validate that assumption -- trust,
> but verify.  It merely needs to fill the slice and then ask for one more
> and check that StopIteration is raised.

Simple, easy, and wrong.

It needs to fill in the slice, check that the slice has exactly the right 
number of elements (it may have fewer), and then check that the iterator 
is now empty.

If the slice has too few elements, you've just blown away the entire 
iterator for no good reason.

If the slice is the right length, but the iterator doesn't next raise 
StopIteration, you've just thrown away one perfectly good value. Hope it 
wasn't something important.


>> I would argue that it should raise a TypeError with a less misleading
>> error message, rather than a ValueError, so "bug".
> 
> And I would argue that it should simply work, unless someone can present
> a more compelling reason why not.

I think that "the iterator protocol as it exists doesn't allow it to work 
the way you want" is a pretty compelling reason.


>> The simple solution is to use a list comp instead of a generator
>> expression.
> 
> I know how to work around the issue.  I'm not sure I should have to. It
> violates the principle of least surprise for the ctypes array to not be
> able to interoperate with the iterator protocol in this fashion.

Perhaps you're too easily surprised by the wrong things.


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Need Windows user / developer to help with Pynguin

2011-10-28 Thread Lee Harr

>> I started a wiki page 
>> here: 
>>  
>> http://code.google.com/p/pynguin/wiki/InstallingPynguinOnWindows
>>  
>> but I can't even test if it actually 
>> works  
>>  

> Apparently, the US is a forbidden country and Google won't let 
> me  
> download. Otherwise, I'd test it for you. 

I don't understand. Where are you located?

What message do you get when trying to download?

  
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unicode literals and byte string interpretation.

2011-10-28 Thread Steven D'Aprano
On Thu, 27 Oct 2011 20:05:13 -0700, Fletcher Johnson wrote:

> If I create a new Unicode object u'\x82\xb1\x82\xea\x82\xcd' how does
> this creation process interpret the bytes in the byte string? 

It doesn't, because there is no byte-string. You have created a Unicode 
object from a literal string of unicode characters, not bytes. Those 
characters are:

Dec Hex  Char
130 0x82 ‚
177 0xb1 ±
130 0x82 ‚
234 0xea ê
130 0x82 ‚
205 0xcd Í

Don't be fooled that all of the characters happen to be in the range 
0-255, that is irrelevant.


> Does it
> assume the string represents a utf-16 encoding, at utf-8 encoding,
> etc...?

None of the above. It assumes nothing. It takes a string of characters, 
end of story.

> For reference the string is これは in the 'shift-jis' encoding.

No it is not. The way to get a unicode literal with those characters is 
to use a unicode-aware editor or terminal:

>>> s = u'これは'
>>> for c in s:
... print ord(c), hex(ord(c)), c
... 
12371 0x3053 こ
12428 0x308c れ
12399 0x306f は


You are confusing characters with bytes. I believe that what you are 
thinking of is the following: you start with a byte string, and then 
decode it into unicode:

>>> bytes = '\x82\xb1\x82\xea\x82\xcd'  # not u'...'
>>> text = bytes.decode('shift-jis')
>>> print text
これは


If you get the encoding wrong, you will get the wrong characters:

>>> print bytes.decode('utf-16')
놂춂


If you start with the Unicode characters, you can encode it into various 
byte strings:

>>> s = u'これは'
>>> s.encode('shift-jis')
'\x82\xb1\x82\xea\x82\xcd'
>>> s.encode('utf-8')
'\xe3\x81\x93\xe3\x82\x8c\xe3\x81\xaf'






-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list