Interpreting string containing \u000a

2008-06-18 Thread Francis Girard
Hi,

I have an ISO-8859-1 file containing things like
"Hello\u000d\u000aWorld", i.e. the character '\', followed by the
character 'u' and then '0', etc.

What is the easiest way to automatically translate these codes into
unicode characters ?

Thank you

Francis Girard
--
http://mail.python.org/mailman/listinfo/python-list


Re: Interpreting string containing \u000a

2008-06-18 Thread Francis Girard
Thank you very much ! I didn't know about this 'unicode-escape'. That's
great!
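
A minimal Python 2 sketch for a whole file along the same lines (the filename
is made up; bytes that are not part of an escape are treated as Latin-1,
which matches the ISO-8859-1 source):

data = open("escaped.txt", "rb").read()
text = data.decode("unicode-escape")   ## interprets the literal \uXXXX codes
print repr(text)                       ## e.g. u'Hello\r\nWorld'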

Francis

2008/6/18 Duncan Booth [EMAIL PROTECTED]:

 Francis Girard [EMAIL PROTECTED] wrote:

  I have an ISO-8859-1 file containing things like
  "Hello\u000d\u000aWorld", i.e. the character '\', followed by the
  character 'u' and then '0', etc.
 
  What is the easiest way to automatically translate these codes into
  unicode characters ?
 

 >>> s = r"Hello\u000d\u000aWorld"
 >>> print s
 Hello\u000d\u000aWorld
 >>> s.decode('iso-8859-1').decode('unicode-escape')
 u'Hello\r\nWorld'
 

 --
 Duncan Booth http://kupuguy.blogspot.com
 --
 http://mail.python.org/mailman/listinfo/python-list

--
http://mail.python.org/mailman/listinfo/python-list

Re: Looking for lots of words in lots of files

2008-06-18 Thread Francis Girard
Hi,

Use a suffix tree. First build a suffix tree of your thousand files, then use
it. This is a classical problem for that kind of structure.

Just search for "suffix tree" or "suffix tree python" on Google to find a
definition and an implementation.

(Also Jon Bentley's Programming Pearls is a great book to read)
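
For the record, a baseline sketch of the task itself (this is not the suffix
tree approach recommended above; the names and the word-splitting regex are
assumptions):

import re

def scan(filename, words, needed=10):
  ## Stop scanning a file once `needed` of the target words have been seen.
  found = set()
  for line in open(filename):
    found.update(w for w in re.findall(r"\w+", line) if w in words)
    if len(found) >= needed:
      break
  return found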

Regards

Francis Girard

2008/6/18 brad [EMAIL PROTECTED]:

 Just wondering if anyone has ever solved this efficiently... not looking
 for specific solutions tho... just ideas.

 I have one thousand words and one thousand files. I need to read the files
 to see if some of the words are in the files. I can stop reading a file once
 I find 10 of the words in it. It's easy for me to do this with a few dozen
 words, but a thousand words is too large for an RE and too inefficient to
 loop, etc. Any suggestions?

 Thanks
 --
 http://mail.python.org/mailman/listinfo/python-list

--
http://mail.python.org/mailman/listinfo/python-list

Re: Software for Poets (Was: Re: Text-to-speech)

2005-03-21 Thread Francis Girard
This is about poetry. I think the next reply should be done privately unless 
someone else is interested in it.

Hi,

On Sunday 20 March 2005 23:04, Paul Rubin wrote:
 Francis Girard [EMAIL PROTECTED] writes:
  4- Propose a synonym that will fit in a verse, i.e. with the right number
  of syllables
 
  5- Suggest a missing word or expression in a verse by applying the
  Shannon text generation principle
  ...
  First, do you think it may be a useful tool ?
  What other features do you think could make it useful for a poet ?

 I'm skeptical of this notion.  You can think of writing a poem as
 building up a tree structure where there's a root idea you're trying
 to express, branches in the choices of images/comparisons/etc. that
 you use to express the idea, and leaves that are the actual words in
 the poem.  Rhyme means that a left-to-right traversal of the leaves
 (i.e. reading the words) results in a pattern with a certain
 structure.  You're proposing a tool that helps explore the search
 space in the nodes near the bottom level of the tree, to find words
 with the right characteristics.

 I think the constraint of rhyme and meter is best served by widening
 the search space at the upper levels of the tree and not the lower
 levels.  That is, if you've got an image and you don't find rhyming
 words for it with easy natural diction, a computerized search for more
 and more obscure words to express that image in rhyme is the last
 thing you want.  

Absolutely right.

 Rather, you want to discard the image and choose a 
 different one to express the idea.  That means seeking more images by 
 mentally revisiting and staying inside the emotion at the center of
 poem, a much more difficult thing to do than solving the mere math
 problem of finding a string of rhyming words with similar semantics to
 a non-rhyming sequence that you already have.  

Again, right. Your description comes very close to my own experience of
writing poems, and I have never read anything as clear as what I'm reading
here. Poetry practice is described most of the time in poetic terms, just as
religion is described in religious terms. And one has to immerse himself
in these words to, little by little, gain some understanding of it. Your
description proves that it is possible to describe it otherwise. I am truly
amazed.

The question is : how do you discard the image to choose another one ? How
does this process take place ? I observed myself while writing a poem (I,
myself, may not be a good example since I am certainly not a good poet) and
discovered that it is while playing with the words, trying to find the right
one, with the right number of syllables, that I discover a new image and
re-write the whole verse, even re-arranging the whole strophe or poem. My goal
with the last two tasks (4 and 5) was to help the poor guy struggling with the
words, not to produce the correct final verse, but only to help him in one of
the phases of his writing.

 But when you find the
 right image, the words and rhythm fall into place without additional
 effort.


I don't believe much in this. Poetry and writing in general is work, work, 
work and more work.

 This is why writing good poems is hard, and is also why the results of
 doing it well is powerful.  I don't think it can be programmed into a
 computer using any current notions.

Again right. My goal, of course, is not to replace the poet with a computer,
only to help him with some of his mechanical tasks.

Regards

Francis Girard

--
http://mail.python.org/mailman/listinfo/python-list


Re: Pre-PEP: Dictionary accumulator methods

2005-03-20 Thread Francis Girard
Hi,

I really do not like it, so -1 for me. Your two methods are very specialized
whereas the dict type is very generic. Usually, when I see something like
this in code, I can smell a patch to overcome some shortcoming in a previous
design, thereby saving the cost of a redesign. Simply said, that's bad
programming.

After such a patch, which provides a solution for only two of the more common
use-cases, you are nonetheless stuck with the old solution for all the other
use-cases (what if the value type is another dictionary or some user-made
class ?).

Here's an alternate solution which I think answers all of the problems you
mentioned while remaining generic.

=== BEGIN SNAP

import operator
from types import ListType
## Assume partial is available ; it is coming from :
## http://www.python.org/peps/pep-0309.html

def update_or_another_great_name(self, key, createFunc, updtFunc):
  try:
    self[key] = updtFunc(self[key])
    ## This is slow in plain Python since the key has to be searched twice.
    ## But the new built-in method would only have to update the value the
    ## first time the key is found. Therefore speed should be ok.
    return True
  except KeyError:
    self[key] = createFunc()
    return False

## Now your two specialized methods can easily be written as :

## A built-in should be provided for this (if not already proposed) :
def identical(val):
  return val

def count(self, key, qty=1):
  ## identical is bound to qty so that createFunc() returns qty on a miss.
  self.update_or_another_great_name(key,
      partial(identical, qty),
      partial(operator.add, qty))
  ## Using only built-in functions (assuming identical) as arguments makes it
  ## ok for speed (I guess).

def appendlist(self, key, *values):
  self.update_or_another_great_name(key,
      partial(list, values),
      partial(ListType.extend, X=values))
  ## The first partial usage here is an abuse, just to make sure that the
  ## list is not actually constructed before it is needed. It should work.
  ## The second usage is more uncertain, as we need to bind the arguments
  ## from the right. Therefore I have to use the name of the parameter, and I
  ## am not sure there is one. As this list is very prolific, someone might
  ## have an idea on how to improve this.
  
=== END SNAP
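
A self-contained usage sketch of the generic method above (the dict subclass
and its name are hypothetical):

class accdict(dict):
  def update_or_another_great_name(self, key, createFunc, updtFunc):
    try:
      self[key] = updtFunc(self[key])
      return True
    except KeyError:
      self[key] = createFunc()
      return False

d = accdict()
d.update_or_another_great_name("spam", lambda: 1, lambda n: n + 1)  ## d["spam"] == 1
d.update_or_another_great_name("spam", lambda: 1, lambda n: n + 1)  ## d["spam"] == 2
d.update_or_another_great_name("eggs", list, lambda l: l + ["x"])   ## d["eggs"] == []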

By using only built-in constructs, this should be fast enough. Otherwise,
optimizing these built-ins is a much cleaner and saner way of thinking than
messing up the API with ad-hoc propositions.

Reviewing the problems you mention :

 The readability issues with the existing constructs are:

 * They are awkward to teach, create, read, and review.

The method update_or_another_great_name is easy to understand, I think. But it
might not always be easy to use it efficiently with built-ins. That is always
the case, though. Recipes can be added to show how to use the method
efficiently.

 * Their wording tends to hide the real meaning (accumulation).

Solved.

 * The meaning of setdefault()'s method name is not self-evident.

Solved.


 The performance issues with the existing constructs are:

 * They translate into many opcodes which slows them considerably.

I really don't know what will be the outcome of the solution I propose. I 
certainly do not know anything about how my Python code translates into 
opcodes.

 * The get() idiom requires two dictionary lookups of the same key.

Solved

 * The setdefault() idiom instantiates a new, empty list prior to every call.

Solved

 * That new list is often not needed and is immediately discarded.

Solved

 * The setdefault() idiom requires an attribute lookup for extend/append.

Solved

 * The setdefault() idiom makes two function calls.

Solved

And perhaps, what you say here is also true for your two special use-cases :

 For other
 uses, plain Python code suffices in terms of speed, clarity, and avoiding
 unnecessary instantiation of empty containers:

  if key not in d:
      d[key] = {subkey:value}
  else:
      d[key][subkey] = value


Much better than adding special cases to a generic class. Special cases always
multiply, and if we open the door ...

Regards,

Francis Girard


On Saturday 19 March 2005 02:24, Raymond Hettinger wrote:
 I would like to get everyone's thoughts on two new dictionary methods:

 def count(self, key, qty=1):
 try:
 self[key] += qty
 except KeyError:
 self[key] = qty

 def appendlist(self, key, *values):
 try:
 self[key].extend(values)
 except KeyError:
 self[key] = list(values)

 The rationale is to replace the awkward and slow existing idioms for
 dictionary based accumulation:

 d[key] = d.get(key, 0) + qty
 d.setdefault(key, []).extend(values)

 In simplest form, those two statements would now be coded more readably as:

d.count(key)
d.appendlist(key, value)

 In their multi-value forms, they would now be coded as:

   d.count(key, qty)
   d.appendlist(key, *values)

 The error

Software for Poets (Was: Re: Text-to-speech)

2005-03-20 Thread Francis Girard
Hello M. Hartman,

It's a very big opportunity for me to find someone who both is a poet and
knows something about programming.

First, please excuse my bad English ; I'm a French Canadian.

I am dreaming of writing a piece of software to help French poets write
strict, rigorous classical poetry. Since classical poetry is somewhat
mathematical, a lot of tasks can be automated :

1- Counting the number of syllables ("pieds" in French) in a verse

2- Checking the rhymes ; determining the strength of a rhyme

3- Checking compliance of a poem to a fixed pre-determined classical form (in
French, we have distique, tercet, quatrain, quintain, sixain, huitain,
dizain, triolet, vilanelle, rondeau, rondel, ballade, chant royal, sonnet,
etc.)

4- Proposing a synonym that will fit in a verse, i.e. with the right number
of syllables

5- Suggesting a missing word or expression in a verse by applying the Shannon
text generation principle

First, do you think it may be a useful tool ?
What other features do you think could make it useful for a poet ?

The first task, cutting sentences into syllables (phonetically of course, not
typographically), is already done. It has been difficult to get it right and
to make it guess correctly with a very, very high percentage.
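
A deliberately naive sketch of that first task, counting syllables as vowel
groups (the real phonetic cutting is much harder, as said above; the vowel
set below is an assumption):

import re

def rough_syllable_count(word):
  ## Count maximal runs of (French) vowels as a crude syllable estimate.
  vowels = u"[aeiouy\xe0\xe2\xe9\xe8\xea\xeb\xee\xef\xf4\xfb\xf9]+"
  return len(re.findall(vowels, word.lower()))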

I can very well imagine that the next task is even more difficult. I need to
translate text into phonemes. Do you know some software that does this ? I
guess that the voice synthesizers which turn written text into spoken text
must first translate the text into phonemes. Right ? Do you know if there is
some way I can re-use sub-modules from these projects to translate text into
phonemes ?

Regards,

Francis Girard

On Sunday 20 March 2005 04:40, Charles Hartman wrote:
 Does anyone know of a cross-platform (OSX and Windows at least) library
 for text-to-speech? I know  there's an OSX API, and probably also for
 Windows. I know PyTTS exists, but it seems to talk only to the Windows
 engine. I'd like to write a single Python module to handle this on both
 platforms, but I guess I'm asking too much -- it's too hardware
 dependent, I suppose. Any hints?

 Charles Hartman
 Professor of English, Poet in Residence
 http://cherry.conncoll.edu/cohar
 http://villex.blogspot.com

--
http://mail.python.org/mailman/listinfo/python-list


Re: Unicode BOM marks

2005-03-08 Thread Francis Girard
Hi,

 Well, no. For example, Python source code is not typically concatenated,
 nor is source code in any other language. 

We did it with C++ files in order to have only one compilation unit, to
accelerate compilation time over the network. Also, all the languages with
some include directive will have to take care of it. I guess a Unicode-aware
C preprocessor already does.

 As for the super-cat: there is actually no problem with putting U+FFFE
 in the middle of some document - applications are supposed to filter it
 out. The precise processing instructions in the Unicode standard vary
 from Unicode version to Unicode version, but essentially, you are
 supposed to ignore the BOM if you see it.

Ok. I'm re-assured.

 A Unicode string is a sequence of integers. The numbers are typically
 represented as base-2, but the details depend on the C compiler.
 It is specifically *not* UTF-16, big or little endian (i.e. a single
 number is *not* a sequence of bytes). It may be UCS-2 or UCS-4,
 depending on a compile-time choice (which can be determined by looking
 at sys.maxunicode, which in turn can be either 65535 or 1114111).

 The programming interface to the individual characters is formed by
 the unichr and ord builtin functions, which expect and return integers
 between 0 and sys.maxunicode.

Ok. I guess that Python gives you the flexibility of configuring (when
compiling Python) the internal representation of unicode strings as a fixed
2 or 4 bytes per character (UCS).
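
A quick check of that compile-time choice (Python 2):

import sys
print sys.maxunicode   ## 65535 on a UCS-2 build, 1114111 on a UCS-4 build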

Thank you
Francis Girard

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unicode BOM marks

2005-03-08 Thread Francis Girard
Hi,

Thank you for your answer. That confirms what Martin v. Löwis says. You can
choose between UCS-2 or UCS-4 for the internal unicode representation.

Francis Girard

On Tuesday 8 March 2005 00:44, Jeff Epler wrote:
 On Mon, Mar 07, 2005 at 11:56:57PM +0100, Francis Girard wrote:
  BTW, the Python unicode built-in function documentation says it returns
  a "unicode string", which scarcely means anything. What is the Python
  internal unicode encoding ?

 The language reference says fairly little about unicode objects.  Here's
 what it does say: [http://docs.python.org/ref/types.html#l2h-48]
 Unicode
 The items of a Unicode object are Unicode code units. A Unicode
 code unit is represented by a Unicode object of one item and can
 hold either a 16-bit or 32-bit value representing a Unicode
 ordinal (the maximum value for the ordinal is given in
 sys.maxunicode, and depends on how Python is configured at
 compile time). Surrogate pairs may be present in the Unicode
 object, and will be reported as two separate items. The built-in
 functions unichr() and ord() convert between code units and
 nonnegative integers representing the Unicode ordinals as
 defined in the Unicode Standard 3.0. Conversion from and to
 other encodings are possible through the Unicode method encode
 and the built-in function unicode().

 In terms of the CPython implementation, the PyUnicodeObject is laid out
 as follows:
 typedef struct {
     PyObject_HEAD
     int length;           /* Length of raw Unicode data in buffer */
     Py_UNICODE *str;      /* Raw Unicode buffer */
     long hash;            /* Hash value; -1 if not set */
     PyObject *defenc;     /* (Default) Encoded version as Python
                              string, or NULL; this is used for
                              implementing the buffer protocol */
 } PyUnicodeObject;
 Py_UNICODE is some C integral type that can hold values up to
 sys.maxunicode (probably one of unsigned short, unsigned int, unsigned
 long, wchar_t).

 Jeff

--
http://mail.python.org/mailman/listinfo/python-list


Unicode BOM marks

2005-03-07 Thread Francis Girard
Hi,

For the first time in my programmer life, I have to take care of character 
encoding. I have a question about the BOM marks. 

If I understand correctly, in the UTF-8 binary representation, some systems
add a BOM mark at the beginning of the file (Windows?) and some don't
(Linux?). Therefore, the exact same text encoded in the same UTF-8 will
result in two different binary files, of slightly different lengths.
Right ?

I guess that this leading BOM mark consists of special marking bytes that
cannot, in any way, be decoded as valid text.
Right ?
(I really really hope the answer is yes, otherwise we're in hell when moving
files from one platform to another, even with the same Unicode encoding.)

I also guess that this leading BOM mark is silently ignored by any
Unicode-aware file stream reader to which we have already indicated that the
file follows the UTF-8 encoding standard.
Right ?

If so, is it the case with the Python codecs decoder ?

In the Python documentation, I see these constants. The documentation is not
clear about which encodings these constants apply to. Here's my understanding :

BOM : UTF-8 only or UTF-8 and UTF-32 ?
BOM_BE : UTF-8 only or UTF-8 and UTF-32 ?
BOM_LE : UTF-8 only or UTF-8 and UTF-32 ?
BOM_UTF8 : UTF-8 only
BOM_UTF16 : UTF-16 only
BOM_UTF16_BE : UTF-16 only
BOM_UTF16_LE : UTF-16 only
BOM_UTF32 : UTF-32 only
BOM_UTF32_BE : UTF-32 only
BOM_UTF32_LE : UTF-32 only

Why should I need these constants if the codecs decoder can handle them
without my help, when I only specify the encoding ?

Thank you

Francis Girard




Python tells me to use an encoding declaration at the top of my files (the 
message is referring to http://www.python.org/peps/pep-0263.html).

I expected to see there a list of acceptable 

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unicode BOM marks

2005-03-07 Thread Francis Girard
On Monday 7 March 2005 21:54, Martin v. Löwis wrote:

Hi,

Thank you for your very informative answer. Some interspersed remarks  follow.


 I personally would write my applications so that they put the signature
 into files that cannot be concatenated meaningfully (since the
 signature simplifies encoding auto-detection) and leave out the
 signature from files which can be concatenated (as concatenating the
 files will put the signature in the middle of a file).


Well, no. Text files *will* get concatenated ! Sooner or later, someone will
use cat on the text files your application generated. That will be a lot of
fun for the new Unicode-aware super-cat.

  I guess that this leading BOM mark are special marking bytes that can't
  be, in no way, decoded as valid text.
  Right ?

 Wrong. The BOM mark decodes as U+FEFF:
   >>> codecs.BOM_UTF8.decode("utf-8")
   u'\ufeff'

I meant "valid text" to denote human-readable, actual, real natural-language
text. My intent with this question was to make sure that we can easily
distinguish a UTF-8 file with the signature from one without. Your answer
implies a yes.

  I also guess that this leading BOM mark is silently ignored by any
  unicode aware file stream reader to which we already indicated that the
  file follows the UTF-8 encoding standard.
  Right ?

 No. It should eventually be ignored by the application, but whether the
 stream reader special-cases it or not depends on application needs.


Well, for most of us, I think, the need is to transparently decode the input
into a unique internal unicode encoding (UTF-16 for both Java and Qt ; the Qt
docs saying there might be a need to switch to UTF-32 some day) and then be
able to manipulate this internal text with the usual tools your programming
system provides. By transparent, I mean, at least, being able to
automatically process the two variants of the same UTF-8 encoding. We should
only have to specify UTF-8, and the streamer takes care of the rest.

BTW, the Python unicode built-in function documentation says it returns a
"unicode string", which scarcely means anything. What is the Python
internal unicode encoding ?


 No; the Python UTF-8 codec is unaware of the UTF-8 signature. It reports
 it to the application when it finds it, and it will never generate the
 signature on its own. So processing the UTF-8 signature is left to the
 application in Python.

Ok.
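
So, in practice, the application has to do something like this minimal sketch
itself (the filename is hypothetical):

import codecs

raw = open("some_file.txt", "rb").read()
if raw.startswith(codecs.BOM_UTF8):
  raw = raw[len(codecs.BOM_UTF8):]   ## strip the UTF-8 signature by hand
text = raw.decode("utf-8")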

  In the Python documentation, I see these constants. The documentation is
  not clear about which encodings these constants apply to. Here's my
  understanding :
 
  BOM : UTF-8 only or UTF-8 and UTF-32 ?

 UTF-16.

  BOM_BE : UTF-8 only or UTF-8 and UTF-32 ?
  BOM_LE : UTF-8 only or UTF-8 and UTF-32 ?

 UTF-16

Ok.

  Why should I need these constants if codecs decoder can handle them
  without my help, only specifying the encoding ?

 Well, because the codecs don't. It might be useful to add a
 utf-8-signature codec some day, which generates the signature on
 encoding, and removes it on decoding.

Ok.

My sincere thanks,

Francis Girard

 Regards,
 Martin

--
http://mail.python.org/mailman/listinfo/python-list


Re: yield_all needed in Python

2005-03-02 Thread Francis Girard
On Wednesday 2 March 2005 21:32, Skip Montanaro wrote:
 def f():
     yield from (x for x in gen1(arg))

 Skip

This suggestion was made in a previous posting, and it has my preference :

def f():
yield from gen1(arg)

Regards

Francis

--
http://mail.python.org/mailman/listinfo/python-list


Re: yield_all needed in Python

2005-03-01 Thread Francis Girard
Hi,

You absolutely and definitively have my vote. 

When I first learned generators, I even wondered whether there was something
wrong in what I was doing when faced with the sub-generator problem you
describe. Why am I doing this extra for-loop ? Is there something wrong ? Can
I return the sub-iterator itself and let the final upper loop do the job ? But
no, I absolutely have to 'yield'. What then ?

Therefore, the suggestion you make, or something similar, would actually have
eased my learning, at least for me.
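
The pattern in question, sketched (the generator names are made up):

def subgen():
  yield 1
  yield 2

def gen():
  yield 0
  for x in subgen():   ## the extra for-loop the proposal would remove
    yield x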

Regards,

Francis Girard

Le mardi 1 Mars 2005 19:22, Douglas Alan a écrit :
 For me, it's a matter of providing the ability to implement
 subroutines elegantly within generators.  Without yield_all, it is not
  elegant at all to use subroutines to do some of the yielding, since
 the calls to the subroutines are complex, verbose statements, rather
 than simple ones.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Module RE, Have a couple questions

2005-03-01 Thread Francis Girard
On Tuesday 1 March 2005 16:52, Marc Huffnagle wrote:
 [line for line in document if (line.find('word') != -1 \
 and line.find('wordtwo') != -1)]

Hi,

Using re might be faster than scanning the same line twice :

=== begin snap
## rewords.py

import re
import sys

def iWordsMatch(lines, word, word2):
  reWordOneTwo = re.compile(r".*(%s|%s).*" % (word, word2))
  return (line for line in lines if reWordOneTwo.match(line))

for line in iWordsMatch(open("rewords.py"), "re", "return"):
  sys.stdout.write(line)
=== end snap

Furthermore, using a generator expression (2.4 only, I think) and the file
iterator, you can scan files as big as you want with very little memory
usage.

Regards,

Francis Girard

--
http://mail.python.org/mailman/listinfo/python-list


Re: Module RE, Have a couple questions

2005-03-01 Thread Francis Girard
Hi,

On Tuesday 1 March 2005 22:15, [EMAIL PROTECTED] wrote:
 Now I don't know this stuff very well but I don't think the code

  [line for line in document if (line.find('word') != -1 \
          and line.find('wordtwo') != -1)]

 would do this, as it answers the question you thought I asked.

Just use "or" instead of "and" and you'll get what you need.

 var1 = "this is a test\nand another test"
 [line for line in var1 if line.find('t') != -1]

You are scanning the letters in the string.
You first need to split your input into lines, in order to scan over strings 
in a list of strings. Use :

var1 = "this is a test\nand another test"
[line for line in var1.splitlines() if line.find('t') != -1]

 for line in iWordsMatch(data, "microsoft", "windows")

Same as above. Use:

for line in iWordsMatch(data.splitlines(), "microsoft", "windows")

Why Microsoft and Windows ? I am very pleased to see that "Microsoft Windows"
is now used instead of "foo" and "bar".

Regards

--
http://mail.python.org/mailman/listinfo/python-list


Re: yield_all needed in Python

2005-03-01 Thread Francis Girard
Hi,

No, this won't do. What is needed is a way to yield the results of a generator
from inside another generator with having to do a for-yield-loop inside the
outer generator.

Regards,

Francis Girard

On Tuesday 1 March 2005 22:35, Adam Przybyla wrote:
 ... maybe that way:
 Python 2.2.3 (#1, Oct 15 2003, 23:33:35)
 [GCC 3.3.1 20030930 (Red Hat Linux 3.3.1-6)] on linux2
 Type "help", "copyright", "credits" or "license" for more information.

 >>> from __future__ import generators
 >>> def x():
 ...     for i in range(10): yield i
 ...
 >>> x()
 <generator object at 0x82414e0>
 >>> for k in x(): print k,
 ...
 0 1 2 3 4 5 6 7 8 9
 >>> for k in x(): print k,
 ...
 0 1 2 3 4 5 6 7 8 9
 >>> for k in x(): print k,
 ...
 0 1 2 3 4 5 6 7 8 9
 >>> yield_all = [k for k in x()]
 >>> yield_all
 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

--
http://mail.python.org/mailman/listinfo/python-list


Re: yield_all needed in Python

2005-03-01 Thread Francis Girard
Oops. I meant "without having" instead of "with having", which is a syntax
error.

Regards

On Tuesday 1 March 2005 22:53, Francis Girard wrote:
 No, this won't do. What is needed is a way to yield the results of a
 generator from inside another generator with having to do a for-yield-loop
 inside the outer generator.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Splitting strings - by iterators?

2005-02-25 Thread Francis Girard
Hi,

Using finditer from the re module might help. I'm not sure whether it is lazy
or how well it performs. Here's an example :

=== BEGIN SNAP
import re

reLn = re.compile(r"[^\n]*(\n|$)")

sStr = \
"""
This is a test string.
It is supposed to be big.
Oh well.
"""

for oMatch in reLn.finditer(sStr):
  print oMatch.group()
=== END SNAP
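
Another lazy alternative sketch: wrap the string in a file-like object and
use its line iterator (this reuses sStr from the snap above):

import sys
import cStringIO

for line in cStringIO.StringIO(sStr):
  sys.stdout.write(line)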

Regards,

Francis Girard

On Friday 25 February 2005 16:55, Jeremy Sanders wrote:
 I have a large string containing lines of text separated by '\n'. I'm
 currently using text.splitlines(True) to break the text into lines, and
 I'm iterating over the resulting list.

 This is very slow (when using 40 lines!). Other than dumping the
 string to a file, and reading it back using the file iterator, is there a
 way to quickly iterate over the lines?

 I tried using newpos=text.find('\n', pos), and returning the chopped text
 text[pos:newpos+1], but this is much slower than splitlines.

 Any ideas?

 Thanks

 Jeremy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Iterator / Iteratable confusion

2005-02-15 Thread Francis Girard
On Tuesday 15 February 2005 02:26, Terry Reedy wrote:
 Francis Girard [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]

 (Note for oldtimer nitpickers: except where relevant, I intentionally
 ignore the old and now mostly obsolete pseudo-__getitem__-based iteration
 protocol here and in other posts.)

 On Sunday 13 February 2005 23:58, Terry Reedy wrote:
  Iterators are a subgroup of iterables. Being able to say iter(it)
  without
  having to worry about whether 'it' is just an iterable or already an
  iterator is one of the nice features of the new iteration design.
 
  I have difficulty representing an iterator as a subspecies of an
  iteratable

 You are not the only one.  That is why I say it in plain English.

 You are perhaps thinking of 'iterable' as a collection of things.  But in
 Python, an 'iterable' is a broader and more abstract concept: anything with
 an __iter__ method that returns an iterator.


Yes, I certainly do define an iteratable as something _upon_ which you
iterate (i.e. a container of elements). The iterator is something that serves
the purpose of iterating _upon_ something else, i.e. the iteratable. For me,
it makes little sense to iterate _upon_ an iterator. The fact that, in
Python, both iterators and iteratables must support the __iter__ method is
only an implementation detail. Concepts must come first.

 To make iterators a separate, disjoint species then requires that they not
 have an __iter__ method.  Some problems:
 A. This would mean either
 1) We could not iterate with iterators, such as generators, which are
 *not* derived from iterables, or, less severely

Well, generators are a bit special as they are both (conceptually) iterators
and iteratables by their very intimate nature -- since the elements are
_produced_ as needed, i.e. only when you do iterate.
But as for ordinary iterators, I don't see any good conceptual reason why
such an iterator should support the __iter__ method. There might be other
reasons though (for example related to the "for ... in ..." construct, which
I discuss later in this reply).

 2) We would, usually, have to 'protect'  iter() calls with either
 hasattr(it, '__iter__') or try: iter(it)...except: pass with probably no
 net average time savings.

Well, I'm not interested in time savings for now. I only want to discuss the
more conceptual issues.

 B. This would prohibit self-reiterable objects, which require .__iter__ to
 (re)set the iteration/cursor variable(s) used by .next().

To sharply distinguish in code what is conceptually different is certainly
very good and safe design in general. But what I am thinking about would not
_prohibit_ it. Neither does the C++ STL prohibit it.

 C. There are compatibility issues not just just with classes using the old
 iteration protocol but also with classes with .next methods that do *not*
 raise StopIteration.  The presence of .__iter__ cleanly marks an object as
 one following the new iterable/iterator protocol.  Another language might
 accomplish the same flagging with inheritance from a base object, but that
 is not Python.

(That is not C++ templates either. See below.)

Why not "__next__" (or something else) instead of "next" for iterators and,
yes, "__iter__" for iteratables ?

  [snip]...C++ STL where there is a clear (I resist saying
  clean) distinction between iteratable and iterator.

 leaves out self-iterating iterables -- collection objects with a .next
 method.  

Nope. See the definition of an iterator in the C++ STL below. Anything
respecting the standard protocol is an iterator. It might be the container
itself. The point is that with the standard C++ STL protocol, you are not
___obliged___ to define an iterator as _also_ being an iteratable. Both
concepts are clearly separated.

 I am sure that this is a general, standard OO idiom and not a 
 Python-specific construct.  Perhaps, ignoring these, you would prefer the
 following nomenclature:
 iterob   = object with .__iter__
 iterable= iterob without .next
 iterator = iterob with .next

 Does STL allow/have iterators that are *not* tied to an iterable?

Yes, of course. A forward iterator, for example, is _anything_ that supports
the following :

===
In what follows, we shall adopt the following convention.

X : A type that is a model of Trivial Iterator
T : The value type of X
x, y, z : Objects of type X
t : Object of type T

Copy constructor : X(x) -> X
Copy constructor : X x(y); or X x = y;
Assignment : x = y [1] -> X
Swap : swap(x,y) -> void

Equality : x == y -> Convertible to bool
Inequality : x != y -> Convertible to bool

Default constructor : X x or X() -> X
Dereference : *x -> Convertible to T
Dereference assignment : *x = t -> X is mutable
Member access : x->m [2] -> T is a type for which x.m is defined
===

Anything that respects this convention is a forward iterator. They might
produce their own content as we iterate upon

Re: Iterator / Iteratable confusion

2005-02-14 Thread Francis Girard
On Sunday 13 February 2005 23:58, Terry Reedy wrote:
 Iterators are a subgroup of iterables. Being able to say iter(it) without
 having to worry about whether 'it' is just an iterable or already an
 iterator is one of the nice features of the new iteration design.

 Terry J. Reedy

Hi,

I have difficulty representing an iterator as a subspecies of an iteratable,
as they seem profoundly different to me. But it just might be that my mind is
too strongly influenced by the C++ STL, where there is a clear (I resist
saying clean) distinction between iteratable and iterator.

One of the results of not distinguishing them is that, at some point in your
programming, you are not sure anymore whether you have an iterator or an
iteratable ; and you might very well end up calling iter() or __iter__()
everywhere.
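
The defensive idiom in question, as a sketch:

def process(obj):
  it = iter(obj)   ## safe whether obj is an iteratable or already an iterator
  for x in it:
    print x,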

I am not concerned with the small performance issue involved here (as I very
seldom am) but with clarity. After all, why should you have to call __iter__
on an iterator you just constructed (as in my dummy example 2) ? One might
wonder : what ? Isn't this already the iterator ? But this, I agree, might
very well just be a beginner's question (as I am one), from trying to learn
the new iterator semantics.

I have a strong feeling that the problem arises from the difficulty of
marrying the familiar "for ... in ..." construct with iterators. If you put
an iteratable in the second argument place of the construct (as is
traditionally done), then the syntax construct itself is an implicit
iterator. Now, if you have explicit iterators, then they don't fit well with
the implicit iterator hidden in the syntax.

To have iterators act as iteratables might very well have been a compromise
to solve the problem.

I am not sure at all that it is a nice feature to consider an iterator to be
at the same level as an iteratable. It makes it a bit more awkward to have
the mind impulse, so to speak, to build iterators on top of other iterators
to slightly modify the way iteration is done. But I think I'm getting a bit
too severe here, as I think that the compromise choice made by Python is very
acceptable.

Regards,

PS : I am carefully reading Michael Spencer's very interesting reply.

Thank you,

Francis Girard





--
http://mail.python.org/mailman/listinfo/python-list


Re: Newbie help

2005-02-14 Thread Francis Girard
Mmm. This very much looks like homework from school. Right ?
Francis Girard

On Monday 14 February 2005 04:03, Chad Everett wrote:
 Hey guys,

 Hope you can help me again with another problem.  I am trying to learn
 Python on my own and need some help with the following.

 I am writing a program that has the PC pick a number and the user
 has five guesses to get the number.
 1. BUG:  If the number is say 35 and I guess 41 the program tells me
 that I guessed the correct number and tells me I guessed 31.

 2.When I do get the correct number I can not get the program to stop
 asking me for the number.


 Your help is greatly appreciated.

 Chad

 # Five Tries to Guess My Number
 #
 # The computer picks a random number between 1 and 100
 # The player gets Five tries to guess it and the computer lets
 # the player know if the guess is too high, too low
 # or right on the money
 #
 # Chad Everett 2/10/2005

 import random

 print "\tWelcome to 'Guess My Number'!"
 print "\nI'm thinking of a number between 1 and 100."
 print "You Only Have Five Guesses.\n"

 # set the initial values
 number = random.randrange(100) + 1
 guess = int(raw_input("Go Ahead and Take a guess: "))
 tries = 1


 # guessing loop


 while guess != number:

     if (guess > number):
         print "Guess Lower..."
     else:
         print "Guess Higher..."

     guess = int(raw_input("Take Another guess: "))
     tries += 1

     print "You guessed it!  The number was", number
     print "And it only took you", tries, "tries!\n"

     if tries == 5:
         print "Sorry You Lose"
         print "The Number was ", number

 raw_input("\n\nPress the enter key to exit.")

 THIS IS WHAT THE RESULTS LOOK LIKE WHEN I RUN THE PROGRAM

 Welcome to 'Guess My Number'!

 I'm thinking of a number between 1 and 100.
 You Only Have Five Guesses.

 Go Ahead and Take a guess: 99
 Guess Lower...
 Take Another guess: 98
 You guessed it!  The number was 85
 And it only took you 2 tries!

 Guess Lower...
 Take Another guess: 44
 You guessed it!  The number was 85
 And it only took you 3 tries!

 Guess Higher...
 Take Another guess: 55
 You guessed it!  The number was 85
 And it only took you 4 tries!

 Guess Higher...
 Take Another guess: 33
 You guessed it!  The number was 85
 And it only took you 5 tries!

 Sorry You Lose
 The Number was  85
 Guess Higher...
 Take Another guess:

--
http://mail.python.org/mailman/listinfo/python-list


Re: Commerical graphing packages?

2005-02-14 Thread Francis Girard
On Monday 14 February 2005 11:02, David Fraser wrote:
 Erik Johnson wrote:
  I am wanting to generate dynamic graphs for our website and would
  rather not invest the time in developing the code to draw these starting
  from graphics primitives. I am looking for something that is... fairly
  robust but our needs are relatively modest: X-Y scatter plots w/ data
  point symbols, multiple data set X-Y line plots, bar charts, etc.
 
  Preferably this would come from a company that can provide support 
  decent documentation, and a package that can be installed without a bunch
  of extra hassle (e.g., needs Numeric Python, needs to have the GD library
  installed, needs separate JPEG encoders, font libraries, etc.)
 
  I am aware of ChartDirector (http://www.advsofteng.com/ ) which
  explicitly supports python and seems to be about the right level of
  sophistication. I don't really know of any other packages in this space,
   do you? I am seeking feedback and recommendations from people who have
  used this package or similar ones. I am particularly interested to hear
  about any limitations or problems you ran into with whatever package you
  are using.
 
  Thanks for taking the time to read my post! :)

 It's worth checking out matplotlib as well, although it may not meet all
 your criteria ... but have a look, it's a great package

PyX might also be interesting, depending on your needs.

Regards

Francis Girard

--
http://mail.python.org/mailman/listinfo/python-list


Re: Newbie help

2005-02-14 Thread Francis Girard
Sorry.
You have my cheers.

I suggest you use the Python debugger through some GUI (Eric3,
http://www.die-offenbachs.de/detlev/eric3.html, for example, is a nice and
easy-to-use GUI). It will greatly help you appreciate Python control flow
during execution, and you will learn a lot more by trying to find your bugs
that way. It sometimes helps to suffer a little. But, of course, I think you
will always find a helping hand if needed.
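
For the record, the misbehaviour comes from indentation: the success prints
sit inside the while loop. A minimal reshaping sketch (not a full rewrite;
it reuses the variables from the program below):

while guess != number and tries < 5:
  if guess > number:
    print "Guess Lower..."
  else:
    print "Guess Higher..."
  guess = int(raw_input("Take Another guess: "))
  tries += 1

if guess == number:
  print "You guessed it!  The number was", number
  print "And it only took you", tries, "tries!\n"
else:
  print "Sorry You Lose"
  print "The Number was", number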

Regards

Francis Girard

On Monday 14 February 2005 21:58, Chad Everett wrote:
 Nope,  I am trying to learn it on my own.  I am using the book by Michael
 Dawson.

 Thanks,
 Francis Girard [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]
 Mmm. This very much looks like homework from school. Right ?
 Francis Girard

 On Monday 14 February 2005 04:03, Chad Everett wrote:
  Hey guys,
 
  Hope you can help me again with another problem.  I am trying to learn
  Python on my own and need some help with the following.
 
  I am writing a program that has the PC pick a number and the user
  has five guesses to get the number.
  1. BUG:  If the number is say 35 and I guess 41 the program tells me
  that I guessed the correct number and tells me I guessed 31.
 
  2.When I do get the correct number I can not get the program to stop
  asking me for the number.
 
 
  Your help is greatly appreciated.
 
  Chad
 
  # Five Tries to Guess My Number
  #
  # The computer picks a random number between 1 and 100
  # The player gets Five tries to guess it and the computer lets
  # the player know if the guess is too high, too low
  # or right on the money
  #
  # Chad Everett 2/10/2005

  import random

  print "\tWelcome to 'Guess My Number'!"
  print "\nI'm thinking of a number between 1 and 100."
  print "You Only Have Five Guesses.\n"

  # set the initial values
  number = random.randrange(100) + 1
  guess = int(raw_input("Go Ahead and Take a guess: "))
  tries = 1


  # guessing loop


  while guess != number:

      if (guess > number):
          print "Guess Lower..."
      else:
          print "Guess Higher..."

      guess = int(raw_input("Take Another guess: "))
      tries += 1

      print "You guessed it!  The number was", number
      print "And it only took you", tries, "tries!\n"

      if tries == 5:
          print "Sorry You Lose"
          print "The Number was ", number

  raw_input("\n\nPress the enter key to exit.")
 
  THIS IS WHAT THE RESULTS LOOK LIKE WHEN I RUN THE PROGRAM
 
  Welcome to 'Guess My Number'!
 
  I'm thinking of a number between 1 and 100.
  You Only Have Five Guesses.
 
  Go Ahead and Take a guess: 99
  Guess Lower...
  Take Another guess: 98
  You guessed it!  The number was 85
  And it only took you 2 tries!
 
  Guess Lower...
  Take Another guess: 44
  You guessed it!  The number was 85
  And it only took you 3 tries!
 
  Guess Higher...
  Take Another guess: 55
  You guessed it!  The number was 85
  And it only took you 4 tries!
 
  Guess Higher...
  Take Another guess: 33
  You guessed it!  The number was 85
  And it only took you 5 tries!
 
  Sorry You Lose
  The Number was  85
  Guess Higher...
  Take Another guess:

--
http://mail.python.org/mailman/listinfo/python-list


Re: Alternative to raw_input ?

2005-02-13 Thread Francis Girard
On Friday 11 February 2005 18:00, den wrote:
 import msvcrt
 msvcrt.getch()

I have frequently had the problem of needing something similar but *portable*.
Something as short and simple.

Does someone have an idea ?
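
One classic recipe, for the record (a sketch; the POSIX branch uses the
standard tty/termios modules):

try:
  from msvcrt import getch            ## Windows
except ImportError:
  import sys, tty, termios
  def getch():                        ## POSIX fallback
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    try:
      tty.setraw(fd)
      return sys.stdin.read(1)
    finally:
      termios.tcsetattr(fd, termios.TCSADRAIN, old)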

Thank you

Francis Girard

--
http://mail.python.org/mailman/listinfo/python-list


Re: A great Alan Kay quote

2005-02-13 Thread Francis Girard
On Friday 11 February 2005 21:45, Curt wrote:
 On 2005-02-10, Francis Girard [EMAIL PROTECTED] wrote:
  I think I've been enthusiastic too fast. While reading the article I
  grew more and more uncomfortable with sayings like :

 snip

 Yes, you may have grown uncomfortable because what you read has, at best,
 only the most tenuous of relations with what was written.  There is no way
 in God's frigid hell that your sayings (which were never uttered by Alan
 Kay) can be construed as anything other than a hopefully transient
 psychotic episode by anyone who read the interview with his head in a place
 other than where the moon doesn't shine.

 Please be so kind as to free your own from the breathless confines of your
 own fundamental delirium.

Wow ! Peace. I apologize. I didn't want to upset anyone. Of course it was my
own ad lib interpretation of what Alan Kay said. That's what I meant by
"sayings". But I should have been clearer. Anyway, it only implicates myself.

I live at a place where it rains most of the time and my head is indeed in a 
place where the moon doesn't shine, which may give a good explanation of my 
own fundamental delirium. 

For another fundamental delirium (which I certainly enjoy), see :

Steve Wart about why Smalltalk never caught on:

http://hoho.dyndns.org/~holger/smalltalk.html

as someone named Petite abeille (a name I also certainly do find full of
flavour) suggested to me.

My deepest apologies,

Francis Girard

--
http://mail.python.org/mailman/listinfo/python-list


Re: check if object is number

2005-02-13 Thread Francis Girard
On Friday 11 February 2005 20:11, Steven Bethard wrote:
 Is there a good way to determine if an object is a numeric type?
 Generally, I avoid type-checks in favor of try/except blocks, but I'm
 not sure what to do in this case:

   def f(i):
       ...
       if x < i:
           ...

 The problem is, no error will be thrown if 'i' is, say, a string:

 py> 1 < 'a'
 True
 py> 100 < 'a'
 True

 But for my code, passing a string is bad, so I'd like to provide an
 appropriate error.



This is a very bad Python feature that might very well be fixed in version 3.0 
according to your own reply to a previous thread. This problem clearly shows 
that this Python feature does hurt.
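
The usual workaround of the era, as a sketch (user-defined numeric types and
Decimal would need extra care):

def check_number(i):
  if not isinstance(i, (int, long, float, complex)):
    raise TypeError("expected a number, got %r" % (i,))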

Here's a transcript of the answer :

 Yes, that rule being to compare objects of different types by their type
 names (falling back to the address of the type object if the type names
 are the same, I believe).  Of course, this is arbitrary, and Python does
 not guarantee you this ordering -- it would not raise backwards
 compatibility concerns to, say, change the ordering in Python 2.5.

  What was the goal behind this rule ?

 I believe at the time, people thought that comparison should be defined
 for all Python objects.  Guido has since said that he wishes the
 decision hadn't been made this way, and has suggested that in Python
 3.0, objects of unequal types will not have a default comparison.

 Probably this means ripping the end off of default_3way_compare and
 raising an exception.  As Fredrik Lundh pointed out, they could, if they
 wanted to, also rip out the code that special-cases None too.

 Steve 



Regards

Francis Girard


--
http://mail.python.org/mailman/listinfo/python-list


Iterator / Iteratable confusion

2005-02-13 Thread Francis Girard
Hi,

I wrote simple dummy examples to teach myself iterators and generators.

I think that the iteration protocol creates a very real confusion between the
iterator and what it iterates upon (i.e. the iteratable). This is outlined in
example 3 of my dummy examples.

What are your feelings about it ? 

Regards

Francis Girard

 BEGINNING OF EXAMPLES 

from itertools import tee, imap, izip, islice
import sys
import traceback

sEx1Doc = \
"""
Example 1:

An iteratable class is a class supporting the "__iter__" method, which should
return an iterator instance, that is, an instance of a class supporting
the "next" method.

An iteratable, strictly speaking, doesn't have to support the "next" method,
and an iterator doesn't have to support the "__iter__" method (but this
breaks the iteration protocol, as we will later see).

The "for ... in ..." construct now expects an iteratable instance in its
second argument place. It first invokes its "__iter__" method and then
repeatedly invokes the "next" method on the object returned by "__iter__"
until the StopIteration exception is raised.

The design goal is to cleanly separate a container class from the way we
iterate over its internal elements.
"""
class Ex1_IteratableContClass:
  def __init__(self):
    print "Ex1_IteratableContClass.__init__"
    self._vn = range(0,10)

  def elAt(self, nIdx):
    print "Ex1_IteratableContClass.elAt"
    return self._vn[nIdx]

  def __iter__(self):
    print "Ex1_IteratableContClass.__iter__"
    return Ex1_IteratorContClass(self)

class Ex1_IteratorContClass:
  def __init__(self, cont):
    print "Ex1_IteratorContClass.__init__"
    self._cont = cont
    self._nIdx = -1

  def next(self):
    print "Ex1_IteratorContClass.next"
    self._nIdx += 1
    try:
      return self._cont.elAt(self._nIdx)
    except IndexError:
      raise StopIteration

print
print sEx1Doc
print
for n in Ex1_IteratableContClass():
  print n,
  sys.stdout.flush()
  

sEx2Doc = \
"""
Example 2:

Let's say that we want to give two ways to iterate over the elements of our
example container class. The default way is to iterate in direct order, and
we want to add the possibility to iterate in reverse order. We simply add
another iterator class. We do not want to modify the container class since,
precisely, the goal is to decouple the container from iteration.
"""
class Ex2_IteratableContClass:
  def __init__(self):
    print "Ex2_IteratableContClass.__init__"
    self._vn = range(0,10)

  def elAt(self, nIdx):
    print "Ex2_IteratableContClass.elAt"
    return self._vn[nIdx]

  def __iter__(self):
    print "Ex2_IteratableContClass.__iter__"
    return Ex1_IteratorContClass(self)

class Ex2_RevIteratorContClass:
  def __init__(self, cont):
    print "Ex2_RevIteratorContClass.__init__"
    self._cont = cont
    self._nIdx = 0

  def next(self):
    print "Ex2_RevIteratorContClass.next"
    self._nIdx -= 1
    try:
      return self._cont.elAt(self._nIdx)
    except IndexError:
      raise StopIteration

print
print sEx2Doc
print
print "Default iteration works as before"
print
for n in Ex2_IteratableContClass():
  print n,
  sys.stdout.flush()

print
print "But reverse iterator fails with an error : "
print

cont = Ex2_IteratableContClass()
try:
  for n in Ex2_RevIteratorContClass(cont):
    print n,
    sys.stdout.flush()
except:
  traceback.print_exc()

sEx3Doc = \
"""
Example 3.

The previous example fails with the "iteration over non-sequence" error
because the Ex2_RevIteratorContClass iterator class doesn't support the
"__iter__" method. We therefore have to supply one, even at the price of only
returning self.

I think this is ugly because it baffles the distinction between iterators and
iteratables, and we artificially have to add an "__iter__" method to the
iterator itself, which should return ... well, an iterator, which it already
is !

I presume that the rationale behind this is to keep the feature that the
second argument place of the "for ... in ..." construct should be filled by a
container (i.e. an iteratable), not an iterator.

Another way that Python might have done this without breaking existing code
is to make the "for ... in ..." construct invoke the "__iter__" method if it
exists, and otherwise directly call the "next" method on the supplied object.

But this is only a minor quirk, as the decoupling of the iterator from the
iteratable is nonetheless achieved at the (small) price of adding an almost
useless method.

So here it is.
"""

class Ex3_RevIteratorContClass:
  def __init__(self, cont):
print

Re: A great Alan Kay quote

2005-02-13 Thread Francis Girard
On Sunday 13 February 2005 19:05, Arthur wrote:
 On Sun, 13 Feb 2005 18:48:03 +0100, Francis Girard 

 My deepest apologies,
 
 Francis Girard

 Sorry if I helped get you into this, Francis.


No, no, don't worry. I really expressed my own opinions and feelings. At the
same time, I certainly understand that these opinions might have upset some
of the readers. I am sorry for this, as I don't think this is the right
discussion group for such opinions. Therefore it is useless to hurt people
with something that is not a positive contribution. That's all. I will try to
refrain in the future. I am just not used to talking to so many people at the
same time.

Regards,

Francis Girard


--
http://mail.python.org/mailman/listinfo/python-list


Re: A great Alan Kay quote

2005-02-10 Thread Francis Girard
On Thursday 10 February 2005 04:37, Arthur wrote:
 On Wed, 9 Feb 2005 21:23:06 +0100, Francis Girard

 [EMAIL PROTECTED] wrote:
 I love him.

 I don't.

 It's also interesting to see GUIs with windows, mouse (etc.), which
  apparently find their origin in is mind, probably comes from the desire
  to introduce computers to children.

 Alfred Bork, now
 Professor Emeritus
 Information and Computer Science
 University of California, Irvine 92697

 had written an article in 1980 called

 Interactive Learning which began

 We are at the onset of a major revolution in education, a revolution
 unparalleled since the invention of the printing press. The computer
 will be the instrument of this revolution.

 In 2000 he published:

 Interactive Learning: Twenty Years Later

 looking back on his orignal article and its optimistic predictions and
 admitting I was not a very good prophet

 What went wrong?

 Among other things he points (probably using a pointing device) at the
 pointing device

 
 Another is the rise of the mouse as a computer device. People had the
 peculiar idea that one could deal with the world of learning purely by
 pointing.

 
 The articles can be found here:

 http://www.citejournal.org/vol2/iss4/seminal.cfm

 One does not need to agree or disagree, it seems to me about this or
 that point on interface, or influence, or anything else. What one does
 need to do is separate hope from actuality, and approach the entire
 subject area with some sense of what is at stake, and with some true
 sense of the complexity of the issues, in such a  way that at this
 stage of the game the only authentic stance is one of humility,

 Kay fails the humility test, dramatically. IMO.

I think I've been enthusiastic too fast. While reading the article I grew
more and more uncomfortable with sayings like :

- Intel and Motorola don't know how to do micro-processors and did not 
understand anything in our own architecture
- Languages of today are features filled doggy bags
- Java failed where I succeeded
- I think beautifully like a mathematician while the rest is pop culture
- etc.

I'm not sure at all he likes Python. Python is too pragmatic for him. And its
definition does not hold in the palm of his hand.

I think he's a bit nostalgic.

Francis Girard





 Art

--
http://mail.python.org/mailman/listinfo/python-list


Re: A great Alan Kay quote

2005-02-10 Thread Francis Girard
Thank you.

Francis Girard

On Thursday 10 February 2005 02:48, Scott David Daniels wrote:
 Francis Girard wrote:
  ...
  It's also interesting to see GUIs with windows, mouse (etc.), which
  apparently find their origin in is mind, probably comes from the desire
  to introduce computers to children.

 OK, presuming "origin in is mind" was meant to say "origin in his mind",
 I'd like to stick up for Doug Engelbart (holds the patent on the mouse)
 here.  I interviewed with his group at SRI in the ancient past, when
 they were working on the Augmentation Research project -- machine
 augmentation of human intelligence.  They, at the time, were working on
 input pointing devices and hadn't yet settled.  The helmet that read
 brain waves was doing astoundingly well (90% correct on up, down, left,
 right, don't move), but nowhere near well enough to use for positioning
 on edits.  This work produced the mouse, despite rumors of Xerox Parc or
 Apple inventing the mouse.

 Xerox Parc, did, as far as I understand, do the early development on
 interactive graphic display using a mouse for positioning on a
 graphics screen.  Engelbart's mouse navigated on a standard 80x24
 character screen.

 Augment did real research on what might work, with efforts to measure
 ease of use and reliability.  They did not simply start with a good
 (or great) guess and charge forward.  They produced the mouse, and the
 earliest linked documents that I know of.

  http://sloan.stanford.edu/MouseSite/1968Demo.html

 --Scott David Daniels
 [EMAIL PROTECTED]

--
http://mail.python.org/mailman/listinfo/python-list


Re: A great Alan Kay quote

2005-02-10 Thread Francis Girard
On Thursday 10 February 2005 19:47, PA wrote:
 On Feb 10, 2005, at 19:43, Francis Girard wrote:
  I think he's a bit nostalgic.

 Steve Wart about why Smalltalk never caught on:

 http://hoho.dyndns.org/~holger/smalltalk.html

 Cheers

 --
 PA, Onnay Equitursay
 http://alt.textdrive.com/

!

Francis Girard

--
http://mail.python.org/mailman/listinfo/python-list


Re: convert list of tuples into several lists

2005-02-09 Thread Francis Girard
On Wednesday 9 February 2005 14:46, Diez B. Roggisch wrote:
 zip(*[(1,4),(2,5),(3,6)])

 --
 Regards,

 Diez B. Roggisch

That's incredibly clever ! I would never have thought of using zip to do
this ! I would only have thought of using it for the contrary, i.e.

>>> zip([1,2,3], [4,5,6])
[(1, 4), (2, 5), (3, 6)]

Notice though that the solution doesn't yield the exact contrary as we end up 
with a list of tuples instead of a list of lists :

>>> zip(*[(1,4),(2,5),(3,6)])
[(1, 2, 3), (4, 5, 6)]

But this can easily be fixed if lists are really wanted :

>>> map(list, zip(*[(1,4),(2,5),(3,6)]))
[[1, 2, 3], [4, 5, 6]]


Anyway, I find Diez's solution brilliant ! I'm always amazed to see how
skilled a programmer can get when the time comes to find a short and sweet
solution.

One can admire that zip(*zip(*a_list_of_tuples)) == a_list_of_tuples

Thank you
You gave me much to enjoy

Francis Girard


--
http://mail.python.org/mailman/listinfo/python-list


Re: A great Alan Kay quote

2005-02-09 Thread Francis Girard

"Today he is Senior Fellow at Hewlett-Packard Labs and president of Viewpoints
Research Institute, a nonprofit organization whose goal is to change how
children are educated by creating a sample curriculum with supporting media
for teaching math and science. This curriculum will use Squeak as its media,
and will be highly interactive and constructive. Kay's deep interests in
children and education have been the catalysts for many of his ideas over the
years."


I love him.

It's also interesting to see GUIs with windows, mouse (etc.), which apparently 
find their origin in is mind, probably comes from the desire to introduce 
computers to children.

Francis Girard

On Wednesday 9 February 2005 20:29, Grant Edwards wrote:
 On 2005-02-09, James [EMAIL PROTECTED] wrote:
  Surely
 
  Perl is another example of filling a tiny, short-term need, and then
  being a real problem in the longer term.
 
  is better lol ;)

 That was the other one I really liked, and Perl was the first
 language I thought of when I saw the phrase agglutination of
 features.  C++ was the second one.

 --
 Grant Edwards   grante at visi.com
 Yow! In 1962, you could buy a pair of SHARKSKIN SLACKS,
 with a Continental Belt, for $10.99!!

--
http://mail.python.org/mailman/listinfo/python-list


Re: After 40 years ... Knuth vol 4 is to be published!!

2005-02-07 Thread Francis Girard
I think that Led Zeppelin, Pink Floyd and the Beatles (this time with John 
Lennon back from the cemetery) also made a comeback. Addison-Wesley decided 
to preprint a photo of the messiah (just for us!)

Yippee!
Francis Girard

On Saturday 5 February 2005 13:39, Laura Creighton wrote:
 More than forty years in the making, the long-anticipated Volume 4
 of The Art of Computer Programming is about to make its debut... in
 parts. Rather than waiting for the complete book, Dr. Knuth and
 Addison-Wesley are publishing it in installments (fascicles) à la
 Charles Dickens.

 See http://www.bookpool.com/.x/xx/ct/163 for an excerpt and more info
 on Volume 4.

 And Addison-Wesley is offering Bookpool customers an exclusive sneak
 peek -- the first official excerpt from the series.

 (above from the same site)
 Yippee!
 Laura

--
http://mail.python.org/mailman/listinfo/python-list


Re: Collapsing a list into a list of changes

2005-02-07 Thread Francis Girard
No. I just wanted to naively use one of the base tools Python has to offer. I 
thought it was very usable and very readable. I might be wrong.

Is there someone on this list using this tool and happy with it ? Or is my 
mind too much targeted on the FP paradigm, and do most of you really think that 
all the functions that apply another function to each and every element of a 
list are bad (like reduce, map, filter) ?

Francis Girard

On Monday 7 February 2005 19:25, Steven Bethard wrote:
 Francis Girard wrote:
  I'm very sorry that there is no good use case for the reduce function
  in Python, like Peter Otten pretends. That's an otherwise very useful
  tool for many use cases. At least on paper.

 Clarity aside[1], can you give an example of where reduce is as
 efficient as the equivalent loop in Python?

 Steve

 [1] I generally find most reduce solutions much harder to read, but
 we'll set aside my personal limitations for the moment. ;)

--
http://mail.python.org/mailman/listinfo/python-list


Re: Collapsing a list into a list of changes

2005-02-07 Thread Francis Girard
On Monday 7 February 2005 19:51, John Lenton wrote:
 On Mon, Feb 07, 2005 at 07:07:10PM +0100, Francis Girard wrote:
  Zut !
 
  I'm very sorry that there is no good use case for the reduce function
  in Python, like Peter Otten pretends. That's an otherwise very useful
  tool for many use cases. At least on paper.
 
  Python documentation should say "There is no good use case for the reduce
  function in Python and we don't know why we bother you offering it."

 I am guessing you are joking, right? 

Of course I am joking. I meant the exact contrary. I only wanted to show that 
Peter did exaggerate. Thank you for citing me with the full context.

 I think Peter exaggerates when he 
 says that there will be no good use cases for reduce; it is very
 useful, in writing very compact code when it does exactly what you
 want (and you have to go through hoops to do it otherwise). It also
 can be the fastest way to do something. For example, the fastest way
 to get the factorial of a (small enough) number in pure python is

   factorial = lambda n: reduce(operator.mul, range(1, n+1))

Great. 
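
For the record, here is a minimal, self-contained comparison of the two styles 
(plain Python 2, nothing assumed beyond the standard operator module) :

-- BEGIN SNAP
import operator

def fact_reduce(n):
  ## multiply 1*2*...*n in a single reduce pass (for n >= 1)
  return reduce(operator.mul, range(1, n + 1))

def fact_loop(n):
  ## the equivalent explicit loop
  result = 1
  for i in range(2, n + 1):
    result *= i
  return result

assert fact_reduce(10) == fact_loop(10) == 3628800
-- END SNAP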

Regards

Francis Girard

--
http://mail.python.org/mailman/listinfo/python-list


Re: Collapsing a list into a list of changes

2005-02-07 Thread Francis Girard
On Monday 7 February 2005 20:30, Steven Bethard wrote:
 Francis Girard wrote:
  Is there someone on this list using this tool and happy with it ? Or is
  my mind too much targeted on FP paradigm and most of you really think
  that all the functions that apply another function to each and every
  elements of a list are bad (like reduce, map, filter) ?

 I know there are definitely proponents for map and filter, especially
 for simple cases like:

  map(int, lst)
  filter(str.strip, lst)

 Note that, unlike reduce, map and filter aren't really going to increase
 the number of function calls.  Consider the equivalent list comprehensions:

  [int(x) for x in lst]
  [x for x in lst if str.strip(x)][1]

 The list comprehensions also require the same number of function calls
 in these cases.  Of course, in cases where a function does not already
 exist, map and filter will require more function calls.  Compare:

  map(lambda x: x**2 + 1, lst)

 with

  [x**2 + 1 for x in lst]

 Where the LC allows you to essentially inline the function.  (You can
 dis.dis these to see the difference if you like.)



I see.



 As far as my personal preferences go, while the simple cases of map and
 filter (the ones using existing functions) are certainly easy enough for
 me to read, for more complicated cases, I find things like:

  [x**2 + 1 for x in lst]
  [x for x in lst if (x**2 + 1) % 3 == 1]

 much more readable than the corresponding:

  map(lambda x: x**2 + 1, lst)
  filter(lambda x: (x**2 + 1) % 3 == 1, lst)


I agree.

 especially since I avoid lambda usage, and would have to write these as:

Why avoid lambda usage ? Do you find them too difficult to read (I mean in 
general) ?


  def f1(x):
      return x**2 + 1
  map(f1, lst)

  def f2(x):
      return (x**2 + 1) % 3 == 1
  filter(f2, lst)

 (I actually find the non-lambda code clearer, but still more complicated
 than the list comprehensions.)

 Given that I use list comprehensions for the complicated cases, I find
 it to be more consistent if I use list comprehensions in all cases, so I
 even tend to avoid the simple map and filter cases.


 As far as reduce goes, I've never seen code that I thought was clearer
 using reduce than using a for-loop.  I have some experience with FP, and
 I can certainly figure out what a given reduce call is doing given
 enough time, but I can always understand the for-loop version much more
 quickly.


I think it's a question of habits. But I agree that we should always go with 
code we find easy to read and that we think others will find the same.
I think I will stop using that function in Python if Python practitioners find 
it difficult to read in general. There is no point in being cool just for 
being cool.

Thank you
Francis Girard


 Of course, YMMV.

 STeVe


 [1] While it's not technically equivalent, this would almost certainly
 be written as:
  [x for x in lst if x.strip()]
 which in fact takes better advantage of Python's duck-typing -- it will
 work for unicode objects too.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Collapsing a list into a list of changes

2005-02-07 Thread Francis Girard
On Monday 7 February 2005 21:21, Steven Bethard wrote:
 Francis Girard wrote:
  On Monday 7 February 2005 20:30, Steven Bethard wrote:
 especially since I avoid lambda usage, and would have to write these as:
 
  Why avoid lambda usage ? You find them too difficult to read (I mean in
  general) ?

 Yup, basically a readability thing.  I also tend to find that if I
 actually declare the function, I can often find a way to refactor things
 to make that function useful in more than one place.

 Additionally, 'lambda' is on Guido's regrets list, so I'm avoiding it's
 use in case it gets yanked for Python 3.0.  I think most code can be
 written now without it, and in most cases code so written is clearer.
 Probably worth looking at is a thread I started that went through some
 stdlib uses of lambda and how they could be rewritten:

 http://mail.python.org/pipermail/python-list/2004-December/257990.html

 Many were rewritable with def statements, list comprehensions, the
 operator module functions, or unbound or bound methods.

 Steve

I see. I personally use them frequently to bind an argument of a function to 
some fixed value. Modifying one of the examples in 
http://mail.python.org/pipermail/python-list/2004-December/257990.html
I frequently have something like :

SimpleXMLRPCServer.py:  server.register_1arg-place_function(lambda x: x+2, 
'add2')

If Guido doesn't like lambdas, he would have to give me some way to easily do 
this. Something like (or similar, to fit Python syntax) :

SimpleXMLRPCServer.py:  server.register_1arg-place_function(\+2, 'add2')

This would be great.

Regards

Francis Girard



 server.register_function(operator.add, 'add')


--
http://mail.python.org/mailman/listinfo/python-list


Re: Collapsing a list into a list of changes

2005-02-07 Thread Francis Girard
On Monday 7 February 2005 22:53, Steven Bethard wrote:
 Francis Girard wrote:
  I see. I personnaly use them frequently to bind an argument of a function
  with some fixed value. Modifying one of the example in
  http://mail.python.org/pipermail/python-list/2004-December/257990.html
  I frequently have something like :
 
  SimpleXMLRPCServer.py:  server.register_1arg-place_function(lambda x:
  x+2, 'add2')

 PEP 309 has already been accepted, so I assume it will appear in Python
 2.5:

  http://www.python.org/peps/pep-0309.html

 Your code could be written as:

  server.register_1arg-place_function(partial(operator.add, 2))

 If your task is actually what you describe above -- to bind an argument
 of a function with some fixed value[1] -- then in general, you should
 be able to write this as[2]:

  function_with_fixed_value = partial(function, fixed_value)


Great ! Great ! Great !
Really, Python is going in the right direction all of the time !
Very good news.
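
For instance, sticking with the add2 example above, this is what it should 
look like once functools lands in 2.5 (a sketch based on the PEP, not on 
released code) :

-- BEGIN SNAP
from functools import partial ## planned for Python 2.5 (PEP 309)
import operator

add2 = partial(operator.add, 2) ## binds the first argument of operator.add to 2
print add2(5)  ## prints 7
print add2(40) ## prints 42
-- END SNAP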

Regards

Francis Girard


 Steve

 [1] As opposed to binding a name to be used in an _expression_ as you do
 in your example.

 [2] The partial function can also be used to fix multiple argument
 values and keyword argument values, if that's necessary for your purposes.

--
http://mail.python.org/mailman/listinfo/python-list


Re: WYSIWYG wxPython IDE....?

2005-02-07 Thread Francis Girard
On Tuesday 8 February 2005 00:56, Simon John wrote:
 With the news of a GPL Qt4 for Windows, I decided to go with PyQt:

 http://mats.imk.fraunhofer.de/pipermail/pykde/2005-February/009527.html

 I just knocked up my application (GUI, backend is still in progress)
 using QtDesigner in about 5 minutes, and it's layout is just how I want
 it!

Exactly the same for me. Qt is just a pure marvel. I hope this will not kill 
wx though. We need diversity. It might very well be that wx being GPL on 
Windows was one of the factors that made Trolltech decide to do the same with Qt.

Francis Girard

--
http://mail.python.org/mailman/listinfo/python-list


Re: remove duplicates from list *preserving order*

2005-02-06 Thread Francis Girard
Hi,

I think your last solution is not good unless your list is sorted (in which 
case the solution is trivial) since you certainly do have to see all the 
elements in the list before deciding that a given element is not a duplicate. 
You have to exhaust the iterable before yielding anything.

Besides that, I was thinking that the best solutions presented here proceed in 
two steps :

1- Check that an element is not already in some ordered data type of 
already-seen elements

2- If not, put the element in that structure

That's probably twice the work.

There might be some way to do both at the same time, i.e.

- Put the element in the data structure only if not already there, and tell 
me, with some boolean, whether you did put the element.

Then you have to do the work of finding the right place where to insert 
(fetch) the element only once. I don't see any easy way to do this in Python, 
other than rolling up your sleeves and coding your own data structure in 
Python, which would be slower than the provided dict or set C implementation.

I think this is a hole in the Python libs.
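
That said, one can get fairly close to insert-and-tell with the built-in 
set, at the cost of two len calls instead of a second lookup -- a minimal 
sketch, not a real new data structure :

-- BEGIN SNAP
def filterdups_onepass(iterable):
  seen = set()
  for item in iterable:
    before = len(seen)
    seen.add(item)          ## a single insertion, no separate membership test
    if len(seen) != before: ## the set grew, so the item was new
      yield item

print list(filterdups_onepass([1, 2, 1, 3, 2, 4])) ## prints [1, 2, 3, 4]
-- END SNAP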

Regards

Francis Girard

On Thursday 3 February 2005 21:39, Steven Bethard wrote:
 I'm sorry, I assume this has been discussed somewhere already, but I
 found only a few hits in Google Groups...  If you know where there's a
 good summary, please feel free to direct me there.


 I have a list[1] of objects from which I need to remove duplicates.  I
 have to maintain the list order though, so solutions like set(lst), etc.
 will not work for me.  What are my options?  So far, I can see:

 def filterdups(iterable):
     result = []
     for item in iterable:
         if item not in result:
             result.append(item)
     return result

 def filterdups(iterable):
     result = []
     seen = set()
     for item in iterable:
         if item not in seen:
             result.append(item)
             seen.add(item)
     return result

 def filterdups(iterable):
     seen = set()
     for item in iterable:
         if item not in seen:
             seen.add(item)
             yield item

 Does anyone have a better[2] solution?

 STeve

 [1] Well, actually it's an iterable of objects, but I can convert it to
 a list if that's helpful.

 [2] Yes I know, better is ambiguous.  If it helps any, for my
 particular situation, speed is probably more important than memory, so
 I'm leaning towards the second or third implementation.

--
http://mail.python.org/mailman/listinfo/python-list


Re: returning True, False or None

2005-02-06 Thread Francis Girard
I think it is evil to do something "at your own risk" ; I will certainly not 
embark on some roller coaster at my own risk.

I also think it is evil to scan the whole list (as "max" ought to do) when 
only scanning the first few elements would suffice most of the time.

Regards

Francis Girard

On Friday 4 February 2005 21:13, Steven Bethard wrote:
 Fredrik Lundh wrote:
  Steven Bethard wrote:
  Raymond Hettinger wrote:
  return max(lst)
 
  Very clever!  Thanks!
 
  too clever.  boolean  None isn't guaranteed by the language
  specification:

 Yup.  I thought about mentioning that for anyone who wasn't involved in
 the previous thread discussing this behavior, but I was too lazy.  ;)
 Thanks for pointing it out again.

 This implementation detail was added in Python 2.1a1, with the following
 note[1]:

 The outcome of comparing non-numeric objects of different types is
 not defined by the language, other than that it's arbitrary but
 consistent (see the Reference Manual).  An implementation detail changed
 in 2.1a1 such that None now compares less than any other object.  Code
 relying on this new behavior (like code that relied on the previous
 behavior) does so at its own risk.

 Steve

 [1] http://www.python.org/2.1/NEWS.txt

--
http://mail.python.org/mailman/listinfo/python-list


Re: Collapsing a list into a list of changes

2005-02-06 Thread Francis Girard
This is a prefect use case for the good old reduce function:

--BEGIN SNAP

a_lst = [None,0,0,1,1,1,2,2,3,3,3,2,2,2,4,4,4,5]

def straightforward_collapse(lst):
  return reduce(lambda v,e: v[-1]!=e and v+[e] or v, lst[1:], [lst[0]])

def straightforward_collapse_secu(lst):
  return lst and reduce(lambda v,e: v[-1]!=e and v+[e] or v, lst[1:], 
[lst[0]]) or []
  
print straightforward_collapse(a_lst)
print straightforward_collapse_secu([])

--END SNAP

Regards

Francis Girard

On Friday 4 February 2005 20:08, Steven Bethard wrote:
 Mike C. Fletcher wrote:
  Alan McIntyre wrote:
  ...
 
  I have a list of items that has contiguous repetitions of values, but
  the number and location of the repetitions is not important, so I just
  need to strip them out.  For example, if my original list is
  [0,0,1,1,1,2,2,3,3,3,2,2,2,4,4,4,5], I want to end up with
  [0,1,2,3,2,4,5].
 
  ...
 
  Is there an elegant way to do this, or should I just stick with the
  code above?
 
def changes( dataset ):
 
  ... last = None
  ... for value in dataset:
  ...     if value != last:
  ...         yield value
  ...         last = value
  ... print list(changes(data ))
 
  which is quite readable/elegant IMO.

 But fails if the list starts with None:

 py lst = [None,0,0,1,1,1,2,2,3,3,3,2,2,2,4,4,4,5]
 py def changes(dataset):
 ...     last = None
 ...     for value in dataset:
 ...         if value != last:
 ...             yield value
 ...             last = value
 ...
 py list(changes(lst))
 [0, 1, 2, 3, 2, 4, 5]

 A minor modification that does appear to work:

 py def changes(dataset):
 ...     last = object()
 ...     for value in dataset:
 ...         if value != last:
 ...             yield value
 ...             last = value
 ...
 py list(changes(lst))
 [None, 0, 1, 2, 3, 2, 4, 5]

 STeVe

--
http://mail.python.org/mailman/listinfo/python-list


Re: string issue

2005-02-06 Thread Francis Girard
Yes, I also think that comprehension is the clearer way to handle this. You 
might also consider the good old filter function :

ips = filter(lambda ip: '255' not in ip, ips)

Francis Girard

On Friday 4 February 2005 20:38, rbt wrote:
 Thanks guys... list comprehension it is!

 Bill Mill wrote:
 ips = [ip for ip in ips if '255' not in ip]
 ips

--
http://mail.python.org/mailman/listinfo/python-list


Re: Next step after pychecker

2005-02-02 Thread Francis Girard
On Wednesday 2 February 2005 00:28, Philippe Fremy wrote:
 I really hope that pypy will provide that kind of choice. Give me python
 with eiffel like contracts, super speed optimisation thank to type
 inference and I will be super happy.

That's also my dream. Type inference not so much for speed but for safety. 
There are type fans (like most skilled C++ programmers), anti-type supporters 
(like the talented Alex Martelli). Type inference sits in the middle.

I didn't understand how pypy intends to support type inference. I read something 
about automatically translating Python into the "smooth blend" Pyrex. I am 
not sure what it exactly means and how they plan to face the problems we 
foresee.

Francis Girard

--
http://mail.python.org/mailman/listinfo/python-list


Re: Next step after pychecker

2005-02-02 Thread Francis Girard
To complete Philippe's answer :

As Bird and Wadler continue :

"The major consequence of the discipline imposed by strong-typing is that any 
expression which cannot be assigned a sensible type is regarded as not 
being well-formed and is rejected by the computer before evaluation. Such 
expressions have _no_ value: they are simply regarded as illegal."

But Skip, I am sure that you can easily find an example by yourself. For 
example, replace "+" by a function that does different things depending on 
its argument type.

Francis Girard

On Wednesday 2 February 2005 10:27, Philippe Fremy wrote:
 Skip Montanaro wrote:
  Francis> "Every well-formed expression of the language can be assigned a
  Francis> type that can be deduced from the constituents of the expression
  Francis> alone." Bird and Wadler, Introduction to Functional
  Francis> Programming, 1988
 
  Francis> This is certainly not the case for Python since one and the
  Francis> same variable can have different types depending upon the
  Francis> execution context. Example :
 
  Francis> 1- if a is None:
  Francis> 2-   b = 1
  Francis> 3- else:
  Francis> 4-   b = "Phew"
  Francis> 5- b = b + 1
 
  Francis> One cannot statically determine the type of b by examining
  Francis> the line 5- alone.
 
  Do you have an example using a correct code fragment?  It makes no sense
  to infer types in code that would clearly raise runtime errors:

 On the contrary, the point of type inference is to detect such errors.
 If the program was always well-formed, there would be no point in
 developing a type inference tool.

   Philippe

--
http://mail.python.org/mailman/listinfo/python-list


Re: Next step after pychecker

2005-02-01 Thread Francis Girard
Hi,

I do not want to discourage Philippe Fremy but I think that this would be very 
very difficult to do without modifying Python itself.

What FP languages rely upon to achieve type inference is a feature named 
"strong typing". A clear definition of strong typing is :

"Every well-formed expression of the language can be assigned a type that can 
be deduced from the constituents of the expression alone." Bird and Wadler, 
Introduction to Functional Programming, 1988

This is certainly not the case for Python since one and the same variable can 
have different types depending upon the execution context. Example :

1- if a is None:
2-   b = 1
3- else:
4-   b = Phew
5- b = b + 1

One cannot statically determine the type of b by examining the line 5- alone.

Therefore, Fremy's dream can very well turn into some very complex expert system 
to make educated warnings. Being "educated" is a lot harder than being 
"brutal".

It's funny that what mainly prevents us from easily doing a type inferencer is 
at this very moment discussed with almost religious flames in the "variable 
declaration" thread.

Anyway, strong typing as defined above would change the Python language in 
some of its fundamental design choices. It would certainly be valuable to attempt 
the experiment, and rename the new language ("mangoose" would be a pretty name).

Francis Girard
FRANCE

On Tuesday 1 February 2005 16:49, Diez B. Roggisch wrote:
  But it can be useful to restrict type variety in certain situations
  e.g. prime number calculation :) And it would probably also be useful
  to check violations of restrictions before running the program in
  normal mode.

 But that's what (oca)ml and the like do - they exactly don't force you to
 specify a type, but a variable has an also variable type, that gets
 inferned upon the usage and is then fixed.

 --
 Regards,

 Diez B. Roggisch

--
http://mail.python.org/mailman/listinfo/python-list


Re: Next step after pychecker [StarKiller?]

2005-02-01 Thread Francis Girard
Hi,

Do you have some more pointers to the StarKiller project ? According to the 
paper some implementation of this very interesting project exists.

Thank you

Francis Girard

On Tuesday 1 February 2005 11:21, Sylvain Thenault wrote:
 On Tue, 01 Feb 2005 05:18:12 +0100, Philippe Fremy wrote:
  Hi,

 Hi

  I would like to develop a tool that goes one step further than pychecker
  to ensure python program validity. The idea would be to get close to what
  people get on ocaml: a static verification of all types of the program,
  without any kind of variable declaration. This would definitely brings a
  lot of power to python.
 
  The idea is to analyse the whole program, identify constraints on
  function arguments and check that these constraints are verified by other
  parts of the program.

 Did you take a look at the starkiller [1] and pypy projects [2] ?

  What is in your opinion the best tool to achieve this ? I had an
  extensive look at pychecker, and it could certainly be extended to do
  the job. Things that still concern me are that it works on the bytecode,
  which prevents it from working with jython and the new .NET python.
 
  I am currently reading the documentation on AST and visitor, but I am
  not sure that this will be the best tool either. The AST seems quite
  deep and I am afraid that it will make the analysis quite slow and
  complicated.

 are you talking about the ast returned by the parser module, or the ast
 from the compiler module ? The former is a higher abstraction, using
 specific class instances in the tree, and most importantly with all the
 parsing junk removed. See [3]. You may also be interested in pylint
 [4] which is a pychecker like program built in top of the compiler ast,
 and so doesn't require actual import of the analyzed code. However it's
 not yet as advanced as pychecker regarding bug detection.

 And finally as another poster said you should probably keep an eye open
 on the python 2.5 ast branch work...

 Hope that helps !

 [1]http://www.python.org/pycon/dc2004/papers/1/paper.pdf)
 [2]http://codespeak.net/pypy/index.cgi?home
 [3]http://www.python.org/doc/current/lib/module-compiler.ast.html
 [4]http://www.logilab.org/projects/pylint

 --
 Sylvain Thénault   LOGILAB, Paris (France).

 http://www.logilab.com   http://www.logilab.fr  http://www.logilab.org

--
http://mail.python.org/mailman/listinfo/python-list


Re: Classical FP problem in python : Hamming problem

2005-01-28 Thread Francis Girard
On Friday 28 January 2005 08:27, Paul Rubin wrote:
 Francis Girard [EMAIL PROTECTED] writes:
  Thank you Nick and Steven for the idea of a more generic imerge.

 If you want to get fancy, the merge should use a priority queue (like
 in the heapsort algorithm) instead of a linear scan through the
 incoming iters, to find the next item to output.  That lowers the
 running time to O(n log k) instead of O(n*k), where k is the number of
 iterators and n is the length.

The goal was only to show some FP constructs on small problems. Here the number 
of iterators is intended to be fairly small.

Otherwise, yes, you could exploit the fact that, at one loop execution, you 
have already seen most of the elements at the previous loop execution, storing 
them in some heap structure and therefore not having to rescan the whole list of 
already-seen iterator values at each loop execution.
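
For the curious, here is roughly what the heap-based merge could look like 
with the standard heapq module (a sketch only ; duplicates are kept, so one 
of the deduplicating wrappers discussed in this thread would still be needed 
on top) :

-- BEGIN SNAP
import heapq

def imerge_heap(*iterables):
  ## seed the heap with the first element of each iterable
  heap = []
  for i, it in enumerate(map(iter, iterables)):
    try:
      heap.append((it.next(), i, it))
    except StopIteration:
      pass
  heapq.heapify(heap)
  while heap:
    value, i, it = heap[0]
    yield value
    try:
      ## O(log k) : replace the consumed element by the next from the same iterable
      heapq.heapreplace(heap, (it.next(), i, it))
    except StopIteration:
      heapq.heappop(heap)
-- END SNAP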

Thank you

Francis Girard

--
http://mail.python.org/mailman/listinfo/python-list


Re: Question about 'None'

2005-01-28 Thread Francis Girard
On Friday 28 January 2005 22:54, jfj wrote:
 Francis Girard wrote:
  What was the goal behind this rule ?

 If you have a list which contains integers, strings, tuples, lists and
 dicts and you sort it and print it, it will be easier to detect what
 you're looking for:)


 G.

Mmm. Certainly not one of my top requirements.

Anyway, it would be better to separately provide a compare function that 
orders elements according to their types instead of making the comparison 
operators' default semantics somewhat obscure (and dangerous).

Thank you
Francis Girard

--
http://mail.python.org/mailman/listinfo/python-list


Re: Classical FP problem in python : Hamming problem

2005-01-27 Thread Francis Girard
On Thursday 27 January 2005 10:30, Nick Craig-Wood wrote:
 Francis Girard [EMAIL PROTECTED] wrote:
   Thank you Nick and Steven for the idea of a more generic imerge.

 You are welcome :-)  [It came to me while walking the children to school!]


Sometimes fresh air and children's purity are all it takes. Much better than 
coffee, cigarettes and a flat screen.

 [snip]

   class IteratorDeiterator:
       def __init__(self, iterator):
           self._iterator = iterator.__iter__()
           self._firstVal = None ## Avoid consuming if not requested from outside
                                 ## Works only if iterator itself can't return None

 You can use a sentinel here if you want to avoid the "can't return
 None" limitation.  For a sentinel you need an object your iterator
 couldn't possibly return.  You can make one up, eg


Great idea. I'll use it.

self._sentinel = object()
self._firstVal = self._sentinel

 Or you could use self (but I'm not 100% sure that your recursive
 functions wouldn't return it!)

 def __iter__(self): return self
 
 def next(self):
   valReturn = self._firstVal
   if valReturn is None:

 and

if valReturn is self._sentinel:
 valReturn = self._iterator.next()
   self._firstVal = None

self._firstVal = self._sentinel

 etc..

 [snip more code]

 Thanks for some more examples of fp-style code.  I find it hard to get
 my head round so its been good exercise!

Introduction to functional programming
Richard Bird and Philip Wadler
Prentice Hall
1988

This is the very best intro I ever read. The book is without hype, doesn't 
show its age and is language neutral.
Authors are world leaders in the field today. Only really strong guys have the 
kindness to do understandable introductions without trying to hide the 
difficulties (because they are strong enough to face them with simplicity).

It's been a real pleasure.

Regards,

Francis Girard
FRANCE


 --
 Nick Craig-Wood [EMAIL PROTECTED] -- http://www.craig-wood.com/nick

--
http://mail.python.org/mailman/listinfo/python-list


Re: Question about 'None'

2005-01-27 Thread Francis Girard
On Thursday 27 January 2005 20:16, Steven Bethard wrote:
 flamesrock wrote:
  The statement (1 > None) is false (or any other value above 0). Why is
  this?

 What code are you executing?  I don't get this behavior at all:

 py 100 > None
 True
 py 1 > None
 True
 py 0 > None
 True
 py -1 > None
 True
 py -100 > None
 True


Wow ! What is it that is compared ? I think it's the references (i.e. the 
addresses) that are compared. The None reference may map to the physical 0x0 
address whereas 100 is internally interpreted as an object whose 
reference (i.e. address) exists and is therefore greater than 0x0.

Am I interpreting correctly ?

  (The reason I ask is sortof unrelated. I wanted to use None as a
  variable for which any integer, including negative ones have a greater
  value so that I wouldn't need to implement any tests or initializations
  for a loop that finds the maximum of a polynomial between certain x
  values. Anything is greater than nothing, no?)

 Yup, that's the behavior I get with None.

 Steve

--
http://mail.python.org/mailman/listinfo/python-list


Re: Question about 'None'

2005-01-27 Thread Francis Girard
Oops, I misunderstood what you said. I understood that it was now the case 
that objects of different types are not comparable by default whereas you 
meant it as a planned feature for version 3.

I really hope that it will indeed be the case for version 3.

Francis Girard

On Thursday 27 January 2005 21:29, Steven Bethard wrote:
 Francis Girard wrote:
  On Thursday 27 January 2005 20:16, Steven Bethard wrote:
 flamesrock wrote:
 The statement (1 > None) is false (or any other value above 0). Why is
 this?
 
 What code are you executing?  I don't get this behavior at all:
 
  py 100 > None
  True
  py 1 > None
  True
  py 0 > None
  True
  py -1 > None
  True
  py -100 > None
  True
 
  Wow ! What is it that is compared ? I think it's the references (i.e.
  the addresses) that are compared. The None reference may map to the
  physical 0x0 address whereas 100 is internally interpreted as an object
  whose reference (i.e. address) exists and is therefore greater than
  0x0.
 
  Am I interpreting correctly ?

 Actually, I believe None is special-cased to work like this.  From
 object.c:

 static int
 default_3way_compare(PyObject *v, PyObject *w)
 {
   ...
   if (v->ob_type == w->ob_type) {
   ...
   Py_uintptr_t vv = (Py_uintptr_t)v;
   Py_uintptr_t ww = (Py_uintptr_t)w;
   return (vv < ww) ? -1 : (vv > ww) ? 1 : 0;
   }
   ...
   /* None is smaller than anything */
   if (v == Py_None)
   return -1;
   if (w == Py_None)
   return 1;
   ...
 }

 So None being smaller than anything (except itself) is hard-coded into
 Python's compare routine.  My suspicion is that even if/when objects of
 different types are no longer comparable by default (as has been
 suggested for Python 3.0), None will still compare as smaller than
 anything...

 Steve

--
http://mail.python.org/mailman/listinfo/python-list


Re: Question about 'None'

2005-01-27 Thread Francis Girard
On Thursday 27 January 2005 22:07, Steven Bethard wrote:
 Francis Girard wrote:
  On Thursday 27 January 2005 21:29, Steven Bethard wrote:
 So None being smaller than anything (except itself) is hard-coded into
 Python's compare routine.  My suspicion is that even if/when objects of
 different types are no longer comparable by default (as has been
 suggested for Python 3.0), None will still compare as smaller than
 anything...
 
  Well, here Python doesn't seem to confirm what you're saying :
 >>> a = 10
 >>> b = "10"
 >>> a < b
  True
 >>> b < a
  False
 >>> id(a)
  1077467584
 >>> id(b)
  134536516

 Actually, this code doesn't contradict what I said at all.  I said that
 None (and only None) was special-cased.


You're right. I just posted too soon.
Sorry.

   It really looks like the addresses are compared when objects are of
   different types if there is no user-made __cmp__ or __lt__ specification
   to compare objects of different types.

 Use the source Luke! ;)  Download it from CVS and take a look.  The end
 of default_3way_compare:


I just want to know the specification, no matter how it is implemented.

 static int
 default_3way_compare(PyObject *v, PyObject *w)
 {
   ...
   /* None is smaller than anything */
   if (v == Py_None)
   return -1;
   if (w == Py_None)
   return 1;

   /* different type: compare type names; numbers are smaller */
   if (PyNumber_Check(v))
   vname = "";
   else
   vname = v->ob_type->tp_name;
   if (PyNumber_Check(w))
   wname = "";
   else
   wname = w->ob_type->tp_name;
   c = strcmp(vname, wname);
   if (c < 0)
   return -1;
   if (c > 0)
   return 1;
   /* Same type name, or (more likely) incomparable numeric types */
   return ((Py_uintptr_t)(v->ob_type) < (
   Py_uintptr_t)(w->ob_type)) ? -1 : 1;
 }

 So it looks like Python uses the type name.  Testing this:

 py A = type('A', (object,), {})
 py Z = type('Z', (object,), {})
 py lst = [str('a'), dict(b=2), tuple(), Z(), A()]
 py [type(o).__name__ for o in lst]
 ['str', 'dict', 'tuple', 'Z', 'A']
 py sorted(type(o).__name__ for o in lst)
 ['A', 'Z', 'dict', 'str', 'tuple']
 py sorted(lst)
 [<__main__.A object at 0x011E29D0>, <__main__.Z object at 0x011E29F0>,
 {'b': 2}, 'a', ()]

 Yup.  Looks about right.  (Note that the source code also special-cases
 numbers...)

 So, the order is consistent, but arbitrary, just as Fredrik Lundh
 pointed out in the documentation.


Ok. Thank you,

Francis Girard


 Steve

--
http://mail.python.org/mailman/listinfo/python-list


Re: Question about 'None'

2005-01-27 Thread Francis Girard
Very complete explanation.
Thank you

Francis Girard

On Thursday 27 January 2005 22:15, Steven Bethard wrote:
 Francis Girard wrote:
  I see. There is some rule stating that all the strings are greater than
  ints and smaller than lists, etc.

 Yes, that rule being to compare objects of different types by their type
 names (falling back to the address of the type object if the type names
 are the same, I believe).  Of course, this is arbitrary, and Python does
 not guarantee you this ordering -- it would not raise backwards
 compatibility concerns to, say, change the ordering in Python 2.5.

  What was the goal behind this rule ?

 I believe at the time, people thought that comparison should be defined
 for all Python objects.  Guido has since said that he wishes the
 decision hadn't been made this way, and has suggested that in Python
 3.0, objects of unequal types will not have a default comparison.

 Probably this means ripping the end off of default_3way_compare and
 raising an exception.  As Fredrik Lundh pointed out, they could, if they
 wanted to, also rip out the code that special-cases None too.

 Steve

--
http://mail.python.org/mailman/listinfo/python-list


Re: python without OO

2005-01-26 Thread Francis Girard
On Wednesday 26 January 2005 02:43, Jeff Shannon wrote:

 In statically typed languages like C++ and Java, inheritance trees are
 necessary so that you can appropriately categorize objects by their
 type.  Since you must explicitly declare what type is to be used
 where, you may need fine granularity of expressing what type a given
 object is, which requires complex inheritance trees.  In Python, an
 object is whatever type it acts like -- behavior is more important
 than declared type, so there's no value to having a huge assortment of
 potential types.  Deep inheritance trees only happen when people are
 migrating from Java. ;)

 Jeff Shannon
 Technician/Programmer
 Credit International

These lines precisely express my thoughts. Most of the difficulties in OO in 
Java/C++ come from the almighty goal of preserving type safety. Type 
safety is certainly a very desirable goal, but it sometimes leads to very 
complex code only to achieve it. The price is just too high. The design 
patterns that were supposed to save time through high code reuse often 
become a maintenance nightmare, something that no one in the company can 
understand except a few. Instead of trying to fix some domain-specific code, 
you end up trying to fix supposedly highly reusable code that, oh well, you 
have to adapt. This is especially true if the system had been designed by a 
big OO-design-patterns-enthusiast programmer geek.

I am not saying that design patterns are bad. I think that they are an 
invaluable gift to OO. I'm only saying that they have indeed a pernicious 
and perverse effect in the real-world industry. People become religious about 
them and forget to think about a simple solution ...

Being dynamically typed, these kinds of monster patterns are much less likely. 
And the Python philosophy and culture is the very contrary of that trend.

I've been involved in C++/Java projects for the last 8 years. The first time I 
met Python, I was frightened by its lack of static type safety. But over 
time, I've been convinced by seeing clean Python code over and over 
again. In the end, I could appreciate that being simple leads the way to fewer 
bugs, the very bugs type safety was supposed to prevent ... Coming from the C++ 
community, Python has been just like fresh air. It changed me from the 
nightmare described in that discussion thread. When I think that companies 
pay big money for these kinds of monsters after having seen a few ppt slides 
about them, it makes me shiver.

Regards,

Francis Girard
FRANCE


--
http://mail.python.org/mailman/listinfo/python-list


Re: python without OO

2005-01-26 Thread Francis Girard
On Wednesday 26 January 2005 20:47, PA wrote:

 Projects fail for many reasons, but seldom because one language is
 better or worse than another one.

I think you're right. But you have to choose the right tools that fit your 
needs. But I think that's what you meant anyway.


 Cheers


Cheers too,

Francis Girard
FRANCE

 --
 PA
 http://alt.textdrive.com/

--
http://mail.python.org/mailman/listinfo/python-list


Re: python without OO

2005-01-26 Thread Francis Girard
On Wednesday 26 January 2005 21:44, PA wrote:
 On Jan 26, 2005, at 21:35, Francis Girard wrote:
  Projects fail for many reasons, but seldom because one language is
  better or worse than another one.
 
  I think you're right. But you have to choose the right tools that fit
  your
  needs. But I think that's what you meant anyway.

 Yes. But even with the best tool and the best intents, projects
 still fail. In fact, most IT projects are considered failures:

 http://www.economist.com/business/PrinterFriendly.cfm?Story_ID=3423238

Well, let's go back home for some gardening. My wife will be happy.


 Cheers

 --
 PA
 http://alt.textdrive.com/

--
http://mail.python.org/mailman/listinfo/python-list


Re: Classical FP problem in python : Hamming problem

2005-01-26 Thread Francis Girard
On Tuesday 25 January 2005 09:01, Michael Spencer wrote:
 Francis Girard wrote:
  The following implementation is even more telling, as it makes
  self-evident and almost mechanical how to translate algorithms that run
  after their tail from recursion to tee usage :

 Thanks, Francis and Jeff for raising a fascinating topic.  I've enjoyed
 trying to get my head around both the algorithm and your non-recursive
 implementation.


Yes, it's been fun.

 Here's a version of your implementation that uses a helper class to make
 the algorithm itself prettier.

 from itertools import tee, imap

 def hamming():
     def _hamming():
         yield 1
         for n in imerge(2 * hamming, imerge(3 * hamming, 5 * hamming)):
             yield n
     hamming = Tee(_hamming())
     return iter(hamming)


 class Tee(object):
     """Provides an independent iterator (using tee) on every iteration
     request. Also implements lazy iterator arithmetic."""
     def __init__(self, iterator):
         self.iter = tee(iterator, 1)[0]
     def __iter__(self):
         return self.iter.__copy__()
     def __mul__(self, number):
         return imap(lambda x: x * number, self.__iter__())
     __rmul__ = __mul__  ## assumed lost in transit: needed so 2 * hamming (int on the left) works

 def imerge(xs, ys):
     x = xs.next()
     y = ys.next()
     while True:
         if x == y:
             yield x
             x = xs.next()
             y = ys.next()
         elif x < y:
             yield x
             x = xs.next()
         else: # if y < x:
             yield y
             y = ys.next()

 >>> hg = hamming()
 >>> for i in range(10000):
 ...     n = hg.next()
 ...     if i % 1000 == 0: print i, n
 ...
 0 1
 1000 5184
 2000 81
 3000 27993600
 4000 4707158941350
 5000 5096079360
 6000 4096000
 7000 2638827906662400
 8000 143327232
 9000 680244480



Interesting idea.

 Regards

 Michael

--
http://mail.python.org/mailman/listinfo/python-list


Re: Classical FP problem in python : Hamming problem

2005-01-26 Thread Francis Girard
On Tuesday 25 January 2005 19:52, Steven Bethard wrote:

Thank you Nick and Steven for the idea of a more generic imerge.

To work with the Hamming problem, the imerge function _must_not_ allow 
duplicates, and we can assume all of the specified iterables are of the same 
size, i.e. infinite !

Therefore, Nick's solution fulfills the need.  But it is admittedly confusing 
to call the function imerge when it doesn't merge everything (including 
duplicates). Anyway, both solutions shed new light and bring us a bit 
farther.

That's the beauty of many brains from all over the world collaborating.
Really, it makes me emotional thinking about it.

For the imerge function, what we really need to make the formulation clear is 
a way to look at the next element of an iterable without consuming it. Or 
else, a way to put back consumed elements at the front of an iteration flow, 
much like the list constructors in FP languages like Haskell.

It is simple to encapsulate an iterator inside another iterator class that 
does just that. Here's one. The additional fst method returns the next 
element to consume without consuming it, and the isBottom method checks whether 
there is something left to consume from the iteration flow, without actually 
consuming anything. I named the class IteratorDeiterator for lack of imagination :

-- BEGIN SNAP
class IteratorDeiterator:
  def __init__(self, iterator):
    self._iterator = iterator.__iter__()
    self._firstVal = None ## Avoid consuming if not requested from outside
                          ## Works only if iterator itself can't return None

  def __iter__(self): return self

  def next(self):
    valReturn = self._firstVal
    if valReturn is None:
      valReturn = self._iterator.next()
    self._firstVal = None
    return valReturn

  def fst(self):
    if self._firstVal is None:
      self._firstVal = self._iterator.next()
    return self._firstVal

  def isBottom(self):
    if self._firstVal is None:
      try: self._firstVal = self._iterator.next()
      except StopIteration:
        return True
    return False
-- END SNAP
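
A quick interactive sanity check of the peeking behaviour :

>>> it = IteratorDeiterator(iter([1, 2]))
>>> it.fst(), it.fst()   ## peeking twice consumes nothing
(1, 1)
>>> it.next()
1
>>> it.isBottom()
False
>>> it.next()
2
>>> it.isBottom()
True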

Now the imerge_infinite function, which merges infinite lists while allowing 
duplicates, is quite straightforward :

-- BEGIN SNAP
def imerge_infinite(*iterators):
  vItTopable = [IteratorDeiterator(it) for it in iterators]
  while vItTopable:
    yield reduce(lambda itSmallest, it:
                   itSmallest.fst() > it.fst() and it or itSmallest,
                 vItTopable).next()
-- END SNAP

To merge finite lists of possibly different lengths is a bit more work as you 
have to eliminate iterators that reached the bottom before the others. This 
is quite easily done by simply filtering them out. 
The imerge_finite function below merges lists of possibly different sizes.

-- BEGIN SNAP
def imerge_finite(*iterators):
  vItDeit = [IteratorDeiterator(it) for it in iterators]
  vItDeit = filter(lambda it: not it.isBottom(), vItDeit)
  while vItDeit:
    yield reduce(lambda itSmallest, it:
                   itSmallest.fst() > it.fst() and it or itSmallest,
                 vItDeit).next()
    vItDeit = filter(lambda it: not it.isBottom(), vItDeit)
-- END SNAP
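
A quick check with finite lists of different lengths (duplicates are kept, 
as intended at this stage) :

>>> list(imerge_finite([1, 3, 5], [2, 3], [4]))
[1, 2, 3, 3, 4, 5]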


Now, we want versions of these two merge functions that do not allow 
duplicates. Building upon what we've already done in a semi FP way, we just 
filter out the duplicates from the iteration streams returned by the above 
functions. The job is the same for the infinite and finite versions, hence 
the imerge_nodup generic function.

-- BEGIN SNAP
from itertools import dropwhile

def imerge_nodup(fImerge, *iterators):
  it = fImerge(*iterators)
  el = it.next()
  yield el
  while True:
    el = dropwhile(lambda _next: _next == el, it).next()
    yield el

imerge_inf_nodup = \
  lambda *iterators: imerge_nodup(imerge_infinite, *iterators)
imerge_finite_nodup = \
  lambda *iterators: imerge_nodup(imerge_finite, *iterators)
-- END SNAP
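
And the deduplicating version on the same finite inputs :

>>> list(imerge_finite_nodup([1, 3, 5], [2, 3], [4]))
[1, 2, 3, 4, 5]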

I used the lambda notation for imerge_inf_nodup and imerge_finite_nodup to 
avoid another useless for-yield loop. Here the notation really just expresses 
function equivalence with a bound argument (it would be great to have a 
notation for this in Python, i.e. binding a function argument to yield a new 
function).

The merge function to use with hamming() is imerge_inf_nodup.

Regards,

Francis Girard
FRANCE

 Nick Craig-Wood wrote:
  Steven Bethard [EMAIL PROTECTED] wrote:
  Nick Craig-Wood wrote:
 Thinking about this some more leads me to believe a general purpose
 imerge taking any number of arguments will look neater, eg
 
  def imerge(*generators):
      values = [ g.next() for g in generators ]
      while True:
          x = min(values)
          yield x
          for i in range(len(values)):
              if values[i] == x:
                  values[i] = generators[i].next()
 
  This kinda looks like it dies after the first generator is exhausted,
  but I'm not certain.
 
  Yes it will stop iterating then (rather like zip() on lists of unequal
  size). Not sure what the specification should

Re: Another scripting language implemented into Python itself?

2005-01-25 Thread Francis Girard
Hi,

I'm really not sure, but there might be some way to embed JavaScript within 
Jython. Or to run Jython from inside Java, much like the jEdit editor. You 
then have Python to make some glue code between the C++ core and the 
JavaScript. JavaScript must be secure since it runs in browsers everywhere, 
even on my worst sex web sites !

I'm not really sure why the original poster wants to use Python; I think it's 
to make the glue between C++ and the scripting engine. Nor am I really sure 
why Java runs inside my browser ...

Francis Girard
FRANCE

On Tuesday 25 January 2005 18:08, Cameron Laird wrote:
 In article [EMAIL PROTECTED],
 Carl Banks [EMAIL PROTECTED] wrote:
   .
   .
   .

  Python, or Perl, or TCL, or Ruby, or PHP,
 
 Not PHP.  PHP is one of the better (meaning less terrible) examples of
 what happens when you do this sort of thing, which is not saying a lot.
 PHP was originally not much more than a template engine with some
 crude operations and decision-making ability.  Only its restricted
 problem domain has saved it from the junkheap where it belongs.
 
 TCL isn't that great in this regard, either, as it makes a lot of
 common operations that ought to be very simple terribly unweildy.

   .
   .
   .
 I've lost track of the antecedent by the time of our arrival at
 this regard.  I want to make it clear that, while Tcl certainly
 is different from C and its imitators, and, in particular, insists
 that arithmetic be expressed more verbosely than in most languages,
 the cause is quite distinct from the imperfections perceived in
 PHP.  PHP is certainly an instance of scope creep in its semantics.
 Tcl was designed from the beginning, though, and has budged little in
 over a decade in its fundamentals; Tcl simply doesn't bother to make
 a lot of common operations ... concise.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Classical FP problem in python : Hamming problem

2005-01-24 Thread Francis Girard
Ok, I think that the bottom line is this :

For all the algorithms that run after their tail in an FP way, like the 
Hamming problem, or the Fibonacci sequence (but unlike the Sieve of Eratosthenes 
-- there's a subtle difference), i.e. all those algorithms that typically 
rely upon recursion to get the beginning of the generated elements in order 
to generate the next one ...

... so for this family of algorithms, we have, somehow, to abandon recursion, 
and to explicitly maintain one or many lists of what has been generated.

One solution for this is suggested in test_generators.py. But I think that 
this solution is evil as it looks like recursion is used but it is not, and it 
depends heavily on how the m235 function (for example) is invoked. 
Furthermore, it is _NOT_ memory efficient as claimed : it leaks ! It 
internally maintains a list of generated results that grows forever, while it 
is useless to keep results that have already been consumed. Just for a gross 
estimate, on my system, the percentage of memory usage for this program grows 
rapidly, reaching 21.6 % after 5 minutes. CPU usage varies between 31%-36%.
Ugly and inefficient.

The solution of Jeff Epler is far more elegant. The itertools.tee function 
does just what we want. It internally maintains a list of what has been 
generated and deletes the consumed elements as the algorithm unrolls. To follow 
with my gross estimate, memory usage grows from 1.2% to 1.9% after 5 minutes 
(probably only affected by the growing size of the huge integers). CPU usage 
varies between 27%-29%.
Beautiful and efficient.
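
For readers who haven't met tee yet, a tiny illustration of this buffering 
behaviour (Python 2.4) :

>>> from itertools import tee
>>> it1, it2 = tee(iter([1, 2, 3]))
>>> it1.next(), it1.next()  ## it2 has consumed nothing yet
(1, 2)
>>> it2.next()              ## served from tee's internal buffer
1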

You might think that we shouldn't be that fussy about memory usage on 
generating 100-digit numbers, but we're talking about a whole family of 
useful FP algorithms ; and the best way to implement them should be 
established.

For this family of algorithms, itertools.tee is the way to go.

I think that the semi-tutorial in test_generators.py should be updated to 
use tee. Or, at least, a severe warning comment should be written.

Thank you,

Francis Girard
FRANCE

On Sunday 23 January 2005 23:27, Jeff Epler wrote:
 Your formulation in Python is recursive (hamming calls hamming()) and I
 think that's why your program gives up fairly early.

 Instead, use itertools.tee() [new in Python 2.4, or search the internet
 for an implementation that works in 2.3] to give a copy of the output
 sequence to each multiply by N function as well as one to be the
 return value.

 Here's my implementation, which matched your list early on but
 effortlessly reached larger values.  One large value it printed was
 6412351813189632 (a Python long) which indeed has only the distinct
 prime factors 2 and 3. (2**43 * 3**6)

 Jeff

 from itertools import tee
 import sys

 def imerge(xs, ys):
     x = xs.next()
     y = ys.next()
     while True:
         if x == y:
             yield x
             x = xs.next()
             y = ys.next()
         elif x < y:
             yield x
             x = xs.next()
         else:
             yield y
             y = ys.next()

 def hamming():
     def _hamming(j, k):
         yield 1
         hamming = generators[j]
         for i in hamming:
             yield i * k
     generators = []
     generator = imerge(imerge(_hamming(0, 2), _hamming(1, 3)), _hamming(2, 5))
     generators[:] = tee(generator, 4)
     return generators[3]

 for i in hamming():
     print i,
     sys.stdout.flush()

--
http://mail.python.org/mailman/listinfo/python-list


Re: Classical FP problem in python : Hamming problem

2005-01-24 Thread Francis Girard
The following implementation is even more telling, as it makes self-evident 
and almost mechanical how to translate algorithms that run after their tail 
from recursion to tee usage :

*** BEGIN SNAP
from itertools import tee, imap
import sys

def imerge(xs, ys):
  x = xs.next()
  y = ys.next()
  while True:
    if x == y:
      yield x
      x = xs.next()
      y = ys.next()
    elif x < y:
      yield x
      x = xs.next()
    else:
      yield y
      y = ys.next()

  
def hamming():
  def _hamming():
    yield 1
    hamming2 = hammingGenerators[0]
    hamming3 = hammingGenerators[1]
    hamming5 = hammingGenerators[2]
    for n in imerge(imap(lambda h: 2*h, iter(hamming2)),
                    imerge(imap(lambda h: 3*h, iter(hamming3)),
                           imap(lambda h: 5*h, iter(hamming5)))):
      yield n
  hammingGenerators = tee(_hamming(), 4)
  return hammingGenerators[3]
  
for i in hamming():
  print i,
  sys.stdout.flush()
*** END SNAP

Here's an implementation of the fibonacci sequence that uses tee : 

*** BEGIN SNAP
from itertools import tee
import sys

def fib():
  def _fib():
    yield 1
    yield 1
    curGen = fibGenerators[0]
    curAheadGen = fibGenerators[1]
    curAheadGen.next()
    while True:
      yield curGen.next() + curAheadGen.next()
  fibGenerators = tee(_fib(), 3)
  return fibGenerators[2]
  
for n in fib():
  print n,
  sys.stdout.flush()
*** END SNAP

Francis Girard
FRANCE


On Monday 24 January 2005 14:09, Francis Girard wrote:
 Ok, I think that the bottom line is this :

 For all the algorithms that run after their tail in an FP way, like the
 Hamming problem, or the Fibonacci sequence (but unlike the Sieve of
 Eratosthenes -- there's a subtle difference), i.e. all those algorithms that
 typically rely upon recursion to get the beginning of the generated
 elements in order to generate the next one ...

 ... so for this family of algorithms, we have, somehow, to abandon
 recursion, and to explicitly maintain one or many lists of what has been
 generated.

 One solution for this is suggested in test_generators.py. But I think
 that this solution is evil as it looks like recursion is used but it is not
 and it depends heavily on how the m235 function (for example) is invoked.
 Furthermore, it is _NOT_ memory efficient as claimed : it leaks ! It
 internally maintains a list of generated results that grows forever while
 it is useless to keep results that have already been consumed. Just for a
 gross estimate, on my system, percentage of memory usage for this program
 grows rapidly, reaching 21.6 % after 5 minutes. CPU usage varies between
 31%-36%. Ugly and inefficient.

 The solution of Jeff Epler is far more elegant. The itertools.tee
 function does just what we want. It internally maintains a list of what has
 been generated and deletes the consumed elements as the algorithm unrolls. To
 follow with my gross estimate, memory usage grows from 1.2% to 1.9% after 5
 minutes (probably only affected by the growing size of Huge Integer). CPU
 usage varies between 27%-29%.
 Beautiful and efficient.

 You might think that we shouldn't be that fussy about memory usage on
 generating 100-digit numbers, but we're talking about a whole family of
 useful FP algorithms ; and the best way to implement them should be
 established.

 For this family of algorithms, itertools.tee is the way to go.

 I think that the semi-tutorial in test_generators.py should be updated to
 use tee. Or, at least, a severe warning comment should be written.

 Thank you,

 Francis Girard
 FRANCE

 On Sunday 23 January 2005 23:27, Jeff Epler wrote:
  Your formulation in Python is recursive (hamming calls hamming()) and I
  think that's why your program gives up fairly early.
 
  Instead, use itertools.tee() [new in Python 2.4, or search the internet
  for an implementation that works in 2.3] to give a copy of the output
  sequence to each multiply by N function as well as one to be the
  return value.
 
  Here's my implementation, which matched your list early on but
  effortlessly reached larger values.  One large value it printed was
  6412351813189632 (a Python long) which indeed has only the distinct
  prime factors 2 and 3. (2**43 * 3**6)
 
  Jeff
 
  from itertools import tee
  import sys
 
  def imerge(xs, ys):
      x = xs.next()
      y = ys.next()
      while True:
          if x == y:
              yield x
              x = xs.next()
              y = ys.next()
          elif x < y:
              yield x
              x = xs.next()
          else:
              yield y
              y = ys.next()
 
  def hamming():
      def _hamming(j, k):
          yield 1
          hamming = generators[j]
          for i in hamming:
              yield i * k
      generators = []
      generator = imerge(imerge(_hamming(0, 2), _hamming(1, 3)), _hamming(2, 5))
      generators[:] = tee(generator, 4)
      return generators[3]
 
  for i in hamming():
      print i,
      sys.stdout.flush()

--
http

Re: Classical FP problem in python : Hamming problem

2005-01-24 Thread Francis Girard
Ok I'll submit the patch with the prose pretty soon.
Thank you
Francis Girard
FRANCE

On Tuesday 25 January 2005 04:29, Tim Peters wrote:
 [Francis Girard]

  For all the algorithms that run after their tail in an FP way, like the
  Hamming problem, or the Fibonacci sequence (but unlike the Sieve of
  Eratosthenes -- there's a subtle difference), i.e. all those algorithms
  that typically rely upon recursion to get the beginning of the generated
  elements in order to generate the next one ...
 
  ... so for this family of algorithms, we have, somehow, to abandon
  recursion, and to explicitly maintain one or many lists of what has been
  generated.
 
  One solution for this is suggested in test_generators.py. But I think
  that this solution is evil as it looks like recursion is used but it is
  not and it depends heavily on how the m235 function (for example) is
  invoked.

 Well, yes -- Heh.  Here's one way to get a shared list, complete with
 an excruciating namespace renaming trick that was intended to warn you in
 advance that it wasn't pretty <wink>.

  Furthermore, it is _NOT_ memory efficient as claimed : it leaks !

 Yes.  But there are two solutions to the problem in that file, and the
 second one is in fact extremely memory-efficient compared to the first
 one.  Efficiency here was intended in a relative sense.

  It internally maintains a list of generated results that grows forever,
  while it is useless to keep results that have already been consumed. Just
  for a gross estimate, on my system, the percentage of memory usage for this
  program grows rapidly, reaching 21.6 % after 5 minutes. CPU usage varies
  between 31%-36%. Ugly and inefficient.

 Try the first solution in the file for a better understanding of what
 "inefficient" means <wink>.

  The solution of Jeff Epler is far more elegant. The itertools.tee
  function does just what we want. It internally maintains a list of what
  has been generated and deletes the consumed elements as the algorithm
  unrolls. To follow with my gross estimate, memory usage grows from 1.2%
  to 1.9% after 5 minutes (probably only affected by the growing size of
  the huge integers). CPU usage varies between 27%-29%.
  Beautiful and efficient.

 Yes, it is better.  tee() didn't exist when generators (and
 test_generators.py) were written, so of course nothing in the test
 file uses them.

  You might think that we shouldn't be that fussy about memory usage when
  generating 100-digit numbers, but we're talking about a whole family of
  useful FP algorithms; and the best way to implement them should be
  established.

 Possibly -- there really aren't many Pythonistas who care about this.

  For this family of algorithms, itertools.tee is the way to go.
 
  I think that the semi-tutorial in test_generators.py should be updated
  to use tee. Or, at least, a severe warning comment should be written.

 Please submit a patch.  The purpose of that file is to test
 generators, so you should add a third way of doing it, not replace the
 two ways already there.  It should also contain prose explaining why
 the third way is better (just as there's prose now explaining why the
 second way is better than the first).

--
http://mail.python.org/mailman/listinfo/python-list


Classical FP problem in python : Hamming problem

2005-01-23 Thread Francis Girard
Hi,

First,

My deepest thanks to Craig Ringer, Alex Martelli, Nick Coghlan and Terry Reedy 
for having generously answered on the "need help on need help on generator" 
thread. I'm compiling the answers to sketch myself a global picture of 
iterators, generators, iterator-generators and laziness in Python.

In the meantime, I couldn't resist testing the new Python laziness features on 
a classical FP problem, i.e. the Hamming problem.

The name of the game is to produce the sequence of integers satisfying the 
following rules :

(i) The list is in ascending order, without duplicates
(ii) The list begins with the number 1
(iii) If the list contains the number x, then it also contains the numbers 
2*x, 3*x, and 5*x
(iv) The list contains no other numbers.

The algorithm in FP is quite elegant. Simply suppose that the infinite 
sequence has already been produced, then merge the three sequences (2*x, 3*x, 
5*x) obtained from each x in that supposed sequence; this is O(n) complexity 
for n numbers.

I simply love those algorithms that run after their tails.

In haskell, the algorithm is translated as such :

-- BEGIN SNAP
-- hamming.hs

-- Merges two infinite lists
merge :: (Ord a) => [a] -> [a] -> [a]
merge (x:xs) (y:ys)
  | x == y    = x : merge xs ys
  | x < y     = x : merge xs (y:ys)
  | otherwise = y : merge (x:xs) ys

-- Lazily produce the hamming sequence
hamming :: [Integer]
hamming 
  = 1 : merge (map (2*) hamming) (merge (map (3*) hamming) (map (5*) hamming))
-- END SNAP


In Python, I figured out this implementation :

-- BEGIN SNAP
import sys
from itertools import imap

## Merges two infinite lists
def imerge(xs, ys):
  x = xs.next()
  y = ys.next()
  while True:
if x == y:
  yield x
  x = xs.next()
  y = ys.next()
    elif x < y:
      yield x
      x = xs.next()
    else: # if y > x:
  yield y
  y = ys.next()

## Lazily produce the hamming sequence 
def hamming():
  yield 1 ## Initialize the machine
  for n in imerge(imap(lambda h: 2*h, hamming()),
                  imerge(imap(lambda h: 3*h, hamming()),
                         imap(lambda h: 5*h, hamming()))):
yield n
  print "Falling out -- We should never get here !!"

for n in hamming():
  sys.stderr.write("%s " % str(n)) ## stderr for unbuffered output
-- END SNAP


My goal is not to compare Haskell with Python on a classical FP problem, which 
would be genuine stupidity.

Nevertheless, while the Haskell version prints the Hamming sequence for as long 
as I can stand it, and with very little memory consumption, the Python version 
only prints:

$ python hamming.py
1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 40 45 48 50 54 60 64 72 75 
80 81 90 96 100 108 120 125 128 135 144 150 160 162 180 192 200 216 225 240 
243 250 256 270 288 300 320 324 360 375 384 400 405 432 450 480 486 500 512 
540 576 600 625 640 648 675 720 729 750 768 800 810 864 900 960 972 1000 1024 
1080 1125 1152 1200 1215 1250 1280 1296 1350 1440 1458 1500 1536 1600 1620 
1728 1800 1875 1920 1944 2000 2025 2048 2160 2187 2250 2304 2400 2430 2500 
2560 2592 2700 2880 2916 3000 3072 3125 3200 3240 3375 3456 3600 3645 3750 
3840 3888 4000 4050 4096 4320 4374 4500 4608 4800 4860 5000 5120 5184 5400 
5625 5760 5832 6000 6075 6144 6250 6400 6480 6561 6750 6912 7200 7290 7500 
7680 7776 8000 8100 8192 8640 8748 9000 9216 9375 9600 9720 10000 10125 10240 
10368 10800 10935 11250 11520 11664 12000 12150 12288 12500 12800 12960 13122 
13500 13824 14400 14580 15000 15360 15552 15625 16000 16200 16384 16875 17280 
17496 18000 18225 18432 18750 19200 19440 19683 20000 20250 20480 20736 21600 
21870 22500 23040 23328 24000 24300 24576 25000 25600 25920 26244 27000 27648 
28125 28800 29160 30000 30375 30720 31104 31250 32000 32400 32768 32805 33750 
34560 34992 36000 36450 36864 37500 38400 38880 39366 40000 40500 40960 41472 
43200 43740 45000 46080 46656 46875 48000 48600 49152 50000 50625 51200 51840
52488 54000 54675 55296 56250 57600
Processus arrêté

After 57600, my machine begins swapping like crazy and I have to kill the 
python process.

I think I should not get this kind of behaviour, even using recursion, since 
I'm only using lazy constructs all the time. At least, I would expect the 
program to produce many more results before surrendering.

What's going on ?

Thank you

Francis Girard
FRANCE

--
http://mail.python.org/mailman/listinfo/python-list


Re: need help on need help on generator...

2005-01-22 Thread Francis Girard
On Saturday 22 January 2005 10:10, Alex Martelli wrote:
 Francis Girard [EMAIL PROTECTED] wrote:
...

  But besides the fact that generators are either produced with the new
  yield reserved word or by defining the __new__ method in a class
  definition, I don't know much about them.

 Having __new__ in a class definition has nothing much to do with
 generators; it has to do with how the class is instantiated when you
 call it.  Perhaps you mean 'next' (and __iter__)?  That makes instances
 of the class iterators, just like iterators are what you get when you
 call a generator.


Yes, I meant next.


  In particular, I don't know which Python constructs generate a
  generator.

 A 'def' of a function whose body uses 'yield', and in 2.4 the new genexp
 construct.


Ok. I guess I'll have to update to version 2.4 (from 2.3) to follow the 
discussion.

  I know this is now the case for reading lines in a file or with the new
  iterator package.

 Nope, besides the fact that the module you're thinking of is named
 'itertools': itertools uses a lot of C-coded special types, which are
 iterators but not generators.  Similarly, a file object is an iterator
 but not a generator.

  But what else ?

 Since you appear to conflate generators and iterators, I guess the iter
 built-in function is the main one you missed.  iter(x), for any x,
 either raises an exception (if x's type is not iterable) or else returns
 an iterator.


You're absolutely right, I take the one for the other and vice versa. If I 
understand correctly, a generator produces something over which you can 
iterate with the help of an iterator. Can you iterate (in the strict sense 
of an iterator) over something not generated by a generator?
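
To make the distinction concrete, here is a short illustrative Python 2
session (the hex addresses are arbitrary). A list is iterable, and iter() on
it returns a plain list iterator -- an iterator that is not a generator:

>>> it = iter([10, 20, 30])
>>> it
<listiterator object at 0x401e4b2c>
>>> it.next()
10
>>> def g():
...     yield 10
...
>>> g()
<generator object at 0x401e40ac>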


  Does Craig Ringer's answer mean that list
  comprehensions are lazy ?

 Nope, those were generator expressions.

  Where can I find a comprehensive list of all the
  lazy constructions built in Python ?

 That's yet a different question -- at least one needs to add the
 built-in xrange, which is neither an iterator nor a generator but IS
 lazy (a historical artefact, admittedly).

 But fortunately Python's built-ins are not all THAT many, so that's
 about it.

  (I think that to easily distinguish lazy
  from strict constructs is an absolute programmer need -- otherwise you
  always end up wondering when the code is actually executed, as in
  Haskell).

 Encapsulation doesn't let you easily distinguish issues of
 implementation.  For example, the fact that a file is an iterator (its
 items being its lines) doesn't tell you if that's internally implemented
 in a lazy or eager way -- it tells you that you can code afile.next() to
 get the next line, or for line in afile: to loop over them, but does
 not tell you whether the code for the file object is reading each line
 just when you ask for it, or whether it reads all lines before and just
 keeps some state about the next one, or somewhere in between.


You're right. I was much more talking (mistakenly) about lazy evaluation of 
the arguments to a function (i.e. the function begins execution before its 
arguments get evaluated) -- in such a case I think it should be specified 
which arguments are strict and which are lazy -- but I don't think there's 
such a thing in Python (... well, not yet, as Python gets more and more akin 
to FP).
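
For what it's worth, the usual way to simulate lazy arguments in Python is to
pass zero-argument callables (thunks) and call them only when needed; a small
sketch, with a made-up ifte helper:

def ifte(cond, then_thunk, else_thunk):
    ## Both branches arrive unevaluated; only the chosen thunk ever runs.
    if cond:
        return then_thunk()
    else:
        return else_thunk()

print ifte(True, lambda: "cheap", lambda: 1 / 0)  ## prints cheap; 1/0 never runs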

 The answer for the current implementation, BTW, is in between -- some
 buffering, but bounded consumption of memory -- but whether that tidbit
 of pragmatics is part of the file specs, heh, that's anything but clear
 (just as for other important tidbits of Python pragmatics, such as the
 facts that list.sort is wickedly fast, 'x in alist' isn't, 'x in adict'
 IS...).


 Alex

Thank you

Francis Girard
FRANCE

--
http://mail.python.org/mailman/listinfo/python-list


Re: need help on need help on generator...

2005-01-21 Thread Francis Girard
Hi,

I recently read David Mertz (IBM DeveloperWorks) about generators and got 
excited about using lazy constructs in my Python programming.

But besides the fact that generators are either produced with the new yield 
reserved word or by defining the __new__ method in a class definition, I 
don't know much about them.

In particular, I don't know which Python constructs generate a generator. 
I know this is now the case for reading lines in a file or with the new 
iterator package. But what else? Does Craig Ringer's answer mean that list 
comprehensions are lazy ? Where can I find a comprehensive list of all the 
lazy constructions built in Python ? (I think that to easily distinguish lazy 
from strict constructs is an absolute programmer need -- otherwise you always 
end up wondering when the code is actually executed, as in Haskell).

Thank you

Francis Girard
FRANCE

On Friday 21 January 2005 15:38, Craig Ringer wrote:
 On Fri, 2005-01-21 at 17:14 +0300, Denis S. Otkidach wrote:
  On 21 Jan 2005 05:58:03 -0800
 
  [EMAIL PROTECTED] (Joh) wrote:
   i'm trying to understand how i could build following consecutive sets
   from a root one using generator :
  
   l = [1,2,3,4]
  
   would like to produce :
  
   [1,2], [2,3], [3,4], [1,2,3], [2,3,4]
  
    >>> def consecutive_sets(l):
    ...     for i in xrange(2, len(l)):
    ...         for j in xrange(0, len(l)-i+1):
    ...             yield l[j:j+i]

 Since you have a much faster brain than I (though I ended up with
 exactly the same thing barring variable names) and beat me to posting
 the answer, I'll post the inevitable awful generator expression version
 instead:

 consecutive_sets = ( x[offset:offset+subset_size]
  for subset_size in xrange(2, len(x))
  for offset in xrange(0, len(x) + 1 - subset_size) )

 --
 Craig Ringer

--
http://mail.python.org/mailman/listinfo/python-list


Re: need help on generator...

2005-01-21 Thread Francis Girard
On Friday 21 January 2005 16:06, Craig Ringer wrote:
 On Fri, 2005-01-21 at 22:38 +0800, Craig Ringer wrote:
  consecutive_sets = ( x[offset:offset+subset_size]
   for subset_size in xrange(2, len(x))
   for offset in xrange(0, len(x) + 1 - subset_size) )

 Where 'x' is list to operate on, as I should've initially noted. Sorry
 for the reply-to-self.

 I did say awful for a reason ;-)

 --
 Craig Ringer

First, I think that you mean :

consecutive_sets = [ x[offset:offset+subset_size]
  for subset_size in xrange(2, len(x))
  for offset in xrange(0, len(x) + 1 - subset_size)]

(with square brackets).

Second, 

this is not lazy anymore (unlike Denis S. Otkidach's previous answer) because 
the __whole__ list gets constructed __before__ any other piece of code has a 
chance to execute. The type of consecutive_sets is simply a list, not a 
generator.
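
A small sketch of the difference in evaluation time (the noisy helper is made
up for the demonstration; Python 2.4):

def noisy(x):
    print "computing", x
    return x

eager = [ noisy(x) for x in range(3) ]  ## prints computing 0..2 right here
lazy  = ( noisy(x) for x in range(3) )  ## prints nothing yet
print lazy.next()                       ## only now prints computing 0, then 0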

I'm just trying to understand and obviously I'm missing the point.

Thank you

Francis Girard
FRANCE

--
http://mail.python.org/mailman/listinfo/python-list


Re: need help on need help on generator...

2005-01-21 Thread Francis Girard
Really, thank you Craig Ringer for your great answer.


 I'm afraid I can't help you with that. I tend to take the view that side
 effects in lazily executed code are a bad plan, and use lazy execution
 for things where there is no reason to care when the code is executed.


I completely agree with this. But this is much more true in theory than in 
practice. In practice you might end up with very big memory usage with lazy 
constructs, as the system has to store intermediate results (the execution 
context in the case of Python). These kinds of phenomena happen all the time 
in a language like Haskell -- at least for a beginner like me -- if you don't 
pay attention to them; and this makes the language a lot more difficult to 
master. Thus you have to keep an eye on performance even though, in FP, you 
should just have to declare your intentions and let the system manage the 
execution path.
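
One concrete instance of this, sketched with Python 2.4's itertools.tee: tee
must buffer every element that the slowest copy has not yet consumed, so a
lagging or forgotten copy quietly turns a lazy pipeline into a big in-memory
buffer:

from itertools import tee

fast, slow = tee(xrange(1000000))
for x in fast:  ## drive one copy to the end...
    pass
## ...tee is now holding all million items internally,
## waiting for slow to consume them.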


 http://gnosis.cx/publish/programming/metaclass_1.html
 http://gnosis.cx/publish/programming/metaclass_2.html

Thank you, I'll read that.

Francis Girard
FRANCE

On Friday 21 January 2005 16:42, Craig Ringer wrote:
 On Fri, 2005-01-21 at 16:05 +0100, Francis Girard wrote:
  I recently read David Mertz (IBM DeveloperWorks) about generators and
  got excited about using lazy constructs in my Python programming.

 Speaking of totally great articles, and indirectly to lazyness (though
 not lazyily evaluated constructs), I was really impressed by this
 fantastic article on metaclasses:

 http://gnosis.cx/publish/programming/metaclass_1.html
 http://gnosis.cx/publish/programming/metaclass_2.html

 which shows that they're really just not that hard. That saved me an
 IMMENSE amount of utterly tedious coding just recently.

  But besides the fact that generators are either produced with the new
  yield reserved word or by defining the __new__ method in a class
  definition, I don't know much about them.

 They can also be created with a generator expression under Python 2.4. A
 generator expression works much like a list comprehension, but returns a
 generator instead of a list, and is evaluated lazily. (It also doesn't
 pollute the outside namespace with its working variables).

 >>> print [ x for x in range(1,10)]
 [1, 2, 3, 4, 5, 6, 7, 8, 9]
 >>> print ( x for x in xrange(1,10) )
 <generator object at 0x401e40ac>
 >>> print list(( x for x in xrange(1,10) ))
 [1, 2, 3, 4, 5, 6, 7, 8, 9]

 Note the use of xrange above for efficiency in the generator expressions.
 These examples are trivial and pointless, but hopefully get the point
 across.

  In particular, I don't know which Python constructs generate a
  generator.

 As far as I know, functions that use yield, and generator expressions. I
 was unaware of the ability to create them using a class with a __new__
 method, and need to check that out - I can imagine situations in which
 it might be rather handy.

 I'm not sure how many Python built-in functions and library modules
 return generators for things.

  I know this is now the case for reading lines in a file or with the
  new iterator package. But what else? Does Craig Ringer's answer mean
  that list comprehensions are lazy ?

 Nope, but generator expressions are, and they're pretty similar.

  Where can I find a comprehensive list of all the lazy constructions
  built in Python ? (I think that to easily distinguish lazy from strict
  constructs is an absolute programmer need -- otherwise you always end
  up wondering when the code is actually executed, as in
  Haskell).

 I'm afraid I can't help you with that. I tend to take the view that side
 effects in lazily executed code are a bad plan, and use lazy execution
 for things where there is no reason to care when the code is executed.

 --
 Craig Ringer

--
http://mail.python.org/mailman/listinfo/python-list


Re: need help on generator...

2005-01-21 Thread Francis Girard
Thank you,

I immediately downloaded version 2.4, switching from version 2.3.

Francis Girard
FRANCE

On Friday 21 January 2005 17:34, Craig Ringer wrote:
 On Fri, 2005-01-21 at 16:54 +0100, Francis Girard wrote:
  First, I think that you mean :
 
  consecutive_sets = [ x[offset:offset+subset_size]
for subset_size in xrange(2, len(x))
for offset in xrange(0, len(x) + 1 - subset_size)]
 
  (with square brackets).
 
  I'm just trying to understand and obviously I'm missing the point.

 Yep. This:

 ( x for x in xrange(10) )

 will return a generator that calculates things as it goes, while this:

 [ x for x in xrange(10) ]

 will return a list.

 Check out:
   http://www.python.org/peps/pep-0289.html
   http://docs.python.org/whatsnew/node4.html
   http://www.python.org/dev/doc/newstyle/ref/genexpr.html
 for details.

 --
 Craig Ringer

--
http://mail.python.org/mailman/listinfo/python-list


Re: Overloading ctor doesn't work?

2005-01-20 Thread Francis Girard
Hi,

It looks like assertEquals uses the != operator, which has not been defined 
to compare instances of your time class with instances of the datetime class.
In such a case, the operator ends up comparing the references to the instances,
i.e. the ids of the objects, i.e. their physical memory addresses, which of 
course can't be the same. And that is why your test fails.

Consider defining the __cmp__ method (see section 2.3.3, "Comparisons", of the 
library reference). See also the operator module.
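
For instance, a minimal sketch (an illustrative Money class, not Martin's
code) of the kind of value-based comparison that makes assertEquals compare
contents rather than addresses:

class Money:
    def __init__(self, cents):
        self.cents = cents
    def __cmp__(self, other):
        # Compare by value, so == and != look at the contents,
        # not at the objects' identities.
        return cmp(self.cents, other.cents)

assert Money(100) == Money(100)  # holds once __cmp__ is defined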

Francis Girard
LANNILIS
Breizh
FRANCE

On Thursday 20 January 2005 19:23, Martin Häcker wrote:
 Hi there,

 I just tried to run this code and failed miserably - though I dunno why.
 Could any of you please enlighten me why this doesn't work?

 Thanks a bunch.

 --- snip ---
 import unittest
 from datetime import datetime

 class time (datetime):
     def __init__(self, hours=0, minutes=0, seconds=0, microseconds=0):
         print "blah"
         datetime.__init__(self, 1, 1, 1, hours, \
                           minutes, seconds, microseconds)


 class Test (unittest.TestCase):
     def testSmoke(self):
         # print time() # bombs, and complains that
         # the time ctor needs at least 3 arguments
         self.assertEquals(datetime(1,1,1,1,2,3,4), time(1,2,3,4))


 if __name__ == '__main__':
     unittest.main()
 --- snap ---

 The reason I want to do this is that I want to work with times but I
 want to do arithmetic with them. Therefore I cannot use the provided
 time directly.

 Now I thought, just override the ctor of datetime so that year, month and
   day are static and everything should work as far as I need it.

 That is, it could work - though I seem to be unable to override the ctor. :(

 Why is that?

 cu Martin

 --
 Reach me at spamfaenger (at) gmx (dot) net

--
http://mail.python.org/mailman/listinfo/python-list


Re: Overloading ctor doesn't work?

2005-01-20 Thread Francis Girard
Wow !
Now, this is serious. I tried all sorts of things but can't solve the problem.
I'm mystified too, so forget my last reply.
I'm curious to see the answers.
Francis Girard
On Thursday 20 January 2005 19:59, Kent Johnson wrote:
  Martin Häcker wrote:
  Hi there,
 
  I just tried to run this code and failed miserably - though I dunno
  why. Could any of you please enlighten me why this doesn't work?

 Here is a simpler test case. I'm mystified too:

 from datetime import datetime

 class time (datetime):
     def __init__(self, hours=0, minutes=0, seconds=0, microseconds=0):
         datetime.__init__(self, 2001, 10, 31, hours, minutes, seconds,
                           microseconds)

 print time(1,2,3,4) # => 0001-02-03 04:00:00
 print time()        # => TypeError: function takes at least 3 arguments (0 given)


 What happens to the default arguments to time.__init__? What happens to the
 2001, 10, 31 arguments to datetime.__init__?

 I would expect the output to be
 2001-10-31 01:02:03.04
 2001-10-31 00:00:00.00

 Kent
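
(A postscript, since the thread breaks off here: datetime is an immutable
type, so its constructor arguments are consumed by __new__, and an overridden
__init__ never gets the chance to supply the fixed year, month and day.
Overriding __new__ instead behaves as expected -- a sketch, assuming Python
2.3/2.4:)

from datetime import datetime

class time(datetime):
    def __new__(cls, hours=0, minutes=0, seconds=0, microseconds=0):
        # Immutable types are built in __new__; __init__ cannot change
        # them afterwards, which is why overriding __init__ had no effect.
        return datetime.__new__(cls, 1, 1, 1, hours, minutes,
                                seconds, microseconds)

print time(1, 2, 3, 4)  # 0001-01-01 01:02:03.000004
print time()            # 0001-01-01 00:00:00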

--
http://mail.python.org/mailman/listinfo/python-list