readline, rlcompleter

2005-01-10 Thread michele . simionato
This is a case where the documentation is lacking. The standard library
documentation
(http://www.python.org/dev/doc/devel/lib/module-rlcompleter.html) gives
this example:
try:
    import readline
except ImportError:
    print "Module readline not available."
else:
    import rlcompleter
    readline.parse_and_bind("tab: complete")

but I can't find a list of recognized key bindings. For instance, I would
like to bind shift-tab to rlcompleter; is that possible? Can I use function
keys? I made various attempts, but I did not succeed :-(
Is there any readline guru here with some good pointers?
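One thing worth trying (a hedged sketch, not from the documentation): parse_and_bind() accepts the same syntax as an inputrc file, so a key can be named by the escape sequence the terminal sends for it. The assumption here is that your terminal emits "\e[Z" for shift-tab, as most xterm-like terminals do; function keys likewise send multi-byte sequences you can bind the same way.

```python
# Hedged sketch: bind Shift-Tab via its terminal escape sequence.
# Assumption: the terminal sends "\e[Z" for Shift-Tab (true for most
# xterm-likes); readline may be unavailable on some platforms.
try:
    import readline
    import rlcompleter
    readline.parse_and_bind(r'"\e[Z": complete')  # Shift-Tab -> complete
    readline.parse_and_bind("tab: complete")      # keep plain Tab working too
    HAVE_READLINE = True
except ImportError:
    HAVE_READLINE = False
```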
Michele Simionato

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Port blocking

2005-01-10 Thread Ville Vainio
> "Steve" == Steve Holden <[EMAIL PROTECTED]> writes:

>> >>> Usually you wouldn't run a public corba or pyro service over
>> >>> the internet.  You'd use something like XMLRPC over HTTP port
>> >>> 80 partly for the precise purpose of not getting blocked by
>> >>> firewalls.

Mark> I'm not sure if we're talking at cross-purposes here, but
Mark> the application isn't intended for public consumption, but
Mark> for fee-paying clients.

>> Still, if the consumption happens over the internet there is almost
>> 100% chance of the communication being prevented by firewalls.
>> This is exactly what "web services" are for.

Steve> I teach the odd security class, and what you say is far
Steve> from true. As long as the service is located behind a
Steve> firewall which opens up the correct holes for it, it's most
Steve> unlikely that corporate firewalls would disallow client
Steve> connections to such a remote port.

Yes, but "clients" might also act as servers, e.g. when they register
a callback object and expect the "server" to invoke something later
on. This is possible (and typical) with CORBA at least. ORBs can use
the same client-initiated connection for all the traffic, but this is
probably somewhere in the gray area.

-- 
Ville Vainio   http://tinyurl.com/2prnb


stretching a string over several lines (Re: PyChecker messages)

2005-01-10 Thread Steven Bethard
Frans Englich wrote:
Also, another newbie question: How does one make a string stretch over several 
lines in the source code? Is this the proper way?
(1)
print "asda asda asda asda asda asda " \
"asda asda asda asda asda asda " \
"asda asda asda asda asda asda"
A couple of other options here:
(2)
print """asda asda asda asda asda asda
asda asda asda asda asda asda
asda asda asda asda asda asda"""
(3)
print """\
asda asda asda asda asda asda
asda asda asda asda asda asda
asda asda asda asda asda asda"""
(4)
print ("asda asda asda asda asda asda "
   "asda asda asda asda asda asda "
   "asda asda asda asda asda asda")
Note that backslash continuations (1) are on Guido's list of "Python 
Regrets", so it's likely they'll disappear with Python 3.0 (Of course 
this is 3-5 years off.)

I typically use either (3) or (4), but of course the final choice is up 
to you.
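One difference between the options worth noting: (2) and (3) embed real newlines in the string, while (1) and (4) build a single-line string from adjacent literals. A small check (modern print-free syntax so it runs on any version):

```python
# Options (1)/(4): adjacent literals are concatenated; no newlines result.
one = "asda " \
      "bsda"
four = ("asda "
        "bsda")
# Options (2)/(3): triple-quoted strings keep their line breaks.
two = """asda
bsda"""
assert one == four == "asda bsda"
assert two == "asda\nbsda"
```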

Steve


Re: Dr. Dobb's Python-URL! - weekly Python news and links (Jan 9)

2005-01-10 Thread Bengt Richter
On Tue, 11 Jan 2005 07:27:42 +1100, Tim Churches <[EMAIL PROTECTED]> wrote:

>Josiah Carlson wrote:
>> QOTW:  Jim Fulton: "[What's] duck typing?"
>> Andrew Koenig: "That's the Australian pronunciation of 'duct taping'."
>
>I must protest.
>1) No (true-blue) Australian has ever uttered the words 'duct taping', 
>because Aussies (and Pommies) know that the universe is held together 
>with gaffer tape, not duct tape. See http://www.exposure.co.uk/eejit/gaffer/
>2) If an Australian were ever induced to utter the words 'duct taping', 
>the typical Strine (see 
>http://www.geocities.com/jendi2_2000/strine1.html ) pronunciation would 
>be more like 'duh toypn' - the underlying principle being one of 
>elimination of all unnecessary syllables, vowels and consonants, thus 
>eliminating the need to move the lips (which reduces effort and stops 
>flies getting in).
>
>Tim C
>Sydney, Australia
LOL. Thanks, needed that ;-)

Regards,
Bengt Richter


Re: Python3: on removing map, reduce, filter

2005-01-10 Thread Steven Bethard
David M. Cooke wrote:
Steven Bethard <[EMAIL PROTECTED]> writes:
Some timings to verify this:
$ python -m timeit -s "def square(x): return x*x" "map(square, range(1000))"
1000 loops, best of 3: 693 usec per loop
$ python -m timeit -s "[x*x for x in range(1000)]"
1000 loops, best of 3: 0.0505 usec per loop

Maybe you should compare apples with apples, instead of oranges :-)
You're only running the list comprehension in the setup code...
$ python2.4 -m timeit -s "def square(x): return x*x" "map(square, range(1000))"
1000 loops, best of 3: 464 usec per loop
$ python2.4 -m timeit "[x*x for x in range(1000)]"
1000 loops, best of 3: 216 usec per loop
So factor of 2, instead of 13700 ...
Heh heh.  Yeah, that'd be better.  Sorry about that!
Steve


Re: reference or pointer to some object?

2005-01-10 Thread Paul Rubin
Torsten Mohr <[EMAIL PROTECTED]> writes:
> i'd like to pass a reference or a pointer to an object
> to a function.  The function should then change the
> object and the changes should be visible in the calling
> function.

Normally you would pass a class instance or boxed object, and let the
function change the instance or object:

def bump(b):
   b[0] += 123  # change

x = [5]
bump(x)
print x   # prints [128]
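The same idea with a small class instead of a list; Box is a hypothetical helper written for illustration, not a standard type:

```python
class Box(object):
    """Hypothetical one-slot container used to share mutable state."""
    def __init__(self, value):
        self.value = value

def bump(b):
    b.value += 123  # mutate the shared object; the caller sees the change

x = Box(5)
bump(x)
# x.value is now 128
```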


reference or pointer to some object?

2005-01-10 Thread Torsten Mohr
Hi,

i'd like to pass a reference or a pointer to an object
to a function.  The function should then change the
object and the changes should be visible in the calling
function.

In perl this would be something like:

sub func {
  $ref = shift;

  $$ref += 123; # change
}

$a = 1;
func(\$a);

is something like this possible in python?

The keyword "global" does NOT fit this purpose to
my understanding as it only makes the variables of
the UPPERMOST level visible, not the ones of ONE
calling level above.

Is this somehow possible with weakref?

I don't want to pass the parameter to a function and
then return a changed value.

Is there some other mechanism in python available to
achieve a behaviour like this?


Thanks for any hints,
Torsten.



Re: shutil.move has a mind of its own

2005-01-10 Thread drs
"Delaney, Timothy C (Timothy)" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
Daniel Bickett wrote:

> > shutil.move( "C:\omg.txt" , "C:\folder\subdir" )
  ^  ^^ ^
> The problem is that backslash is the escape character. In particular,
> '\f' is a form feed.

> You have a couple of options:

You can also prefix the string with an r to make it a raw string, if the
doubled backslashes or forward slashes look odd:

shutil.move( r"C:\omg.txt" , r"C:\folder\subdir" )




Re: exceptions and items in a list

2005-01-10 Thread vincent wehren
Steve Holden wrote:
vincent wehren wrote:
rbt wrote:
If I have a Python list that I'm iterating over and one of the 
objects in the list raises an exception and I have code like this:

try:
do something to object in list
except Exception:
pass
Does the code just skip the bad object and continue with the other 
objects in the list, or does it stop?

Thanks

Fire up a shell and try:
>>> seq = ["1", "2", "a", "4", "5", 6.0]
>>> for elem in seq:
...     try:
...         print int(elem)
...     except ValueError:
...         pass
...
and see what happens...
--
Vincent Wehren

I suspect the more recent versions of Python allow a much more elegant 
solution. I can't remember precisely when we were allowed to use 
continue in an except suite, but I know we couldn't in Python 2.1.

Nowadays you can write:
Python 2.4 (#1, Dec  4 2004, 20:10:33)
[GCC 3.3.3 (cygwin special)] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
>>> for i in [1, 2, 3]:
...     try:
...         print i
...         if i == 2: raise AttributeError, "Bugger!"
...     except AttributeError:
...         print "Caught exception"
...         continue
...
1
2
Caught exception
3
>>>
To terminate the loop on the exception you would use "break" instead of 
"continue".
What do you mean by a more elegant solution to the problem? I thought 
the question was whether a well-handled exception would allow the iteration 
to continue with the next object or whether it would stop. Why would you 
want to use the continue statement when, in the above case, it is 
obviously unnecessary:

$ python
Python 2.4 (#1, Dec  4 2004, 20:10:33)
[GCC 3.3.3 (cygwin special)] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
>>> for i in [1,2,3]:
...     try:
...         print i
...         if i == 2: raise AttributeError, "Darn!"
...     except AttributeError:
...         print "Caught Exception"
...
1
2
Caught Exception
3
>>>
Or do you mean that using "continue" is more elegant than using "pass" 
if there are no other statements in the except block?
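For the record, the pattern under discussion as a plain function (a sketch; safe_ints is a made-up name): a handled exception skips the bad element and the loop simply continues.

```python
def safe_ints(seq):
    """Convert what we can; a handled ValueError does not stop the loop."""
    out = []
    for elem in seq:
        try:
            out.append(int(elem))
        except ValueError:
            pass  # skip the bad element; iteration continues with the next one
    return out

# safe_ints(["1", "2", "a", "4"]) -> [1, 2, 4]
```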

Regards,
--
Vincent Wehren
regards
 Steve


PyChecker messages

2005-01-10 Thread Frans Englich

Hello,

I take PyChecker partly as a recommender of good coding practice, but I 
cannot make sense of some of the messages. For example:

runner.py:878: Function (main) has too many lines (201)

What does this mean? Can't functions be large? Or is it simply advice that 
functions should be small and simple?


runner.py:200: Function (detectMimeType) has too many returns (11)

The function is simply a long "else-if" chain, branching out to different 
return statements. What's wrong with that? Is it simply "probably ugly code" 
advice?


A common message is this:

runner.py:41: Parameter (frame) not used

But I'm wondering if there are cases where this cannot be avoided. For example, 
this signal handler:

#---
def signalSilencer( signal, frame ):
"""
Dummy signal handler for avoiding ugly
tracebacks when the user presses CTRL+C.
"""
print "Received signal", str(signal) + ", exiting."
sys.exit(1)
#---

_must_ take two arguments; is there any way that I can make 'frame' go away?
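One common convention (a hedged sketch, not a PyChecker feature per se): rename the parameter with a leading underscore to signal it is intentionally unused; many checkers suppress the warning for such names. The modern-syntax version below writes to stderr instead of using the old print statement, and the registration line is shown but commented out.

```python
import sys

def signal_silencer(signum, _frame):  # leading underscore: intentionally unused
    """Exit quietly on a signal; _frame is required by the handler signature."""
    sys.stderr.write("Received signal %d, exiting.\n" % signum)
    sys.exit(1)

# How it would be registered (needs `import signal`):
# signal.signal(signal.SIGINT, signal_silencer)
```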


Also, another newbie question: How does one make a string stretch over several 
lines in the source code? Is this the proper way?

print "asda asda asda asda asda asda " \
"asda asda asda asda asda asda " \
"asda asda asda asda asda asda"


Thanks in advance,

Frans

PS. Any idea how to convert any common time type to W3C XML Schema datatype 
duration?


Re: unicode mystery

2005-01-10 Thread John Lenton
On Mon, Jan 10, 2005 at 07:48:44PM -0800, Sean McIlroy wrote:
> I recently found out that unicode("\347", "iso-8859-1") is the
> lowercase c-with-cedilla, so I set out to round up the unicode numbers
> of the extra characters you need for French, and I found them all just
> fine EXCEPT for the o-e ligature (oeuvre, etc). I examined the unicode
> characters from 0 to 900 without finding it; then I looked at
> www.unicode.org but the numbers I got there (0152 and 0153) didn't
> work. Can anybody put a help on me wrt this? (Do I need to give a
> different value for the second parameter, maybe?)

œ isn't part of ISO 8859-1, so you can't get it that way. You can do
one of

   u'\u0153'

or, if you must,

   unicode("\305\223", "utf-8")
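A quick modern-syntax check of the same facts (hedged only in that it uses today's spelling, where all strings are unicode): œ is U+0153, its UTF-8 bytes are the "\305\223" (0xC5 0x93) above, and it has no Latin-1 code point, which is why decoding with iso-8859-1 can never produce it.

```python
oe = "\u0153"                               # LATIN SMALL LIGATURE OE
assert ord(oe) == 0x0153
assert oe.encode("utf-8") == b"\xc5\x93"    # octal "\305\223", as above
try:
    oe.encode("iso-8859-1")                 # no Latin-1 code point exists
    in_latin1 = True
except UnicodeEncodeError:
    in_latin1 = False
assert not in_latin1
```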

-- 
John Lenton ([EMAIL PROTECTED]) -- Random fortune:
Lisp, Lisp, Lisp Machine,
Lisp Machine is Fun.
Lisp, Lisp, Lisp Machine,
Fun for everyone.



RE: shutil.move has a mind of its own

2005-01-10 Thread Delaney, Timothy C (Timothy)
Daniel Bickett wrote:

> shutil.move( "C:\omg.txt" , "C:\folder\subdir" )
  ^  ^^ ^
The problem is that backslash is the escape character. In particular,
'\f' is a form feed.

>>> '\o'
'\\o'
>>> '\f'
'\x0c'
>>> '\s'
'\\s'

Notice how for '\o' and '\s' it doubles-up the backslash - this is
because '\o' and '\s' are not valid escapes, and so it treats the
backslash as just a backslash. But '\f' is a valid escape.

You have a couple of options:

1. Use double-backslashes (to escape the backslash):
   shutil.move("C:\\omg.txt", "C:\\folder\\subdir")

2. Use forward slashes (they work on Windows for the most part):
   shutil.move("C:/omg.txt", "C:/folder/subdir")

3. Build your paths using os.path.join (untested):
   shutil.move(os.path.join("C:", "omg.txt"), os.path.join("C:",
"folder", "subdir"))
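A few sanity checks of the escape behaviour described above (a sketch; the exact os.path.join result depends on the platform's separator, so only the components are checked):

```python
import os.path

assert "\f" == "\x0c" and len("\f") == 1   # '\f' really is one form-feed char
assert len(r"C:\folder") == 9              # the raw string keeps its backslash
p = os.path.join("C:", "folder", "subdir") # separator chosen per-platform
assert "folder" in p and "subdir" in p
```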

Tim Delaney


Re: OT: MoinMoin and Mediawiki?

2005-01-10 Thread Paul Rubin
Brion Vibber <[EMAIL PROTECTED]> writes:
> MediaWiki should run with PHP configured in CGI handler mode, but
> these days mod_php has got its claws just about everywhere anyway. If
> you control your own server and don't have multi-user security
> worries, mod_php is simple enough to install and will probably perform
> better.

Thanks, yes, I could run a special apache instance with mod_php
installed.  I'm pretty good with apache.  I have no MySQL admin
experience but I suppose enough people are using MySQL that the
installation procedures and docs are pretty well developed and I can
follow the instructions.

What I'm wondering is just how big an adventure I'd be setting off on,
simply to get MediaWiki itself installed, configured, and running.
Any thoughts about that?

> For performance I also highly recommend using Turck MMCache or
> equivalent PHP bytecode cache extension. Unlike Python, saving
> compiled bytecode is not the default behavior of PHP, and for
> non-trivial scripts compilation eats up a lot of runtime.

Hmm, that's something I could deal with later, I guess.  Is that
similar to what Zend does?

> >  I'll say that I haven't actually looked at
> > the Mediawiki code, though I guess I should do so.
> 
> Cover your eyes...! it _is_ PHP after all. ;)

Heehee.  I like PHP just fine for small projects.  I just cringe at
the notion of something as complex as MediaWiki being written in PHP
and am constantly, involuntarily thinking about how I would do it in
Python.  I can't help myself.  Without looking at even a line of
WikiMedia's code, I already want to do a total rewrite ;-).

> I would generally recommend you just start with MediaWiki if you
> intend to use it. To migrate a non-tiny site later you'll need to work
> out a migration script to import your data in some way (some people
> have asked about this in the past, I don't know if anyone's ever
> completed one or made it public).

You're probably right, I'll download Wikimedia and see about
installing it.  I have tons of server disk space, though the CPU has
been getting a bit overloaded lately.  

> On the other hand if you _do_ write a MoinMoin-to-MediaWiki
> conversion script (or vice-versa!) we'd love to include it in the
> MediaWiki distribution.

I think a rough approximation would be pretty easy to do.  Trying to
get every detail right would be very difficult.  If I do something like
that, I'll likely go for the rough approximation.


Re: OT: MoinMoin and Mediawiki?

2005-01-10 Thread Eric Pederson
Paul Rubin wrote:

> What I'm getting at is I might like to install MoinMoin now and
> migrate to Mediawiki sometime later.  Anyone have any thoughts about
> whether that's a crazy plan?  


Disclaimer, I am neither using Moinmoin nor Mediawiki, and don't really have 
your answer.

From what I read, Mediawiki stores each page as "wikitext" in a MySQL 
database; wikitext "is a mixture of content, markup, and metadata."

It seems essentially what you'd need for migration is a mapping function and I 
do not know how complex the mapping between the systems would be.  I could 
imagine migrating from Moinmoin to Mediawiki via a script looping through the 
Moinmoin files in a directory, modifying a copy of each, and storing them in 
MySQL.

I suspect it's less painful to just start with the wiki you want to end up 
with, but if you're going to migrate between the two, won't Python come in 
handy!  ;-)



Eric Pederson
http://www.songzilla.blogspot.com
:::
domainNot="@something.com"
domainIs=domainNot.replace("s","z")
ePrefix="".join([chr(ord(x)+1) for x in "do"])
mailMeAt=ePrefix+domainIs
:::



Re: OT: MoinMoin and Mediawiki?

2005-01-10 Thread Brion Vibber
Paul Rubin wrote:
Mediawiki is written in PHP and
is far more complex than MoinMoin, plus it's database backed, meaning
you have to run an SQL server as well as the wiki software itself
(MoinMoin just uses the file system).  Plus, I'll guess that it really
needs mod_php, while MoinMoin runs tolerably as a set of cgi's, at
least when traffic is low.
MediaWiki should run with PHP configured in CGI handler mode, but these 
days mod_php has got its claws just about everywhere anyway. If you 
control your own server and don't have multi-user security worries, 
mod_php is simple enough to install and will probably perform better.

For performance I also highly recommend using Turck MMCache or 
equivalent PHP bytecode cache extension. Unlike Python, saving compiled 
bytecode is not the default behavior of PHP, and for non-trivial scripts 
compilation eats up a lot of runtime.

 I'll say that I haven't actually looked at
the Mediawiki code, though I guess I should do so.
Cover your eyes...! it _is_ PHP after all. ;)
What I'm getting at is I might like to install MoinMoin now and
migrate to Mediawiki sometime later.  Anyone have any thoughts about
whether that's a crazy plan?  Should I just bite the bullet and run
Mediawiki from the beginning?  Is anyone here actually running
Mediawiki who can say just how big a hassle it is?
I would generally recommend you just start with MediaWiki if you intend 
to use it. To migrate a non-tiny site later you'll need to work out a 
migration script to import your data in some way (some people have asked 
about this in the past, I don't know if anyone's ever completed one or 
made it public).

On the other hand if you _do_ write a MoinMoin-to-MediaWiki conversion 
script (or vice-versa!) we'd love to include it in the MediaWiki 
distribution.

-- brion vibber (brion @ pobox.com)


Re: fetching method names from a class, and the parameter list from a method

2005-01-10 Thread John Lenton
On Mon, Jan 10, 2005 at 08:29:40PM +0100, Philippe C. Martin wrote:
> Is this possible ?
> 
> I am trying to have auto-completion working in a shell I wrote but I
> currently have the method lists done by hand (ie; if I add/subtract a
> method from that class, then my auto-completion is out of date).
> 
> Same issue with method parameters.
> 
> I have parsed through many of the attributes (ex: I use method.__doc__)
> but have not yet found a way to achieve the above goal.
> 
> Is there a way? something like the following would be great:
> 1) list = Class.__methods__
> 2) dict (because of default values: "param = None") =
> Class.__method__[0].__params__

>>> import inspect
>>> help(inspect)
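A sketch of what inspect offers for exactly this (Greeter is a hypothetical example class; inspect.signature is today's spelling, while 2005-era code would have used inspect.getargspec):

```python
import inspect

class Greeter(object):                  # hypothetical example class
    def greet(self, name, punct="!"):
        return "Hello, %s%s" % (name, punct)

# 1) the method list, without maintaining it by hand
methods = [name for name, _ in inspect.getmembers(Greeter, inspect.isfunction)]

# 2) parameter names and default values for one method
sig = inspect.signature(Greeter.greet)
defaults = {p.name: p.default for p in sig.parameters.values()}
# parameters with no default map to inspect.Parameter.empty
```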

HTH

-- 
John Lenton ([EMAIL PROTECTED]) -- Random fortune:
In Greene, New York, it is illegal to eat peanuts and walk backwards on
the sidewalks when a concert is on.


signature.asc
Description: Digital signature
-- 
http://mail.python.org/mailman/listinfo/python-list

shutil.move has a mind of its own

2005-01-10 Thread Daniel Bickett
Hello,

I'm writing an application in my pastime that moves files around to
achieve various ends -- the specifics aren't particularly important.
The shutil module was chosen as the means simply because that is what
google and chm searches returned most often.

My problem has to do with shutil.move actually putting the files where
I ask it to. Citing code wouldn't serve any purpose, because I am
using the function in the most straight forward manner, ex:

shutil.move( "C:\omg.txt" , "C:\folder\subdir" )

In my script, rather than a file being moved to the desired location,
it is, rather, moved to the current working directory (in this case,
my desktop -- without any exceptions, mind you). As it happens, the
desired locations are system folders (running windows xp, the folders
are as follows: C:\WINDOWS, C:\WINDOWS\SYSTEM, C:\WINDOWS\SYSTEM32).
To see if this factor was causing the problem, I tried it using the
interpreter, and found it to be flawless.

My question boils down to this: What factors could possibly cause
shutil.move to fail to move a file to the desired location, choosing
instead to place it in the cwd (without raising any exceptions)?

Thank you for your time,

Daniel Bickett

P.S. I know I said I didn't need to post code, but I will anyway. You
never know :)

http://rafb.net/paste/results/FcwlEw86.html


Re: Python3: on removing map, reduce, filter

2005-01-10 Thread David M. Cooke
Steven Bethard <[EMAIL PROTECTED]> writes:
> Some timings to verify this:
>
> $ python -m timeit -s "def square(x): return x*x" "map(square, range(1000))"
> 1000 loops, best of 3: 693 usec per loop
>
> $ python -m timeit -s "[x*x for x in range(1000)]"
> 1000 loops, best of 3: 0.0505 usec per loop

Maybe you should compare apples with apples, instead of oranges :-)
You're only running the list comprehension in the setup code...

$ python2.4 -m timeit -s "def square(x): return x*x" "map(square, range(1000))"
1000 loops, best of 3: 464 usec per loop
$ python2.4 -m timeit "[x*x for x in range(1000)]"
1000 loops, best of 3: 216 usec per loop

So factor of 2, instead of 13700 ...
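The same apples-to-apples comparison can be run from inside Python with the timeit module (a sketch; absolute numbers vary by machine, and list(map(...)) is the modern spelling since map now returns an iterator):

```python
import timeit

def square(x):
    return x * x

# Both calls now time the statement itself, not the setup.
t_map = timeit.timeit(lambda: list(map(square, range(1000))), number=200)
t_comp = timeit.timeit(lambda: [x * x for x in range(1000)], number=200)
```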

-- 
|>|\/|<
/--\
|David M. Cooke
|cookedm(at)physics(dot)mcmaster(dot)ca


Re: why not datetime.strptime() ?

2005-01-10 Thread David M. Cooke
Joshua Spoerri <[EMAIL PROTECTED]> writes:

> Skip Montanaro  pobox.com> writes:
>> josh> Shouldn't datetime have strptime?
>> If someone wants to get their feet wet with extension module
>> programming
>> this might be a good place to start.  Mostly, I think nobody who has
>> needed/wanted it so far has the round tuits available to spend on the
>> task.
>
> OK, it was pretty straightforward. Thanks for the direction.
>
> To whom should I send the patch (attached)?

Submit it to the patch tracker on sourceforge.

But first, some constructive criticism:

> --- Modules/datetimemodule.c.orig 2003-10-20 10:34:46.0 -0400
> +++ Modules/datetimemodule.c  2005-01-10 20:58:38.884823296 -0500
> @@ -3774,6 +3774,32 @@
>   return result;
>  }
>  
> +/* Return new datetime from time.strptime(). */
> +static PyObject *
> +datetime_strptime(PyObject *cls, PyObject *args)
> +{
> + PyObject *result = NULL, *obj, *module;
> + const char *string, *format;
> +
> + if (!PyArg_ParseTuple(args, "ss:strptime", &string, &format))
> + return NULL;
> + if ((module = PyImport_ImportModule("time")) == NULL)
> + return NULL;
> + obj = PyObject_CallMethod(module, "strptime", "ss", string, format);
> + Py_DECREF(module);

You don't check for errors: an exception being thrown by
PyObject_CallMethod will return obj == NULL.

If there's a module in sys.path called time that overrides the stdlib
time, things will fail, and you should be able to catch that.

> + result = PyObject_CallFunction(cls, "iii",
> + PyInt_AsLong(PySequence_GetItem(obj, 0)),
> + PyInt_AsLong(PySequence_GetItem(obj, 1)),
> + PyInt_AsLong(PySequence_GetItem(obj, 2)),
> + PyInt_AsLong(PySequence_GetItem(obj, 3)),
> + PyInt_AsLong(PySequence_GetItem(obj, 4)),
> + PyInt_AsLong(PySequence_GetItem(obj, 5)),
> + PyInt_AsLong(PySequence_GetItem(obj, 6)));

Are you positive those PySequence_GetItem calls will succeed? That
they will return Python integers?

> + Py_DECREF(obj);
> + return result;
> +}
> +
>  /* Return new datetime from date/datetime and time arguments. */
>  static PyObject *
>  datetime_combine(PyObject *cls, PyObject *args, PyObject *kw)
> @@ -4385,6 +4411,11 @@
>PyDoc_STR("timestamp -> UTC datetime from a POSIX timestamp "
>  "(like time.time()).")},
>  
> + {"strptime", (PyCFunction)datetime_strptime,
> +  METH_VARARGS | METH_CLASS,
> +  PyDoc_STR("strptime -> new datetime parsed from a string"
> +"(like time.strptime()).")},
> +
>   {"combine", (PyCFunction)datetime_combine,
>METH_VARARGS | METH_KEYWORDS | METH_CLASS,
>PyDoc_STR("date, time -> datetime with same date and time fields")},

It probably would help to add some documentation to add to the
datetime module documentation.

-- 
|>|\/|<
/--\
|David M. Cooke
|cookedm(at)physics(dot)mcmaster(dot)ca


unicode mystery

2005-01-10 Thread Sean McIlroy
I recently found out that unicode("\347", "iso-8859-1") is the
lowercase c-with-cedilla, so I set out to round up the unicode numbers
of the extra characters you need for French, and I found them all just
fine EXCEPT for the o-e ligature (oeuvre, etc). I examined the unicode
characters from 0 to 900 without finding it; then I looked at
www.unicode.org but the numbers I got there (0152 and 0153) didn't
work. Can anybody put a help on me wrt this? (Do I need to give a
different value for the second parameter, maybe?)

Peace,
STM

PS: I'm considering looking into pyscript as a means of making
diagrams for inclusion in LaTeX documents. If anyone can share an
opinion about pyscript, I'm interested to hear it.

Peace


Re: Writing huge Sets() to disk

2005-01-10 Thread Mike C. Fletcher
Martin MOKREJŠ wrote:
Tim Peters wrote:
...
I was really hoping I'll get an answer how to alter the indexes for 
dictionaries
in python.

Sorry, I don't have a guess for what that might mean.

I'm not an expert; MySQL, for example, gives you the ability to index, say,
the first 10 characters of a text column, which is typically a varchar.
Just out of curiosity, I'd like to know how to do that in Python.
When importing data from a flatfile into a MySQL table, there's an
option to delay indexing to the very last moment, when all keys are
loaded (it doesn't make sense to re-create the index after each new
row is added to the table). I believe it's exactly the same waste of cpu/io
in this case: when I create a dictionary and fill it with data,
I want to create the index afterward, not after every key/value pair
is recorded.
Okay, you seem to be missing this key idea:
   A hash-table (dict) is approximately the same level of abstraction
   as a btree index.
Your MySQL "index" is likely implemented as a btree.  A hash-table could 
just as readily be used to implement the index.  When you insert into 
either of these structures (btree or hash), you are not creating an 
"index" separate from the structure, the structure *is* an "index" of 
the type you are thinking about.  These structures are both efficient 
representations that map from a data-value to some other data-value.  
Hashes with a good hash function (such as Python's dicts) are basically 
O(1) (worst case O(n), as Tim notes), while Btrees (such as common 
database indices) are O(log(n)) (or something of that type, basically it 
grows much more slowly than n).
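A rough illustration of the point (hedged; timings are machine-dependent and noisy): looking up a key in a dict costs about the same whether the dict holds a thousand entries or a hundred thousand, because the hash table itself is the index.

```python
import timeit

small = {i: i for i in range(1000)}
big = {i: i for i in range(100000)}

# Per-lookup cost should be in the same ballpark for both sizes.
t_small = timeit.timeit(lambda: small[500], number=20000)
t_big = timeit.timeit(lambda: big[50000], number=20000)
```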

Once more, I expect to have between E4 or E5 to E8??? words
stored in 20 dictionaries (remember words of sizes in range 1-20?
Each of those 20 dictionaries should, I believe, be indexed just once.
The indexing method should know that all entries in a given file are of the
same size, i.e. 5 chars, 15 chars, 20 chars etc.

I think you're making this more complicated than it needs to be.

I hope the algorithm can save some logic. For example, it can turn off
locking support, index only part of the key, etc.
I'd tend to agree with Tim.  You're making this all far too complex in 
what appears to be the absence of any real need.  There's a maxim in 
computer programming that you avoid, wherever possible, what is called 
"premature optimisation".  You are here trying to optimise away a 
bottleneck that doesn't exist (indexing overhead, and locking support 
are basically nil for a dictionary).  It is almost a certainty that 
these are not *real* bottlenecks in your code (what with not existing), 
so your time would be better spent writing a "dumb" working version of 
the code and profiling it to see where the *real* bottlenecks are.

For example, looking up a key with its value only once in the whole
for loop tells me I don't need an index. Yes, I'll do this 4 times for
those 4 languages, but I still think it's faster to live without an index,
when I can sort records. I think I like the sorted text file approach, and
the dictionary approach without an index would be almost the same,
especially if I manage to tell the db layer not to move the cursor randomly
but just to walk down the pre-sorted data.
Again, you don't really have a cursor with a dictionary (and since it's 
randomly ordered, a cursor wouldn't mean anything).  A *btree* has an 
order, but not a dictionary.  You could construct a sorted list in 
memory that would approximate what I *think* you're thinking of as a 
dictionary-without-an-index.
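That sorted-sequence idea can be sketched with the bisect module (made-up sample data): keep the records sorted, jump to the start of a range in O(log n), then walk forward in order, much like a cursor over a pre-sorted flat file.

```python
import bisect

words = sorted(["abc", "abd", "xyz", "abe", "zzz"])
lo = bisect.bisect_left(words, "ab")   # first word >= "ab"
hi = bisect.bisect_left(words, "ac")   # first word >= "ac"
matching = words[lo:hi]                # every word starting with "ab", in order
# matching == ["abc", "abd", "abe"]
```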

Good luck,
Mike

 Mike C. Fletcher
 Designer, VR Plumber, Coder
 http://www.vrplumber.com
 http://blog.vrplumber.com


Re: Old Paranoia Game in Python

2005-01-10 Thread SPK
You can download the code from the web directly now at:
http://homepage.mac.com/spkane/python/
Thanks for all the code suggestions. This is what I was hoping for, but 
was honestly surprised to actually get it all, although I did get at 
least one emotional blast, so I don't feel like Usenet has completely 
changed from its old self...grin

I already made a change to the this_page() method based on a 
suggestion, and will happily pursue many of the other suggestions here. 
I'm only about 100 pages into Mark Lutz's "Learning Python" book, so 
I'm sure there is a lot more learning to do.

Thanks for all your input!
Sean

On 2005-01-10 15:44:33 -0800, "McBooCzech" <[EMAIL PROTECTED]> said:
Newbie in Python here.
I copied the whole script from the web and saved it as para1.py. I
downloaded the pyparsing module and saved it to
C:\\Python23\\Lib\\pyparsing122.
Then I ran the following script:
import sys
sys.path.append('C:\\Python23\\Lib\\pyparsing122')
from pyparsing import *
extraLineBreak = White(" ",exact=1) + LineEnd().suppress()
text = file("Para1.py").read()
newtext = extraLineBreak.transformString(text)
file("para2.py","w").write(newtext)
I tried to run the para2.py script, but got the following message:
File "para2.py", line 169
choose(4,"You give your correct clearance",5,"You lie and claim
^
SyntaxError: EOL while scanning single-quoted string
So my questions are:
Why didn't pyparsing correct the script?
What am I doing wrong?
Is it necessary to correct the script by hand anyway?
Petr



OT: MoinMoin and Mediawiki?

2005-01-10 Thread Paul Rubin
I need to set up a wiki for a small group.  I've played with MoinMoin
a little bit and it's reasonably straightforward to set up, but
limited in capabilities and uses BogusMarkupConventions.  I want to
use it anyway because I need something running right away and I don't
want to spend a whole lot of time messing with it.

In the larger world, though, there's currently One True wiki package,
namely Mediawiki (used by Wikipedia).  Mediawiki is written in PHP and
is far more complex than MoinMoin, plus it's database backed, meaning
you have to run an SQL server as well as the wiki software itself
(MoinMoin just uses the file system).  Plus, I'll guess that it really
needs mod_php, while MoinMoin runs tolerably as a set of cgi's, at
least when traffic is low.  I'll say that I haven't actually looked at
the Mediawiki code, though I guess I should do so.

What I'm getting at is I might like to install MoinMoin now and
migrate to Mediawiki sometime later.  Anyone have any thoughts about
whether that's a crazy plan?  Should I just bite the bullet and run
Mediawiki from the beginning?  Is anyone here actually running
Mediawiki who can say just how big a hassle it is?

There are actually two wikis I want to run, one of which I need
immediately, but which will be private, low traffic and stay that way.
The other one will be public and is planned to grow to medium size (a few
thousand active users), but I don't need it immediately.  I
definitely want the second one to eventually run Mediawiki.  I can
probably keep the first one on MoinMoin indefinitely, but that would
mean I'm eventually running two separate wiki packages, which gets
confusing.
-- 
http://mail.python.org/mailman/listinfo/python-list


fetching method names from a class, and the parameter list from a method

2005-01-10 Thread Philippe C. Martin
Is this possible?

I am trying to get auto-completion working in a shell I wrote, but I
currently have the method lists done by hand (i.e., if I add or remove a
method from a class, my auto-completion is out of date).

Same issue with method parameters.

I have parsed through many of the attributes (ex: I use method.__doc__)
but have not yet found a way to achieve the above goal.

Is there a way? something like the following would be great:
1) list = Class.__methods__
2) dict (because of default values: "param = None") =
Class.__method__[0].__params__
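For what it's worth, the standard inspect module already covers both requests. A minimal sketch (the Greeter class is made up for illustration; modern Python spells this inspect.signature, while in 2005 the equivalent was inspect.getargspec):

```python
import inspect

class Greeter:
    def greet(self, name, greeting="Hello"):
        return "%s, %s!" % (greeting, name)

# 1) the method list for a class
methods = [name for name, obj
           in inspect.getmembers(Greeter, inspect.isfunction)]

# 2) parameter names and default values for one method
sig = inspect.signature(Greeter.greet)
params = {p.name: p.default for p in sig.parameters.values()}

print(methods)             # ['greet']
print(params["greeting"])  # Hello
```

Parameters without a default show up with inspect.Parameter.empty as their value, so the dict distinguishes "param = None" from no default at all.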



Regards,


Philippe



-- 
***
Philippe C. Martin
SnakeCard LLC
www.snakecard.com
***

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Writing huge Sets() to disk

2005-01-10 Thread Tim Peters
[Istvan Albert] 
> #- I think that you need to first understand how dictionaries work. 
> #- The time needed to insert a key is independent of 
> #- the number of values in the dictionary. 

[Batista, Facundo]
> Are you sure? 
> 
> I think that is true while the hashes don't collide. If you have collisions,
> time starts to depend of element quantity. But I'm not sure
>
> Tim sure can enlighten us. 

I can only point the way to enlightenment -- you have to contemplate
your own navel (or Guido's, but he's kind of shy that way).

What Istvan Albert said is close enough in context.  The *expected*
(mean) number of collisions in a Python dict probe is less than 1,
regardless of dict size.  That assumes the collision distribution is
no worse than random.  It's possible that all dict keys hash to the
same table slot, and then each insertion is O(len(dict)).  It's
possible to contrive such cases even if the hash function is a good
one.  But it's not going to happen by accident (and, when it does
happen, open a bug report -- we'll improve the key type's hash
function then).
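To make the contrived worst case concrete, here is a hypothetical key type whose every instance hashes to the same table slot, so each insertion degenerates into a linear scan of the colliding bucket:

```python
import time

class BadHash:
    """All instances collide in the same hash bucket."""
    def __init__(self, v):
        self.v = v
    def __hash__(self):
        return 42              # constant hash: every insert collides
    def __eq__(self, other):
        return self.v == other.v

for n in (500, 1000, 2000):
    keys = [BadHash(i) for i in range(n)]
    t0 = time.perf_counter()
    d = dict.fromkeys(keys)
    # total build time grows roughly quadratically as n doubles
    print(n, "%.4f s" % (time.perf_counter() - t0))
```

With a well-distributed hash the same build is linear in n, which is why the expected number of collisions per probe stays below 1 in practice.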
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Securing a future for anonymous functions in Python

2005-01-10 Thread Tim Peters
...

[Anna]
>> BTW - I am *quite* happy with the proposal for "where:" syntax - I
>> think it handles the problems I have with lambda quite handily.

[Steve Holden]
> Whereas I find it to be an excrescence, proving (I suppose) that one
> man's meat is another person's poison, or something.

I've been waiting for someone to mention this, but looks like nobody
will, so I'm elected.  Modern functional languages generally have two
forms of local-name definition, following common mathematical
conventions.  "where" was discussed here.  The other is "let/in", and
seems a more natural fit to Python's spelling of block structure:

let:
suite
in:
suite

There's no restriction to expressions here.  I suppose that, like the
body of a class, the `let` suite is executed starting with a
conceptually empty local namespace, and whatever the suite binds to a
local name becomes a temporary binding in the `in` suite (like
whatever a class body binds to local names becomes the initial value
of the class __dict__).  So, e.g.,

i = i1 = 3
let:
i1 = i+1
from math import sqrt
in:
print i1, sqrt(i1)
print i1,
print sqrt(i1)

would print

4 2
3

and then blow up with a NameError.

Like it or not, it doesn't seem as strained as trying to pile more
gimmicks on Python expressions.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: why not datetime.strptime() ?

2005-01-10 Thread Joshua Spoerri
Skip Montanaro  pobox.com> writes:
> josh> Shouldn't datetime have strptime?
> If someone wants to get their feet wet with extension module
> programming
> this might be a good place to start.  Mostly, I think nobody who has
> needed/wanted it so far has the round tuits available to spend on the
> task.

OK, it was pretty straightforward. Thanks for the direction.

To whom should I send the patch (attached)?
--- Modules/datetimemodule.c.orig   2003-10-20 10:34:46.0 -0400
+++ Modules/datetimemodule.c2005-01-10 20:58:38.884823296 -0500
@@ -3774,6 +3774,32 @@
return result;
 }
 
+/* Return new datetime from time.strptime(). */
+static PyObject *
+datetime_strptime(PyObject *cls, PyObject *args)
+{
+   PyObject *result = NULL, *obj, *module;
+   const char *string, *format;
+
+   if (!PyArg_ParseTuple(args, "ss:strptime", &string, &format))
+   return NULL;
+   if ((module = PyImport_ImportModule("time")) == NULL)
+   return NULL;
+   obj = PyObject_CallMethod(module, "strptime", "ss", string, format);
+   Py_DECREF(module);
+
+   result = PyObject_CallFunction(cls, "iiiiiii",
+   PyInt_AsLong(PySequence_GetItem(obj, 0)),
+   PyInt_AsLong(PySequence_GetItem(obj, 1)),
+   PyInt_AsLong(PySequence_GetItem(obj, 2)),
+   PyInt_AsLong(PySequence_GetItem(obj, 3)),
+   PyInt_AsLong(PySequence_GetItem(obj, 4)),
+   PyInt_AsLong(PySequence_GetItem(obj, 5)),
+   PyInt_AsLong(PySequence_GetItem(obj, 6)));
+   Py_DECREF(obj);
+   return result;
+}
+
 /* Return new datetime from date/datetime and time arguments. */
 static PyObject *
 datetime_combine(PyObject *cls, PyObject *args, PyObject *kw)
@@ -4385,6 +4411,11 @@
 PyDoc_STR("timestamp -> UTC datetime from a POSIX timestamp "
   "(like time.time()).")},
 
+   {"strptime", (PyCFunction)datetime_strptime,
+METH_VARARGS | METH_CLASS,
+PyDoc_STR("strptime -> new datetime parsed from a string "
+  "(like time.strptime()).")},
+
{"combine", (PyCFunction)datetime_combine,
 METH_VARARGS | METH_KEYWORDS | METH_CLASS,
 PyDoc_STR("date, time -> datetime with same date and time fields")},
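Assuming the patch behaves like its time.strptime() backend, the new class method would be used roughly like this (this is how datetime.strptime eventually shipped, in Python 2.5):

```python
from datetime import datetime

# Parse a string with the same format directives time.strptime()
# accepts, but get a datetime instance back directly:
dt = datetime.strptime("2005-01-10 20:58:38", "%Y-%m-%d %H:%M:%S")
print(dt)                 # 2005-01-10 20:58:38
print(dt.year, dt.month)  # 2005 1
```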
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Port blocking

2005-01-10 Thread Aldo Cortesi
Thus spake Steve Holden ([EMAIL PROTECTED]):

> I teach the odd security class, and what you say is far
> from true. As long as the service is located behind a
> firewall which opens up the correct holes for it, it's
> most unlikely that corporate firewalls would disallow
> client connections to such a remote port.

Don't be too sure about that - most of the well-run
corporate networks I have been involved with block outbound
traffic by default. It is certainly sound security policy to
shunt outbound traffic through intermediary servers (e.g.
SMTP) and proxies (e.g. HTTP and FTP) so that it can be
logged, monitored, tracked, and controlled.

This is the strategy I recommend to my clients - the only
sensible one in a world of spyware, worms, insecure web
browsers and corporate espionage...




Cheers,


Aldo



--
Aldo Cortesi
[EMAIL PROTECTED]
http://www.nullcube.com
Off: (02) 9283 1131
Mob: 0419 492 863
-- 
http://mail.python.org/mailman/listinfo/python-list


Dabo 0.3 Released

2005-01-10 Thread Ed Leafe
We are pleased to announce Dabo 0.3, the third major release of our 
data application framework. The Dabo framework is a true 3-tier design, 
with data access and UI code separated from your business logic. And 
since it's Python, and uses wxPython for its UI, it is completely 
cross-platform, having been tested on Linux, Windows and OS X.

Download from http://dabodev.com/download
This marks the first major release of Dabo since we changed the 
licensing of Dabo. It is now released under the MIT License, which 
gives you much more freedom to do what you want with the code. See 
http://www.opensource.org/licenses/mit-license.html for the exact terms 
of the license. It is our hope that this will remove any reservations 
that people may have had about working with GPL software, and as a 
result grow the user base of the framework.

Over the past few months we have gotten some wonderful feedback from 
people who are interested in our work and who are interested in a solid 
application framework for developing database apps. The framework is 
now very solid and reliable, and Dabo applications are being used 
daily. But there is still more we plan on adding, so like all 
pre-release software, use with caution.

Anyone interested in contributing to Dabo, or who just want to find out 
what it is all about is encouraged to join our mailing lists:
	dabo-users: for those interested in learning how to work with Dabo to 
create applications, and for general help with Dabo.   
http://leafe.com/mailman/listinfo/dabo-users
	dabo-dev: for those interested in the ongoing development of Dabo. 
This list contains lots of lively discussion on design issues, as well 
as notices of all commits to the code repository.   
http://leafe.com/mailman/listinfo/dabo-dev

Here is a brief summary of what's new in Dabo 0.3:
Dabo Framework:
Support for PostgreSQL added in addition to existing support for MySQL 
and Firebird databases.

Improved unicode support in data cursors.
Support for fulltext searches added.
Child requeries and transaction support added to bizobj and cursor 
classes.

Several new controls have been added, including a window splitter and a 
grid-like list control.

The GridSizer has been added, making laying out of controls in a 
grid-like pattern a breeze. The API for all the sizers has been greatly 
simplified, too.

Menus have had wxPython-specific code removed, and are now much simpler 
to create and manage.

A 'Command Window' has been added to the base menu. This allows you to 
enter commands interactively in a running application, which makes 
debugging and testing so much simpler. Once you've developed an app 
with a Command Window at your disposal, you'll never want to develop 
without it!

And, of course, more bug fixes than we'd like to say!
Dabo IDE:
The appWizard has been moved from the Demo project to here, as well as 
the FieldSpecEditor. There is a new connection editor for visually 
setting up database connection information and storing it in an XML 
file.

The appWizard can now generate parent/child/grandchild/... 
relationships. The FieldSpecEditor (for visually controlling the 
appearance and behavior of wizard-generated apps) has been greatly 
improved, with interactive ordering and live previews of changes.

The editor can now toggle word wrap, jump to any line in the file, and 
comment/uncomment blocks of code.

Dabo Demo:
There's now a MineSweeper game - written in Dabo, of course!
All of the other demos have been updated to work with the changes in 
the rest of the framework.

 ___/
/
   __/
  /
 /
 Ed Leafe
 http://leafe.com/
 http://dabodev.com/
--
http://mail.python.org/mailman/listinfo/python-list


Re: Writing huge Sets() to disk

2005-01-10 Thread Martin MOKREJŠ
Tim Peters wrote:
[Tim Peters]
As I mentioned before, if you store keys in sorted text files,
you can do intersection and difference very efficiently just by using
the Unix `comm` utiltity.

[Martin MOKREJŠ]
Now I got your point. I understand comm(1) is written in C, but it still
has to scan file1 once and file2 n-times, where n is a number of lines
in file1, right? Without any index ... I'll consider it, actually will test,
thanks!

`comm` is much more efficient than that.  Note that the input files
have to be sorted.  Then, in a single pass over both files (not 2
passes, not 3, not n, just 1), it can compute any subset of these
three (do `man comm`):
1. The words common to both files.
2. The words unique to "the first" file.
3. The words unique to "the second" file.
I read the manpage actually before answering to the list.
It's essentially just doing a merge on two sorted lists, and how it
works should be obvious if you give it some thought.  It takes time
proportional to the sum of the lengths of the input files, and nothing
*can* be faster than that.
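The single-pass merge that comm performs is easy to sketch in Python (a hypothetical helper, assuming both input files are already sorted, one word per line):

```python
def comm(path_a, path_b):
    """Yield (word, origin) pairs, origin in {'a', 'b', 'both'},
    in one pass over two sorted word-per-line files."""
    with open(path_a) as fa, open(path_b) as fb:
        a, b = fa.readline(), fb.readline()
        while a and b:
            if a < b:
                yield a.rstrip("\n"), "a"
                a = fa.readline()
            elif b < a:
                yield b.rstrip("\n"), "b"
                b = fb.readline()
            else:                      # same word in both files
                yield a.rstrip("\n"), "both"
                a, b = fa.readline(), fb.readline()
        while a:                       # drain whichever file is left
            yield a.rstrip("\n"), "a"
            a = fa.readline()
        while b:
            yield b.rstrip("\n"), "b"
            b = fb.readline()
```

Each line of each file is read exactly once, so the total work is proportional to the sum of the file lengths, as described above.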
Well, I think I understand now because "Scott David Daniels" wrote
probably the very same logic in python code in this thread.
I was really hoping I'll get an answer how to alter the indexes for dictionaries
in python.

Sorry, I don't have a guess for what that might mean.
I'm not an expert; mysql for example gives you the ability to index, say,
the first 10 characters of a text column, which is typically varchar.
Just out of curiosity I'd like to know how to do it in python.
When importing data from a flatfile into mysql table, there's an
option to delay indexing to the very last moment, when all keys are
loaded (it doesn't make sense to re-create index after each new
row into table is added). I believe it's exactly same waste of cpu/io
in this case - when I create a dictionary and fill it with data,
I want to create index afterward, not after every key/value pair
is recorded.
You convinced me not to even try to construct the theoretical dictionary,
as it will take ages just to create. Even if I'd manage, I couldn't
save it (the theoretical and possibly not even the dict(theoret) - dict(real)
result).

Worse, it didn't sound like a good approach even if you could save it.

Still, before I give the whole project, once more:
I'll parse some text files, isolates separate words and add them to
either Set(), list, dict, flatfile line by line.
Depending on the above, I'll compare them and look for those unique
to some "language". I need to keep track of frequencies used
in every such language,

Frequencies of what?  "A language" here can contain some words
multiple times each?
Yes, to compute the frequency of each word used in, say, "language A",
I'll count the number of occurrences and then compute its frequency.
Frequency number should be recorded as a value in the dictionary,
where the keys are unique and represent the word itself (or it's hash
as recommended already).
Once more, the dictionary will contain every word only once, it will be really
a unique key.
so the dict approach would be the best.  The number stored as a value would
be a float ^H^H^H^H^H^H Decimal() type - very small number.

Integers are the natural type for counting, and require less memory
than floats or decimals.
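A minimal sketch of that counting approach: integer counts keyed by the unique word, with frequencies derived only at the very end (the file paths are hypothetical):

```python
def word_stats(paths):
    """Count every word across the given files; each word is a unique
    dict key, its value an integer count.  Frequencies come last."""
    counts = {}
    for path in paths:
        with open(path) as f:
            for line in f:
                for word in line.split():
                    counts[word] = counts.get(word, 0) + 1
    total = float(sum(counts.values()))
    freqs = dict((w, n / total) for w, n in counts.items())
    return counts, freqs
```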
I hoped I was clear ... when I said I'll compute frequency of those words.
The algorithm to compute it will be subject to change during development. ;)

Once more, I expect to have between E4 or E5 to E8??? words
stored in 20 dictionaries (remember words of sizes in range 1-20?
Every of those 20 dictionaries should be, I believe, indexed just once.
The indexing method should know all entries in a given file are of same
size, i.e. 5 chars, 15 chars, 20 chars etc.

I think you're making this more complicated than it needs to be.
I hope the algorithm can save some logic. For example, it could turn off
locking support, index only part of the key, etc.

I already did implement the logic to walk through those 20 different
dictionaries from language a and from language b and find out those
unique to a or common to both of them. Why I went to ask on this list
was to make sure I took right approach. Sets seemed to be better solution
for the simple comparison (common, unique). To keep track of those
very small frequencies, I anyway have to have dictionaries. I say
that again, how can I improve speed of dictionary indexing?
It doesn't matter here if I have 10E4 keys in dictionary or
10E8 keys in a dictionary.

What reason do you have for believing that the speed of dictionary
indexing is worth any bother at all to speed up?  Dict lookup is
generally considered to be extremely fast already.  If you're just
*guessing* that indexing is the bottleneck, you're probably guessing
wrong -- run a profiler to find out where the real bottleneck is.
For example, looking up a key with its value only once in the whole for
loop tells me
I don't need an index. Yes, I'll do this 4 times for those 4 languages, but
still I th

Re: Chicago Python Users Group: Thu 13 Jan Meeting

2005-01-10 Thread Steve Holden
Ian Bicking wrote:
The Chicago Python User Group, ChiPy, will have its next meeting on
Thursday, 13 January 2005, starting at 7pm.  For more information on
ChiPy see http://chipy.org
[...]
About ChiPy
---
We meet once a month, on the second Thursday of the month.  If you
can't come this month, please join our mailing list:
http://lonelylion.com/mailman/listinfo/chipy
Ian:
I am teaching class in Chicago the week of February 22 (I'll be in 
Schaumberg and available any evening from 21-24). Since that doesn't 
coincide with your regular schedule, I wonder if you'd like to mention 
that I'd be up for a beer and/or meal and a natter with any Chicago 
Pythonistas who happen to fancy it.

Anyone who so wishes can then email me.
regards
 Steve
--
Steve Holden   http://www.holdenweb.com/
Python Web Programming  http://pydish.holdenweb.com/
Holden Web LLC  +1 703 861 4237  +1 800 494 3119
--
http://mail.python.org/mailman/listinfo/python-list


Re: Port blocking

2005-01-10 Thread Ed Leafe
On Jan 10, 2005, at 8:00 PM, Steve Holden wrote:
Ah yes, but is there really? For example, I did a search of the TOC 
of GTK+ Reference Manual:
http://developer.gnome.org/doc/API/2.0/gtk/index.html
for the word "data", and there's apparently no widget which is 
explicitly tied to databases. So in GTKs case, for instance, it looks 
like one has to roll one's own solution, rather than just using one 
out of the box.
There isn't, IMHO, anything with the polish of (say) Microsoft Access, 
or even Microsoft SQL Server's less brilliant interfaces. Some things 
Microsoft *can* do well, it's a shame they didn't just stick to the 
knitting.
	Though it's certainly not anywhere near the polish of 
Access, you should check out Dabo. It's designed from the ground up to 
be a database application framework, and is on its way to achieving 
that goal. Right now you still have to do all the UI stuff in code, but 
we're just starting to develop the visual UI Designer. Stay 
tuned!

 ___/
/
   __/
  /
 /
 Ed Leafe
 http://leafe.com/
 http://dabodev.com/
--
http://mail.python.org/mailman/listinfo/python-list


Re: exceptions and items in a list

2005-01-10 Thread Steve Holden
vincent wehren wrote:
rbt wrote:
If I have a Python list that I'm iterating over and one of the objects 
in the list raises an exception and I have code like this:

try:
do something to object in list
except Exception:
pass
Does the code just skip the bad object and continue with the other 
objects in the list, or does it stop?

Thanks

Fire up a shell and try:
 >>> seq = ["1", "2", "a", "4", "5", 6.0]
 >>> for elem in seq:
 try:
print int(elem)
 except ValueError:
pass
and see what happens...
--
Vincent Wehren
I suspect the more recent versions of Python allow a much more elegant 
solution. I can't remember precisely when we were allowed to use 
continue in an except suite, but I know we couldn't in Python 2.1.

Nowadays you can write:
Python 2.4 (#1, Dec  4 2004, 20:10:33)
[GCC 3.3.3 (cygwin special)] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
 >>> for i in [1, 2, 3]:
 ...   try:
 ... print i
 ... if i == 2: raise AttributeError, "Bugger!"
 ...   except AttributeError:
 ... print "Caught exception"
 ... continue
 ...
1
2
Caught exception
3
 >>>
To terminate the loop on the exception you would use "break" instead of 
"continue".

regards
 Steve
--
Steve Holden   http://www.holdenweb.com/
Python Web Programming  http://pydish.holdenweb.com/
Holden Web LLC  +1 703 861 4237  +1 800 494 3119
--
http://mail.python.org/mailman/listinfo/python-list


Re: Writing huge Sets() to disk

2005-01-10 Thread Paul Rubin
Martin MOKREJŠ <[EMAIL PROTECTED]> writes:
> Yes, I'm. I still don't get what that acronym CLRS stands for ... :(

CLRS = the names of the authors, Cormen, Leiserson, Rivest, and Stein,
if I spelled those correctly.  :)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Port blocking

2005-01-10 Thread Paul Rubin
Mark Carter <[EMAIL PROTECTED]> writes:
> >>Also, is there a good tool for writing database UIs?
> > Yes, quite a few.
> 
> Ah yes, but is there really? For example, I did a search of the TOC of
> GTK+ Reference Manual:

Try looking on freshmeat or sourceforge instead.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Port blocking

2005-01-10 Thread Steve Holden
Ville Vainio wrote:
"Mark" == Mark Carter <[EMAIL PROTECTED]> writes:

Mark> Mark Carter wrote:
>> Paul Rubin wrote:
>>> Usually you wouldn't run a public corba or pyro service over
>>> the internet.  You'd use something like XMLRPC over HTTP port
>>> 80 partly for the precise purpose of not getting blocked by
>>> firewalls.
Mark> I'm not sure if we're talking at cross-purposes here, but
Mark> the application isn't intended for public consumption, but
Mark> for fee-paying clients.
Still, if the consumption happens over the internet there is almost
100% chance of the communication being prevented by firewalls.
This is exactly what "web services" are for.
I teach the odd security class, and what you say is far from true. As 
long as the service is located behind a firewall which opens up the 
correct holes for it, it's most unlikely that corporate firewalls would 
disallow client connections to such a remote port.

Web services are for offering services despite the fact that the 
corporate firewall managers are valiantly trying to stop unknown 
services from presenting to the outside world (and my immediately 
preceding post tells you what I think of that idea).

The situation is analogous to connecting to web servers running on 
non-standard ports (8000 and 8080 are traditional favorites, but 
firewalls very rarely accord them any special treatment).

Most firewall configurations allow fairly unrestricted outgoing 
connections, limiting rules to sanity checking of addresses to ensure 
nobody inside the firewall is address spoofing. Incoming connections are 
usually limited to specific combinations of port number and IP address 
known to be legitimate corporate services to the external world. 
Firewalling web services effectively is just an additional pain for the 
network manager.

regards
 Steve
--
Steve Holden   http://www.holdenweb.com/
Python Web Programming  http://pydish.holdenweb.com/
Holden Web LLC  +1 703 861 4237  +1 800 494 3119
--
http://mail.python.org/mailman/listinfo/python-list


Re: Writing huge Sets() to disk

2005-01-10 Thread Martin MOKREJŠ
Paul Rubin wrote:
Paul Rubin  writes:
handle with builtin Python operations without putting some thought
into algorithms and data structures.  From "ribosome" I'm guessing
you're doing computational biology.  If you're going to be writing
Well, trying, sort of ... Not very successful so far. ;)
code for these kinds of problems on a regular basis, you probably
ought to read a book or two on the topic.  "CLRS" is a good choice:
 http://theory.lcs.mit.edu/~clr/
 http://mitpress.mit.edu/algorithms/

Correction to unclarity: I meant a book on the topic of algorithms and
data structures (e.g. CLRS).  Since you're presumably already a
biologist, I wouldn't presume to suggest that you read a book on
biology ;-).
Yes, I'm. I still don't get what that acronym CLRS stands for ... :(
--
http://mail.python.org/mailman/listinfo/python-list


Re: Port blocking

2005-01-10 Thread Steve Holden
Mark Carter wrote:
Paul Rubin wrote:
Mark Carter <[EMAIL PROTECTED]> writes:
Supposing I decide to write a server-side application using something
like corba or pyro.

Usually you wouldn't run a public corba or pyro service over the
internet.  You'd use something like XMLRPC over HTTP port 80 partly
for the precise purpose of not getting blocked by firewalls.

Although, when you think about it, it kinda defeats the purposes of 
firewalls. Not that I'm criticising you personally, you understand.

Yet another brilliant Microsoft marketing concept: "Shit, these bloody 
firewalls are getting in the way of our new half-baked ideas for 
application architectures to replace all that funky not-invented-here 
open source stuff we can't charge money for. Let's design something that 
completely screws up existing firewall strategies, then we can charge 
people extra to firewall the new stuff after we've hooked them all on 
yet another inferior execution of existing ideas".

Also, is there a good tool for writing database UIs?

Yes, quite a few.

Ah yes, but is there really? For example, I did a search of the TOC of 
GTK+ Reference Manual:
http://developer.gnome.org/doc/API/2.0/gtk/index.html
for the word "data", and there's apparently no widget which is 
explicitly tied to databases. So in GTKs case, for instance, it looks 
like one has to roll one's own solution, rather than just using one out 
of the box.
There isn't, IMHO, anything with the polish of (say) Microsoft Access, 
or even Microsoft SQL Server's less brilliant interfaces. Some things 
Microsoft *can* do well, it's a shame they didn't just stick to the 
knitting.

regards
 Steve
--
Steve Holden   http://www.holdenweb.com/
Python Web Programming  http://pydish.holdenweb.com/
Holden Web LLC  +1 703 861 4237  +1 800 494 3119
--
http://mail.python.org/mailman/listinfo/python-list


Re: python3: 'where' keyword

2005-01-10 Thread Steve Holden
Paul Rubin wrote:
"Carl Banks" <[EMAIL PROTECTED]> writes:
When I asked you to do this, it was just a rhetorical way to tell you
that I didn't intend to play this game.  It's plain as day you're
trying to get me to admit something.  I'm not falling for it.
If you have a point to make, why don't you just make it?

You asked me to compare the notion of macros with the Zen list.  I did
so.  I didn't see any serious conflict, and reported that finding.
Now you've changed your mind and you say you didn't really want me to
make that comparison after all.
Well I for one disagreed with many of your estimates of the zen's 
applicability to macros, but I just couldn't be arsed saying so.

An amazing amount of the headaches that both newbies and experienced
users have with Python, could be solved by macros.  That's why there's
been an active interest in macros for quite a while.  It's not clear
what the best way to do design them is, but their existence can have a
profound effect on how best to do these ad-hoc syntax extensions like
"where".  Arbitrary limitations that are fairly harmless without
macros become a more serious pain in the neck if we have macros.
This is not a justifiable assertion, IMHO, and if you think that newbies 
will have their lives made easier by the addition of ad hoc syntax 
extensions then you and I come from a different world (and I suspect the 
walls might be considerably harder in mine than in yours).

So, we shouldn't consider these topics separately from each other.
They are likely to end up being deeply related.
I don't really understand why, if macros are so great (and you are 
reading the words of one who was using macros back in the days of 
Strachey's GPM) why somebody doesn't produce a really useful set of 
(say) M4 macros to prove how much they'd improve Python.

Now that's something that would be a bit less ignorable than this 
apparently interminable thread.

regards
 Steve
PS: Your continued use of the NOSPAM.invalid domain is becoming much 
more irritating than your opinions on macros in Python. Using a bogus 
URL is piling crap on top of more crap.
--
Steve Holden   http://www.holdenweb.com/
Python Web Programming  http://pydish.holdenweb.com/
Holden Web LLC  +1 703 861 4237  +1 800 494 3119
--
http://mail.python.org/mailman/listinfo/python-list


Re: Securing a future for anonymous functions in Python

2005-01-10 Thread Steve Holden
Anna wrote:
You cut something from that...
"""It's not, after all, the word "lambda" itself; I would still
have some issues with using, say "function", instead of "lambda", but
at least then I would immediately know what I was looking at..."""
I would have fewer ambiguities about using, say "func" rather than
lambda. Lambda always makes me feel like I'm switching to some *other*
language (specifically, Greek - I took a couple of semesters of Attic
Greek in college and quite enjoyed it.) But, the fact that lambda
Good God, you mean there's a language just for the attic? Those Greeks 
certainly believed in linguistic specialization, didn't they?

doesn't MEAN anything  (and has come - I mean - DELTA at least has a
fairly commonly understood meaning, even at high-school level math.
But, lambda? If it was "func" or "function" or even "def", I would be
happier. At least that way I'd have some idea what it was supposed to
be...
Well, I suspect that Church originally chose lambda precisely because of 
its meaninglessness, and I'm always amused when mathematical types try 
to attribute an intuitive meaning to the word. It's purely a learned 
association, which some arrogantly assume simply *everyone* knows or 
should know.

Not that I'm trying to write off lambda supporters as arrogant (though I 
*do* have a suspicion that many of them break the wrong end of their 
boiled eggs).

BTW - I am *quite* happy with the proposal for "where:" syntax - I
think it handles the problems I have with lambda quite handily. 

Whereas I find it to be an excrescence, proving (I suppose) that one 
man's meat is another person's poison, or something.

regards
 Steve
[who only speaks Ground Floor English]
--
Steve Holden   http://www.holdenweb.com/
Python Web Programming  http://pydish.holdenweb.com/
Holden Web LLC  +1 703 861 4237  +1 800 494 3119
--
http://mail.python.org/mailman/listinfo/python-list


Re: Writing huge Sets() to disk

2005-01-10 Thread Paul Rubin
Paul Rubin  writes:
> handle with builtin Python operations without putting some thought
> into algorithms and data structures.  From "ribosome" I'm guessing
> you're doing computational biology.  If you're going to be writing
> code for these kinds of problems on a regular basis, you probably
> ought to read a book or two on the topic.  "CLRS" is a good choice:
> 
>   http://theory.lcs.mit.edu/~clr/
>   http://mitpress.mit.edu/algorithms/

Correction to unclarity: I meant a book on the topic of algorithms and
data structures (e.g. CLRS).  Since you're presumably already a
biologist, I wouldn't presume to suggest that you read a book on
biology ;-).
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python & unicode

2005-01-10 Thread Michel Claveau - abstraction méta-galactique non triviale en fuite perpétuelle.
Hi !

>>> and plain Latin letters

But not all letters  (no : é à ç à ê ö ñ  etc.)



Therefore, Python's support of Unicode is... limited.



Good night
-- 
Michel Claveau





-- 
http://mail.python.org/mailman/listinfo/python-list


Re: exceptions and items in a list

2005-01-10 Thread Peter Hansen
rbt wrote:
Andrey Tatarinov wrote:
# skip bad object and continue with others
for object in objects:
try:
#do something to object
except Exception:
pass
Thanks Andrey. That's a great example of how to do it.
Actually, it's not really a "great" example, since it catches
_all_ exceptions that might happen, and quietly ignores
them.  For example, in the following code, you might not
realize that you've made a typo and none of the items in
the list are actually being checked, even though there is
obviously no problem with any of these simple items
(integers from 0 to 9).
def fornat_value(val):
return '-- %5d --' % val
L = range(10)
for val in L:
try:
print format_value(val)
except Exception:
pass
It's almost always better to identify the *specific*
exceptions which you are expecting and to catch those,
or not do a simple "pass".  See Vincent's response,
for example, where he explicitly asks only for ValueErrors
and lets others propagate upwards, to be reported.
(I realize these were contrived examples, but examples
that don't mention this important issue risk the propagation
of buggy code...)
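A minimal sketch of that narrower style (the values here are made up; note that a misspelled name would now surface as a NameError instead of vanishing):

```python
def format_value(val):
    return '-- %5d --' % val

results, bad = [], []
for item in ['3', 'x', '7']:
    try:
        results.append(format_value(int(item)))
    except ValueError:
        # only the expected bad-input error is swallowed;
        # anything else (e.g. a typo'd function name) still propagates
        bad.append(item)

print(results)
print(bad)    # ['x']
```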
-Peter
--
http://mail.python.org/mailman/listinfo/python-list


Re: Writing huge Sets() to disk

2005-01-10 Thread Paul Rubin
Martin MOKREJŠ <[EMAIL PROTECTED]> writes:
> >>  I have sets.Set() objects having up to 20E20 items,
>   just imagine, you want to compare how many words are in English, German,
> Czech, Polish dictionary. You collect words from every language and record
> them in dict or Set, as you wish.
> 
>   Once you have those Set's or dict's for those 4 languages, you ask
> for common words and for those unique to Polish. I have no estimates
> of real-world numbers, but we might be in range of 1E6 or 1E8?
> I believe in any case, huge.

They'll be less than 1e6 and so not huge by the standards of today's
computers.  You could use ordinary dicts or sets.

1e20 is another matter.  I doubt that there are any computers in the
world with that much storage.  How big is your dataset REALLY?

>   I wanted to be able to get a list of words NOT found in say Polish,
> and therefore wanted to have a list of all, theoretically existing words.
> In principle, I can drop this idea of having ideal, theoretical lexicon.
> But have to store those real-world dictionaries anyway to hard drive.

One way you could do it is by dumping all the words sequentially to
disk, then sorting the resulting disk file using an O(n log n) offline
algorithm.
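For modest sizes the same dump-sort-dedupe pipeline can be sketched entirely in Python (the file name is made up); a true offline sort would merge sorted chunks, but the interface is the same -- sorted lines in, unique lines out:

```python
import os
import tempfile

words = ['pear', 'apple', 'pear', 'fig', 'apple']

# dump all the words sequentially to disk, one per line
path = os.path.join(tempfile.mkdtemp(), 'words.txt')
with open(path, 'w') as f:
    for w in words:
        f.write(w + '\n')

# read back, sort, and eliminate duplicates
with open(path) as f:
    unique = sorted(set(line.strip() for line in f))

print(unique)   # ['apple', 'fig', 'pear']
```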

Basically data sets of this size are outside of what you can easily
handle with builtin Python operations without putting some thought
into algorithms and data structures.  From "ribosome" I'm guessing
you're doing computational biology.  If you're going to be writing
code for these kinds of problems on a regular basis, you probably
ought to read a book or two on the topic.  "CLRS" is a good choice:

  http://theory.lcs.mit.edu/~clr/
  http://mitpress.mit.edu/algorithms/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Writing huge Sets() to disk

2005-01-10 Thread John Lenton
On Tue, Jan 11, 2005 at 12:33:42AM +0200, Simo Melenius wrote:
> "John Lenton" <[EMAIL PROTECTED]> writes:
> 
> > you probably want to look into building set-like objects ontop of
> > tries, given the homogeneity of your language. You should see
> > improvements both in size and speed.
> 
> Ternary search trees give _much_ better space-efficiency compared to
> tries, at the expense of only slightly slower build and search time.
> This is especially essential as the OP mentioned he could have huge
> sets of data.

hmm! sounds like *some*body needs to go read up on ternary trees,
then.

Ok, ok, I'm going...

-- 
John Lenton ([EMAIL PROTECTED]) -- Random fortune:
Fortune finishes the great quotations, #6

"But, soft!  What light through yonder window breaks?"
It's nothing, honey.  Go back to sleep.


-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Python Operating System???

2005-01-10 Thread Paul Rubin
"Roose" <[EMAIL PROTECTED]> writes:
> Well, then of course you know I have to say:  An OS does not run inside a
> browser.  There's a sentence I never thought I'd utter in my lifetime.
> 
> So that is an irrelevant example, since it obviously isn't a task scheduler
> in the context of this thread.

Huh?  I'm just baffled why you think writing a scheduler in an OS is
harder than writing one in an application.  You have some means of
doing a coroutine switch in one situation, and some means of doing a
hardware context switch in the other.  Aside from that the methods are
about the same.

Why do you think there's anything difficult about doing this stuff in
Python, given the ability to call low level routines for some hardware
operations as needed?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: time module precision

2005-01-10 Thread Peter Hansen
[EMAIL PROTECTED] wrote:
Peter Hansen wrote:
_Why_ do you want to wait such brief amounts of time?
What I am trying to do is sending binary data to a serial port. Since
the device attached to the port cannot handle a continous in-flow of
data, I need to make an artificial tiny delay in-between data chunks(a
few hundreds of KBs). The delay might be a few tens to hundreds of us.
I guess you've checked that your situation can't take advantage
of either hardware handshaking (e.g. RTS/CTS) or software handshaking
(Xon/Xoff) to do flow control.
Something doesn't quite feel right in your description, however,
but it could simply be because the sorts of devices we work with
are quite different.  I'm very surprised to hear about a device
that can manage to absorb *hundreds of KBs* of data without any
handshaking, and yet after that much data suddenly manages to
have a hiccup on the order of mere microseconds before it can
take more data.
Also, is your data rate so high that a few hundred microseconds
represents a noticeable delay?  I'm rarely dealing with data
rates higher than, say, 38400bps, where hundreds of KBs would take
on the order of minutes, so 100us is effectively instantaneous
in my world.
I calculate that your data rate would have to be higher than
307Mbps (a third of a gigabit per second) before 100us would
represent a large enough delay (measured in bits rather than
time) that you would be bothered by it... (defined somewhat
arbitrarily as more than 2% of the time required to send
100KB of data at your data rate).
I'd like to make the transmission as fast as possible, uh.. well..
reasonably fast.
I suspect it will be, if you simply do a time.sleep(0.001) and
pretend that's shorter than it really is...  since WinXP
is not a realtime OS, sometimes that could take as long as
hundreds of milliseconds, but it's very unlikely that anyone
will ever notice.
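If in doubt, it's easy to measure what a nominal 1 ms sleep actually costs on a given box (a measurement sketch only; the numbers will vary with OS and load):

```python
import time

def measure_sleep(requested, trials=20):
    """Average wall-clock duration of time.sleep(requested)."""
    total = 0.0
    for _ in range(trials):
        start = time.time()
        time.sleep(requested)
        total += time.time() - start
    return total / trials

actual = measure_sleep(0.001)
print("asked for 1.000 ms, got %.3f ms on average" % (actual * 1000))
```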
-Peter
--
http://mail.python.org/mailman/listinfo/python-list


Re: Writing huge Sets() to disk

2005-01-10 Thread Scott David Daniels
Martin MOKREJŠ wrote:
> But I don't think I can use one-way hashes, as I need to reconstruct
> the string later. I have to study hard to get an idea what the proposed
> code really does.
Scott David Daniels wrote:
Tim Peters wrote:
 Call the set of all English words E; G, C, and P similarly.
This Python expression then gives the set of words common to all 4:
E & G & C & P
and for those unique to Polish.
P -  E - G  - C
One attack is to define a hash to get sizes smaller.  Suppose you find
your word sets are 10**8 large, and you find you need to deal with sets
of size 10**6.  Then if you define a hash that produces an integer below
100, you'll find:
E & G & C & P == Union(E_k & G_k & C_k & P_k) for k in range(100)
P -  E - G  - C == Union(P_k - E_k - G_k - C_k) for k in range(100)
where X_k = [v for v in X if somehash(v) == k]
I don't understand here what would be the result from the hashing 
function. :(  ... Can you clarify this please in more detail?
The trick is to build the X_k values into files.
For example:
for base in range(0, 100, 10):
# work in blocks of 10 (or whatever)
for language in ('English', 'German', 'Czech', 'Polish'):
source = open(language)
dests = [open('%s_%s.txt' % (language.lower(), base + n), 'w')
 for n in range(10)]
for word in source:  # Actually this is probably more involved
code = somehash(word) - base
if 0 <= code < base:
dests[code].write(word + '\n')
for dest in dests:
dest.close()
source.close()
After running the above code, you get 400 files with names like, for
example, 'english_33.txt'.  'english_33.txt' contains only words in
English which hash to 33.  Then you need to sort the different files
(like 'english_33.txt') before using them in the next phase.  If you
can sort and dupe-elim in the same step, do that and get rid of the
calls to dupelim below.  Otherwise use dupelim below when reading the
sorted files.  If you must, you might want to do the sort and
dupe-elim in Python:
for language in ('english', 'german', 'czech', 'polish'):
for hashcode in range(100):
filename = '%s_%s.txt' % (language, hashcode)
source = open(filename)
lines = sorted(source)
source.close()
dest = open(filename, 'w')
for line in dupelim(lines):
dest.write(line)
dest.close()
>> def inboth(left, right): 
>> def leftonly(left, right): 
Aaaah, so the functions just walk one line in left, one line in right, 
if values don't match the value in left is unique, it walks again one
line in left and checks if it already matches the value in right file
in the last position, and so on until it finds the same value in the right file?
Exactly, you must make sure you deal with cases where you pass values,
match values, and run out of source -- that keeps these from being four-
liners.
>> For example:
>>
>> Ponly = open('polishonly.txt', 'w')
>> every = open('every.txt', 'w')
>> for hashcode in range(100):
>>English = open('english_%s.txt' % hashcode)
> ^^ this is some kind of eval?
If  hashcode == 33  then 'english_%s.txt' % hashcode == 'english_33.txt'
So we are just finding the language-specific for-this-hash source.
>>... for unmatched in leftonly(leftonly(leftonly(dupelim(Polish),
>> dupelim(English)), dupelim(German)), dupelim(Czech)):
>>Ponly.write(unmatched)
>> for source in (Polish, English, German, Czech):#(paraphrased)
>> source.seek(0)
>> for match in inboth(inboth(dupelim(Polish), dupelim(English)),
>>   inboth(dupelim(German), dupelim(Czech))):
>> every.write(match)
>> for source in (Polish, English, German, Czech):#(paraphrased)
>> source.close()
>> Ponly.close()
>> every.close()
--Scott David Daniels
[EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list


Handing a number of methods to the same child class

2005-01-10 Thread Dave Merrill
Python newb here. Say a class contains some rich attributes, each defined as
a class. If an instance of the parent class recieves a call to a method
belonging to one of those attributes, it should be dispatched to the
corresponding child class.

Somewhat silly example:

class Address:
    def __init__(self):
        self.displayed_name = ''
        self.adr = ''
        self.city = ''
        self.state = ''
    def set_name(self, name):
        self.displayed_name = name
    def set_adr(self, adr):
        self.adr = adr
    def set_city(self, city):
        self.city = city
    def set_state(self, state):
        self.state = state

class Phone:
    def __init__(self):
        self.displayed_name = ''
        self.number = ''
    def set_name(self, name):
        self.displayed_name = name
    def set_number(self, number):
        self.number = number

class Customer:
    def __init__(self):
        self.last_name = ''
        self.first_name = ''
        self.adr = Address()
        self.phone = Phone()
    def set_adr_name(self, name):
        self.adr.set_name(name)
    def set_adr_adr(self, adr):
        self.adr.set_adr(adr)
    def set_adr_city(self, city):
        self.adr.set_city(city)
    def set_adr_state(self, state):
        self.adr.set_state(state)
    def set_phone_name(self, name):
        self.phone.set_name(name)
    def set_phone_number(self, number):
        self.phone.set_number(number)

IOW, all the adr methods go to the corresponding method in self.adr, all the
phone methods go to self.phone, theorectically etc for other rich
attributes.

What I'd really like is to say, "the following list of methods pass all
their arguments through to a method of the same name in self.adr, and the
following methods do the same but to self.phone." Is there some sane way to
express that in python?

Callers should stay ignorant about the internal structure of customer
objects; they should be talking only to the customer object itself, not its
components. Customer objects should stay ignorant of the internal structure
of addresses and phones; they should let those objects handle their own
implementation of the methods that apply to them.

What customer objects need to do is call the appropriate internal object for
each incoming method. How would you implement this? It's unfortunate to have
to create individual passthrough methods for everything, like the above,
among other reasons because it makes customer objects have to know about
each new method implemented by the objects they contain.
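One common answer (a sketch with only the phone part, and hypothetical prefix names) is to drop the pass-through methods and route unknown attribute lookups with __getattr__, so Customer stays ignorant of whatever new methods its components grow:

```python
class Phone:
    def __init__(self):
        self.displayed_name = ''
        self.number = ''
    def set_name(self, name):
        self.displayed_name = name
    def set_number(self, number):
        self.number = number

class Customer:
    def __init__(self):
        self.phone = Phone()
    def __getattr__(self, name):
        # called only when normal lookup fails: forward
        # set_phone_xxx to self.phone.set_xxx; an Address
        # would get a ('set_adr_', self.adr) entry the same way
        for prefix, target in [('set_phone_', self.phone)]:
            if name.startswith(prefix):
                return getattr(target, 'set_' + name[len(prefix):])
        raise AttributeError(name)

c = Customer()
c.set_phone_number('555-0100')   # dispatched to c.phone.set_number
c.set_phone_name('home')
print(c.phone.number, c.phone.displayed_name)
```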

Am I making sense? Thanks,

Dave Merrill


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: complex numbers

2005-01-10 Thread It's me
For those of us who work with complex numbers, having complex numbers as a
natively supported data type is a big advantage.  Non-native add-ons are not
sufficient and lead to very awkward program code.
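For example, with native support the arithmetic reads exactly like the math:

```python
z1 = 3 + 4j            # complex literals are built into the language
z2 = complex(1, -2)

print(z1 * z2)         # (11-2j)
print(abs(z1))         # 5.0, the modulus
print(z1.real, z1.imag, z1.conjugate())
```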


"Jürgen Exner" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> [EMAIL PROTECTED] wrote:
> > #python supports complex numbers.
> [...]
>
> So?
>

The world would come to a halt if all of a sudden nobody understands complex
numbers anymore.  :-)

> > # Perl doesn't support complex numbers. But there are packages that
> > supports it.
>
> The Math::Complex module is part of the standard installation already, no
> need for any "packages" (whatever that might be).
> Did you check "perldoc Math::Complex"
>
> NAME
> Math::Complex - complex numbers and associated mathematical functions
> [...]
>
> jue
>
>


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python & unicode

2005-01-10 Thread Leif K-Brooks
John Roth wrote:
It doesn't work because Python scripts must be in ASCII except for
the contents of string literals. Having a function name in anything
but ASCII isn't supported.
To nit-pick a bit, identifiers can be in Unicode; they're simply 
confined to digits and plain Latin letters.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Old Paranoia Game in Python

2005-01-10 Thread McBooCzech
Newbie in Python.
I copied the whole script from the web and saved it as para1.py. I
downloaded the pyparsing module and saved it to
C:\\Python23\\Lib\\pyparsing122.
Then I ran the following script:

import sys
sys.path.append('C:\\Python23\\Lib\\pyparsing122')

from pyparsing import *
extraLineBreak = White(" ",exact=1) + LineEnd().suppress()
text = file("Para1.py").read()
newtext = extraLineBreak.transformString(text)
file("para2.py","w").write(newtext)

I tried to run the para2.py script, but got the following message:

File "para2.py", line 169
choose(4,"You give your correct clearance",5,"You lie and claim
^
SyntaxError: EOL while scanning single-quoted string

So my questions are:
Why didn't pyparsing correct the script?
What I am doing wrong?
Is it necessary to correct the script by hand anyway?

Petr

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: static compiled python modules

2005-01-10 Thread Thomas Linden
Nick Coghlan wrote:
> http://www.python.org/dev/doc/devel/api/importing.html
> Take a look at the last three entries about registering builtin modules.

Thanks a lot, it works!


regards, Tom
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Writing huge Sets() to disk

2005-01-10 Thread Tim Peters
[Tim Peters]
>> As I mentioned before, if you store keys in sorted text files,
>> you can do intersection and difference very efficiently just by using
>> the Unix `comm` utility.

[Martin MOKREJŠ]
> Now I got your point. I understand the comm(1) is written in C, but it still
> has to scan file1 once and file2 n-times, where n is a number of lines
> in file1, right? Without any index ... I'll consider it, actually will test,
> thanks!

`comm` is much more efficient than that.  Note that the input files
have to be sorted.  Then, in a single pass over both files (not 2
passes, not 3, not n, just 1), it can compute any subset of these
three (do `man comm`):

1. The words common to both files.
2. The words unique to "the first" file.
3. The words unique to "the second" file.

It's essentially just doing a merge on two sorted lists, and how it
works should be obvious if you give it some thought.  It takes time
proportional to the sum of the lengths of the input files, and nothing
*can* be faster than that.
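The heart of that merge is easy to sketch in Python: one cursor per sorted input, always advance the smaller side, and each element is examined exactly once:

```python
def comm(a, b):
    """Given two sorted sequences, return (only_a, common, only_b)
    in a single simultaneous pass -- what `comm` does with files."""
    only_a, common, only_b = [], [], []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            only_a.append(a[i]); i += 1
        elif a[i] > b[j]:
            only_b.append(b[j]); j += 1
        else:
            common.append(a[i]); i += 1; j += 1
    only_a.extend(a[i:])    # whatever is left is unique to its side
    only_b.extend(b[j:])
    return only_a, common, only_b

print(comm(['cat', 'dog', 'emu'], ['ant', 'dog']))
```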

> I was really hoping I'll get an answer how to alter the indexes for 
> dictionaries
> in python.

Sorry, I don't have a guess for what that might mean.

> You convinced me not to even try to construct to theoretical dictionary,
> as it will take ages just to create. Even if I'd manage, I couldn't
> save it (the theoretical and possibly not even the dict(theoret) - dict(real)
> result).

Worse, it didn't sound like a good approach even if you could save it.

> Still, before I give the whole project, once more:
> 
> I'll parse some text files, isolates separate words and add them to
> either Set(), list, dict, flatfile line by line.
>
> Depending on the above, I'll compare them and look for those unique
> to some "language". I need to keep track of frequencies used
> in every such language,

Frequencies of what?  "A language" here can contain some words
multiple times each?

> so the dict approach would be the best.  The number stored as a value would
> be a float ^H^H^H^H^H^H Decimal() type - very small number.

Integers are the natural type for counting, and require less memory
than floats or decimals.

> Once more, I expect to have between E4 or E5 to E8??? words
> stored in 20 dictionaries (remember words of sizes in range 1-20?
> Every of those 20 dictionaries should be, I believe, indexed just once.
> The indexing method should know all entries in a given file are of same
> size, i.e. 5 chars, 15 chars, 20 chars etc.

I think you're making this more complicated than it needs to be.

> I already did implement the logic to walk through those 20 different
> dictionaries from language a and from language b and find uout those
> unique to a or common to both of them. Why I went to ask on this list
> was to make sure I took right approach. Sets seemed to be better solution
> for the simple comparison (common, unique). To keep track of those
> very small frequencies, I anyway have to have dictionaries. I say
> that again, how can I improve speed of dictionary indexing?
> It doesn't matter here if I have 10E4 keys in dictionary or
> 10E8 keys in a dictionary.

What reason do you have for believing that the speed of dictionary
indexing is worth any bother at all to speed up?  Dict lookup is
generally considered to be extremely fast already.  If you're just
*guessing* that indexing is the bottleneck, you're probably guessing
wrong -- run a profiler to find out where the real bottleneck is.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Writing huge Sets() to disk

2005-01-10 Thread Martin MOKREJŠ
Simo Melenius wrote:
"John Lenton" <[EMAIL PROTECTED]> writes:

you probably want to look into building set-like objects ontop of
tries, given the homogeneity of your language. You should see
improvements both in size and speed.

Ternary search trees give _much_ better space-efficiency compared to
tries, at the expense of only slightly slower build and search time.
This is especially essential as the OP mentioned he could have huge
sets of data.
Hi Simo and John,
would you please point me to some docs so I learn what you are talking about? ;)
Many thanks!
Martin
--
http://mail.python.org/mailman/listinfo/python-list


Re: Writing huge Sets() to disk

2005-01-10 Thread Simo Melenius
"John Lenton" <[EMAIL PROTECTED]> writes:

> you probably want to look into building set-like objects ontop of
> tries, given the homogeneity of your language. You should see
> improvements both in size and speed.

Ternary search trees give _much_ better space-efficiency compared to
tries, at the expense of only slightly slower build and search time.
This is especially essential as the OP mentioned he could have huge
sets of data.
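For the curious, a minimal ternary search tree: each node stores one character plus three links (less / equal / greater), so common prefixes share nodes without a trie's per-node child array. A sketch only, not tuned for data sets of the OP's size:

```python
class TSTNode:
    __slots__ = ('char', 'lo', 'eq', 'hi', 'is_word')
    def __init__(self, char):
        self.char = char
        self.lo = self.eq = self.hi = None
        self.is_word = False

def tst_insert(node, word, i=0):
    char = word[i]
    if node is None:
        node = TSTNode(char)
    if char < node.char:
        node.lo = tst_insert(node.lo, word, i)
    elif char > node.char:
        node.hi = tst_insert(node.hi, word, i)
    elif i + 1 < len(word):
        node.eq = tst_insert(node.eq, word, i + 1)
    else:
        node.is_word = True
    return node

def tst_contains(node, word, i=0):
    while node is not None:
        char = word[i]
        if char < node.char:
            node = node.lo
        elif char > node.char:
            node = node.hi
        elif i + 1 < len(word):
            node = node.eq
            i += 1
        else:
            return node.is_word
    return False

root = None
for w in ['cat', 'cap', 'dog']:
    root = tst_insert(root, w)
print(tst_contains(root, 'cap'), tst_contains(root, 'ca'))  # True False
```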


br,
S
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python & unicode

2005-01-10 Thread John Roth
It doesn't work because Python scripts
must be in ASCII except for the
contents of string literals. Having a function
name in anything but ASCII isn't
supported.
John Roth
"Michel Claveau - abstraction méta-galactique non triviale en fuite 
perpétuelle." <[EMAIL PROTECTED]> wrote in 
message news:[EMAIL PROTECTED]
Hi !

If  Python is Ok with Unicode, why the next script not run ?
# -*- coding: utf-8 -*-
def Ñ(toto):
   return(toto*3)
aret = Ñ(4)




@-salutations
--
Michel Claveau

--
http://mail.python.org/mailman/listinfo/python-list


Re: C structure in the Python extension

2005-01-10 Thread Craig Ringer
Dave win wrote:

>Howdy:
>   When I was writting interface functions of the extending python,  I
>meet a question. As I using the  "PyArg_ParseTuple(args,arg_type,...)"
>function call, if I wanna use the personal defined argument, such as the
>C structure which I made. How to make it?
>
>static PyObject* Call_V_ABSUB(PyObject *self, PyObject* args){
>  myStruct FU;
>  myStruct result;
>  if(!PyArg_ParseTuple(args,"O&",&FU)) return NULL;
>^^^
>How to modify here???
>  V_ABSUB(FU);
>
>  return Py_BuildValue("i",result);
>}
>  
>
You can't, really. Python code can't work with C structs directly, so it
can't pass you one. I have used one utterly hideous hack to do this when
prototyping code in the past, but that was before I knew about
PyCObject. PyCObject is also pretty evil if abused, but nowhere NEAR on
the scale of what I was doing. Can you say passing pointers around as
Python longs? Can't bring yourself to say it? Don't blame you. My only
defense was that it was quick hack prototype code, and that the Python/C
API for making classes is too painful to use when quickly prototyping
things.

To do what you want, you could encapsulate a pointer to the struct in a
PyCObject, then pass that around. Your Python code will just see a
PyCObject with no attributes or methods; it can't do anything to it
except pass it to other Python code (and delete it, but that'll result
in a memory leak if the PyCObject holds the only pointer to the struct).
Your C code can extract the pointer to the struct and work with that. DO
NOT do this if the Python code just deleting the PyCObject could ever
discard the last pointer to the struct, as you'll leak the struct.

A much BETTER option is probably to rewrite myStruct to be a Python type
("class") implemented in C, and provide it with both a C and Python API.
This isn't too hard, though the Python/C API does make creating types a
bit cumbersome. (Most of this seems to be because you're playing
pretend-we-have-objects in C, rather than issues specific to the
Python/C API).

--
Craig Ringer


-- 
http://mail.python.org/mailman/listinfo/python-list


Python & unicode

2005-01-10 Thread Michel Claveau - abstraction méta-galactique non triviale en fuite perpétuelle.
Hi !

If  Python is Ok with Unicode, why the next script not run ?

# -*- coding: utf-8 -*-
def Ñ(toto):
    return(toto*3)
aret = Ñ(4)

@-salutations
--
Michel Claveau
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Writing huge Sets() to disk

2005-01-10 Thread Martin MOKREJŠ
Dear Scott,
 thatnk you for you excellent email. I also thought about
using some zip() function to compress the strings first before
using them as keys in a dict.
But I don't think I can use one-way hashes, as I need to reconstruct
the string later. I have to study hard to get an idea
what the proposed code really does.
Scott David Daniels wrote:
Tim Peters wrote:
[Martin MOKREJŠ]
just imagine, you want to compare how many words are in English, German,
Czech, Polish dictionary. You collect words from every language and 
record
them in dict or Set, as you wish.

Call the set of all English words E; G, C, and P similarly.
Once you have those Set's or dict's for those 4 languages, you ask
for common words

This Python expression then gives the set of words common to all 4:
E & G & C & P
and for those unique to Polish.
P -  E - G  - C

One attack is to define a hash to get sizes smaller.  Suppose you find
your word sets are 10**8 large, and you find you need to deal with sets
of size 10**6.  Then if you define a hash that produces an integer below
100, you'll find:
E & G & C & P == Union(E_k & G_k & C_k & P_k) for k in range(100)
P -  E - G  - C == Union(P_k - E_k - G_k - C_k) for k in range(100)
where X_k = [v for v in X if somehash(v) == k]
I don't understand here what would be the result from the hashing function. 
:(
Can you clarify this please in more detail? P is the largest dictionary
in this example, or was it shifted by one to the right?
This means, you can separate the calculation into much smaller buckets,
and combine the buckets back at the end (without any comparison on the
combining operations).
Yes.
For calculating these two expressions, you can dump values out to a
file per hash per language, sort and dupe-elim the contents of the
various files (maybe dupe-elim on the way out).  Then hash-by-hash,
you can calculate parts of the results by combining iterators like
inboth and leftonly below on iterators producing words from files.
def dupelim(iterable):
source = iter(iterable)
former = source.next()  # Raises StopIteration if empty
yield former
for element in source:
if element != former:
yield element
former = element
def inboth(left, right):
'''Take ordered iterables and yield matching cases.'''
lsource = iter(left)
lhead = lsource.next()  # May StopIteration (and we're done)
rsource = iter(right)
for rhead in rsource:
while lhead < rhead:
lhead = lsource.next()
if lhead == rhead:
yield lhead
lhead = lsource.next()
def leftonly(left, right):
'''Take ordered iterables and yield matching cases.'''
lsource = iter(left)
rsource = iter(right)
try:
rhead = rsource.next()
except StopIteration: # empty right side.
for lhead in lsource:
yield lhead
else:
for lhead in lsource:
try:
while rhead < lhead:
rhead = rsource.next()
if lhead < rhead:
yield lhead
except StopIteration: # Ran out of right side.
yield lhead
for lhead in lsource:
yield lhead
break
Aaaah, so the functions just walk one line in left, one line in right; if
values don't match, the value in left is unique, it walks again one line in
left and checks if it already matches the value in the right file in the
last position, and so on until it finds the same value in the right file?
So, it doesn't scan the file on right n-times, but only once?
Yes, nice thing. Then I really don't need an index and finally I really believe
flatfiles will do just fine.

Real-world dictionaries shouldn't be a problem.  I recommend you store
each as a plain text file, one word per line.  Then, e.g., to convert
that into a set of words, do
f = open('EnglishWords.txt')
set_of_English_words = set(f)
f.close()
You'll have a trailing newline character in each word, but that
doesn't really matter.
Note that if you sort the word-per-line text files first, the Unix
`comm` utility can be used to perform intersection and difference on a
pair at a time with virtually no memory burden (and regardless of file
size).
In fact, once you've sorted these files, you can use the iterators
above to combine those sorted files.
For example:
Ponly = open('polishonly.txt', 'w')
every = open('every.txt', 'w')
for hashcode in range(100):
English = open('english_%s.txt' % hashcode)
^^ this is some kind of eval?
German = open('german_%s.txt' % hashcode)
Czech = open('czech_%s.txt' % hashcode)
Polish = open('polish_%s.txt' % hashcode)
for unmatched in leftonly(leftonly(leftonly

Re: Referenz auf Variable an Funktion übergeben?

2005-01-10 Thread "Martin v. Löwis"
Torsten Mohr wrote:
> Does something like this also work in Python?
Not directly. The convention is for functions that produce results
(return values) to do so via return:
def vokale(string):
    result = [c for c in string if c in "aeiou"]
    return "".join(result)
x = "Hallo, Welt"
x = vokale(x)
If you want to change several strings, you simply return several
values:
def welt_anhaengen(a, b):
    return a+"Hallo", b+"Welt"
x = "foo"
y = "bar"
x,y = welt_anhaengen(x,y)
> Could something like that work with weakref?
No. If you really want to modify the argument, you have to pass an
object that can be modified, for instance a list containing just one
string.
def welt_anhaengen_2(a,b):
    a[0] += "Hallo"
    b[0] += "Welt"
a = ["foo"]
b = ["bar"]
welt_anhaengen_2(a,b)
Ciao,
Martin
P.S. comp.lang.python is actually in English.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Writing huge Sets() to disk

2005-01-10 Thread Scott David Daniels
Tim Peters wrote:
[Martin MOKREJŠ]
just imagine, you want to compare how many words are in English, German,
Czech, Polish dictionary. You collect words from every language and record
them in dict or Set, as you wish.
Call the set of all English words E; G, C, and P similarly.
Once you have those Set's or dict's for those 4 languages, you ask
for common words
This Python expression then gives the set of words common to all 4:
E & G & C & P
and for those unique to Polish.
P -  E - G  - C
One attack is to define a hash to get sizes smaller.  Suppose you find
your word sets are 10**8 large, and you find you need to deal with sets
of size 10**6.  Then if you define a hash that produces an integer below
100, you'll find:
E & G & C & P == Union(E_k & G_k & C_k & P_k) for k in range(100)
P -  E - G  - C == Union(P_k - E_k - G_k - C_k) for k in range(100)
where X_k = [v for v in X if somehash(v) == k]
This means, you can separate the calculation into much smaller buckets,
and combine the buckets back at the end (without any comparison on the
combining operations).
For calculating these two expressions, you can dump values out to a
file per hash per language, sort and dupe-elim the contents of the
various files (maybe dupe-elim on the way out).  Then hash-by-hash,
you can calculate parts of the results by combining iterators like
inboth and leftonly below on iterators producing words from files.
def dupelim(iterable):
source = iter(iterable)
former = source.next()  # Raises StopIteration if empty
yield former
for element in source:
if element != former:
yield element
former = element
def inboth(left, right):
'''Take ordered iterables and yield matching cases.'''
lsource = iter(left)
lhead = lsource.next()  # May StopIteration (and we're done)
rsource = iter(right)
for rhead in rsource:
while lhead < rhead:
lhead = lsource.next()
if lhead == rhead:
yield lhead
lhead = lsource.next()
def leftonly(left, right):
'''Take ordered iterables and yield matching cases.'''
lsource = iter(left)
rsource = iter(right)
try:
rhead = rsource.next()
except StopIteration: # empty right side.
for lhead in lsource:
yield lhead
else:
for lhead in lsource:
try:
while rhead < lhead:
rhead = rsource.next()
if lhead < rhead:
yield lhead
except StopIteration: # Ran out of right side.
yield lhead
for lhead in lsource:
yield lhead
break
Real-world dictionaries shouldn't be a problem.  I recommend you store
each as a plain text file, one word per line.  Then, e.g., to convert
that into a set of words, do
f = open('EnglishWords.txt')
set_of_English_words = set(f)
f.close()
You'll have a trailing newline character in each word, but that
doesn't really matter.
Note that if you sort the word-per-line text files first, the Unix
`comm` utility can be used to perform intersection and difference on a
pair at a time with virtually no memory burden (and regardless of file
size).
In fact, once you've sorted these files, you can use the iterators
above to combine those sorted files.
For example:
Ponly = open('polishonly.txt', 'w')
every = open('every.txt', 'w')
for hashcode in range(100):
English = open('english_%s.txt' % hashcode)
German = open('german_%s.txt' % hashcode)
Czech = open('czech_%s.txt' % hashcode)
Polish = open('polish_%s.txt' % hashcode)
for unmatched in leftonly(leftonly(leftonly(dupelim(Polish),
dupelim(English)), dupelim(German)), dupelim(Czech)):
Ponly.write(unmatched)
English.seek(0)
German.seek(0)
Czech.seek(0)
Polish.seek(0)
for matched in inboth(inboth(dupelim(Polish), dupelim(English)),
  inboth(dupelim(German), dupelim(Czech))):
every.write(matched)
English.close()
German.close()
Czech.close()
Polish.close()
Ponly.close()
every.close()
--Scott David Daniels
[EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list


[OT] Re: Old Paranoia Game in Python

2005-01-10 Thread Steven Bethard
Terry Reedy wrote:
Never saw this specific game.  Some suggestions on additional factoring out 
of duplicate code.


def next_page(this_page):
 print "\n"
 if this_page == 0:
page = 0
return

The following elif switch can be replaced by calling a selection from a 
list of functions:

[None, page1, page2, ... page57][this_page]()

 elif this_page == 1:
page1()
return
 elif this_page == 2:
page2()
return
...
 elif this_page == 57:
page57()
return

Also, a chose3 function to complement your chose (chose2) function would 
avoid repeating the choose-from-3 code used on multiple pages.

Terry J. Reedy
This is what I love about this list.  Where else is someone going to 
look at 1200+ lines of code and give you useful advice?!  ;)  Very cool. 
 (Thanks Terry!)

While we're making suggestions, you might consider writing dice_roll as:
def dice_roll(num, sides):
return sum(random.randrange(sides) for _ in range(num)) + num
for Python 2.4 or
def dice_roll(num, sides):
return sum([random.randrange(sides) for _ in range(num)]) + num
for Python 2.3.
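The `+ num` is what shifts each 0-based `randrange` roll into the usual
1..sides per-die range; restating the 2.4 version as a quick range check
(a sketch, not part of the original game code):

```python
import random

def dice_roll(num, sides):
    # randrange(sides) yields 0..sides-1 per die; adding `num` shifts
    # the total into the conventional num..num*sides range.
    return sum(random.randrange(sides) for _ in range(num)) + num
```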
You also might consider writing all the pageX methods in a class, so all 
your globals can be accessed through self, e.g.:

class Game(object):
def __init__(self):
...
self.page = 1
self.computer_request = 0
...
def page2(self):
print ...
if self.computer_request == 1:
new_clone(45)
else:
new_clone(32)
...
You could then have your class implement the iteration protocol:
def __iter__(self):
try:
while True:
self.old_page = self.page
yield getattr(self, "page%i" % self.page)
except AttributeError:
raise StopIteration
And then your main could look something like:
def main(args):
    ...
    instructions()
    more()
    character()
    more()
    for page_func in Game():
        page_func()
        print "-"*79
Anyway, cool stuff.  Thanks for sharing!
Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: Reading Fortran binary files

2005-01-10 Thread Michael Fuhr
"drife" <[EMAIL PROTECTED]> writes:

> I need to read a Fortran binary data file in Python.
> The Fortran data file is organized thusly:
>
> nx,ny,nz,ilog_scale   # Record 1 (Header)
> ihour,data3D_array# Record 2
>
> Where every value above is a 2 byte Int.

Have you looked at the struct module?

http://www.python.org/doc/2.4/lib/module-struct.html
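For the record layout drife describes, a struct-based reader might look
roughly like this. This is only a sketch: the '<' (little-endian) prefix
is an assumption, and unformatted Fortran files usually wrap each record
in 4-byte length markers that this sketch assumes have already been
stripped; check what your compiler actually writes.

```python
import struct

def parse_header(record):
    # Record 1: nx, ny, nz, ilog_scale as four 2-byte signed ints.
    return struct.unpack('<4h', record)

def parse_record2(record, nx, ny, nz):
    # Record 2: ihour followed by the nx*ny*nz data values.
    count = 1 + nx * ny * nz
    values = struct.unpack('<%dh' % count, record)
    return values[0], values[1:]
```

struct.unpack returns plain Python ints, so the 2-byte-to-"regular"-Int
conversion comes for free.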

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: exceptions and items in a list

2005-01-10 Thread rbt
Andrey Tatarinov wrote:
rbt wrote:
If I have a Python list that I'm iterating over and one of the objects 
in the list raises an exception and I have code like this:

try:
do something to object in list
except Exception:
pass
Does the code just skip the bad object and continue with the other 
objects in the list, or does it stop?

# skip bad object and continue with others
for object in objects:
try:
#do something to object
except Exception:
pass
# stop at first bad object
try:
for object in objects:
#do something to object
except Exception:
pass
Thanks Andrey. That's a great example of how to do it.
--
http://mail.python.org/mailman/listinfo/python-list


Python and Tsunami Warning Systems

2005-01-10 Thread Tim Churches
Boc Cringely's column on the need for a grassroots (seaweed roots?) 
tsunami warning system for the Indian Ocean (and elsewhere) makes some 
very good points - see 
http://www.pbs.org/cringely/pulpit/pulpit20041230.html

In his following column ( 
http://www.pbs.org/cringely/pulpit/pulpit20050107.html ), he notes:

"Now to last week's column about tsunamis and tsunami warning systems. 
While my idea may have set many people to work, only a couple of them 
have been telling me about it. Developer Charles R. Martin and Canadian 
earth scientist Darren Griffith met through this column, and are in the 
initial stages of building an Open Tsunami Alerting System (OTAS). 
Although work has just started, they've established a few basic 
principles: OTAS will be very lightweight; will use openly available 
geophysical or seismic data sources; will be highly distributed and 
decentralized; and will be built to run on very low-powered commodity 
hardware. They currently foresee using Python and Java, but aren't 
religious about it. Anyone who wants to help out is welcome and their 
OTAS blog can be found in this week's links."

See http://otasblog.blogspot.com/
It seems to me that this would be a valuable and feasible type of 
project for the Python community to get involved in (or perhaps take the 
lead on). Something the PSF might even consider resourcing to some degree?

Tim C
--
http://mail.python.org/mailman/listinfo/python-list


Re: Writing huge Sets() to disk

2005-01-10 Thread Martin MOKREJŠ
Tim Peters wrote:
[Martin MOKREJŠ]
...
I gave up the theoretical approach. Practically, I might need up
to store maybe those 1E15 keys.

We should work on our multiplication skills here.  You don't
have enough disk space to store 1E15 keys.  If your keys were just one
byte each, you would need to have 4 thousand disks of 250GB each to
store 1E15 keys.  How much disk space do you actually have?  I'm
betting you have no more than one 250GB disk.
I can get about 700GB on RAID5, but this doesn't help here, of course. :(
I definitely appreciate your comments, I somehow forgot to make sure
I can store it. I was mainly distracted by the fact I might anyway
hit almost the same size, if there's too few words used in reality.
Therefore when looking for those words not found in such a dictionary,
I'd be close to the theoretical, maximal size, say on the order of 1E15
or 1E14.

...
[Istvan Albert]
On my system storing 1 million words of length 15
as keys of a python dictionary is around 75MB.

Fine, that's what I wanted to hear. How do you improve the algorithm?
Do you delay indexing to the very latest moment or do you let your
computer index 999 999 times just for fun?

It remains wholly unclear to me what "the algorithm" you want might
My intent is do some analysis in protein science. But it can be really
thought of as analysing those 4 different languages.
be.  As I mentioned before, if you store keys in sorted text files,
you can do intersection and difference very efficiently just by using
the Unix `comm` utility.
Now I got your point. I understand the comm(1) is written in C, but it still
has to scan file1 once and file2 n-times, where n is a number of lines
in file1, right? Without any index ... I'll consider it, actually will test,
thanks!
I was really hoping to get an answer on how to alter the indexing for
dictionaries in Python.
You convinced me not to even try to construct to theoretical dictionary,
as it will take ages just to create. Even if I'd manage, I couldn't
save it (the theoretical and possibly not even the dict(theoret) - dict(real)
result).
Still, before I give the whole project, once more:
I'll parse some text files, isolates separate words and add them to
either Set(), list, dict, flatfile line by line.
Depending on the above, I'll compare them and look for those unique
to some "language". I need to keep track of frequencies used
in every such language, so the dict approach would be the best.
The number stored as a value would be a float ^H^H^H^H^H^H Decimal()
type - very small number.
Once more, I expect to have between 1E4 or 1E5 and 1E8(?) words
stored in 20 dictionaries (remember, words of sizes in range 1-20).
Each of those 20 dictionaries should be, I believe, indexed just once.
The indexing method should know all entries in a given file are of same
size, i.e. 5 chars, 15 chars, 20 chars etc.
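The parse-split-count pipeline with per-length buckets described above
can be sketched roughly as follows; the function name and the sample
input are invented for illustration:

```python
def word_frequencies(lines, max_len=20):
    # One dict per word length 1..max_len, mapping word -> count,
    # mirroring the 20-dictionaries-by-word-size layout described above.
    buckets = dict((n, {}) for n in range(1, max_len + 1))
    for line in lines:
        for word in line.split():
            n = len(word)
            if 1 <= n <= max_len:
                d = buckets[n]
                d[word] = d.get(word, 0) + 1
    return buckets

# invented sample input
freqs = word_frequencies(['ala ma kota', 'kot ma ale'])
```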
I already did implement the logic to walk through those 20 different
dictionaries from language a and from language b and find out those
unique to a or common to both of them. Why I went to ask on this list
was to make sure I took the right approach. Sets seemed to be a better solution
for the simple comparison (common, unique). To keep track of those
very small frequencies, I anyway have to have dictionaries. I say
that again, how can I improve speed of dictionary indexing?
It doesn't matter here if I have 10E4 keys in dictionary or
10E8 keys in a dictionary.
Or I wanted to hear: go for Sets(); having a set with 1E8 keys
might take 1/10 of the size you need for a dictionary ... but
you cannot dump them efficiently to disk. Using shelve will
cost you maybe 2x the size of the dictionary approach and will
also be slower than writing the dictionary directly.
Something along those lines. I really appreciate your input!
Martin

--
http://mail.python.org/mailman/listinfo/python-list


RE: Writing huge Sets() to disk

2005-01-10 Thread Batista, Facundo
[Istvan Albert]


#- I think that you need to first understand how dictionaries work.
#- The time needed to insert a key is independent of
#- the number of values in the dictionary.


Are you sure?


I think that is true while the hashes don't collide. If you have collisions, time starts to depend on element quantity. But I'm not sure.

Tim sure can enlighten us.


Asking-for-god-word--ly yours,


.    Facundo


Bitácora De Vuelo: http://www.taniquetil.com.ar/plog
PyAr - Python Argentina: http://pyar.decode.com.ar/




-- 
http://mail.python.org/mailman/listinfo/python-list

Re: a new Perl/Python a day

2005-01-10 Thread Stephen Thorne
On Mon, 10 Jan 2005 18:38:14 GMT, gabriele renzi
<[EMAIL PROTECTED]> wrote:
> > You're joking, right?
> 
> please consider that the message you all are asking are crossposted to
> comp.lang.perl.misc and comp.lang.python, avoid the crossgroup flames :)

Yuck.

I'm on the python-list@python.org and I was extremely confused until
you pointed out the crossposting.

Maybe that mail2news gateway should be upgraded to point out
crossposted usenet posts...

Stephen.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: exceptions and items in a list

2005-01-10 Thread vincent wehren
rbt wrote:
If I have a Python list that I'm iterating over and one of the objects 
in the list raises an exception and I have code like this:

try:
do something to object in list
except Exception:
pass
Does the code just skip the bad object and continue with the other 
objects in the list, or does it stop?

Thanks
Fire up a shell and try:
>>> seq = ["1", "2", "a", "4", "5", 6.0]
>>> for elem in seq:
...     try:
...         print int(elem)
...     except ValueError:
...         pass
and see what happens...
--
Vincent Wehren
--
http://mail.python.org/mailman/listinfo/python-list


Re: Old Paranoia Game in Python

2005-01-10 Thread Terry Reedy
Never saw this specific game.  Some suggestions on additional factoring out 
of duplicate code.

> def next_page(this_page):
>   print "\n"
>   if this_page == 0:
>  page = 0
>  return

The following elif switch can be replaced by calling a selection from a 
list of functions:

[None, page1, page2, ... page57][this_page]()

>   elif this_page == 1:
>  page1()
>  return
>   elif this_page == 2:
>  page2()
>  return
...
>   elif this_page == 57:
>  page57()
>  return

Also, a chose3 function to complement your chose (chose2) function would 
avoid repeating the choose-from-3 code used on multiple pages.

Terry J. Reedy




-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Writing huge Sets() to disk

2005-01-10 Thread John Lenton
you probably want to look into building set-like objects on top of
tries, given the homogeneity of your language. You should see
improvements both in size and speed.
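A minimal sketch of that idea (a set of strings on top of a trie of
nested dicts; illustrative only, not a tuned implementation):

```python
class TrieSet:
    """A set of strings stored in a trie of nested dicts.
    Shared prefixes are stored once, which is where the size win
    comes from for homogeneous word lists."""

    def __init__(self, words=()):
        self.root = {}
        for word in words:
            self.add(word)

    def add(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node[None] = True  # marks the end of a complete word

    def __contains__(self, word):
        node = self.root
        for ch in word:
            if ch not in node:
                return False
            node = node[ch]
        return None in node
```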

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: exceptions and items in a list

2005-01-10 Thread Andrey Tatarinov
rbt wrote:
If I have a Python list that I'm iterating over and one of the objects 
in the list raises an exception and I have code like this:

try:
do something to object in list
except Exception:
pass
Does the code just skip the bad object and continue with the other 
objects in the list, or does it stop?
# skip bad object and continue with others
for object in objects:
try:
#do something to object
except Exception:
pass
# stop at first bad object
try:
for object in objects:
#do something to object
except Exception:
pass
--
http://mail.python.org/mailman/listinfo/python-list


Re: Speed revisited

2005-01-10 Thread Andrea Griffini
On Mon, 10 Jan 2005 17:52:42 +0100, Bulba! <[EMAIL PROTECTED]> wrote:

>I don't see why should deleting element from a list be O(n), while
>saying L[0]='spam' when L[0] previously were, say, 's', not have the
>O(n) cost, if a list in Python is just an array containing the 
>objects itself?
>
>Why should JUST deletion have an O(n) cost?

Because after deletion L[1] moved to L[0], L[2] moved to L[1],
L[3] moved to L[2] and so on. To delete the first element you
have to move n-1 pointers and this is where O(n) comes from.
When you reassign any element there is no need to move the
others around, so that's why you have O(1) complexity.

With a data structure slightly more complex than an array
you can have random access in O(1), deletion of elements
O(1) at *both ends* and insertion in amortized O(1) at
*both ends*. This data structure is called a double-ended
queue (nickname "deque") and is available in Python.
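A quick sketch of the difference in practice (collections.deque is
available from Python 2.4 on):

```python
from collections import deque

# Deleting the first element of a list shifts every remaining pointer:
fifo_list = [1, 2, 3, 4]
first = fifo_list.pop(0)    # O(n)

# A deque pops and appends at *both* ends in O(1):
fifo = deque([1, 2, 3, 4])
first_d = fifo.popleft()    # O(1)
fifo.append(5)              # O(1) at the right end too
```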

The decision was that for the basic list object the overhead
added by deques for element access (it's still O(1), but a
bit more complex than just bare pointer arithmetic) and, I
guess, the hassle of changing a lot of working code and
breaking compatibility with extensions manipulating directly
lists (no idea if such a thing exists) was not worth the gain.

The gain would have been that who doesn't know what O(n)
means and that uses lists for long FIFOs would get fast
programs anyway without understanding why. With current
solution they just have to use deques instead of lists.

After thinking to it for a while I agree that this is a
reasonable choice. The gain is anyway IMO very little
because if a programmer doesn't understand what O(n) is
then the probability that any reasonably complex program
written is going to be fast is anyway zero... time would
just be wasted somewhere else for no reason.

Andrea
-- 
http://mail.python.org/mailman/listinfo/python-list


exceptions and items in a list

2005-01-10 Thread rbt
If I have a Python list that I'm iterating over and one of the objects 
in the list raises an exception and I have code like this:

try:
do something to object in list
except Exception:
pass
Does the code just skip the bad object and continue with the other 
objects in the list, or does it stop?

Thanks
--
http://mail.python.org/mailman/listinfo/python-list


Re: Writing huge Sets() to disk

2005-01-10 Thread Tim Peters
[Martin MOKREJŠ]
> ...
> 
> I gave up the theoretical approach. Practically, I might need up
> to store maybe those 1E15 keys.

We should work on our multiplication skills here.  You don't
have enough disk space to store 1E15 keys.  If your keys were just one
byte each, you would need to have 4 thousand disks of 250GB each to
store 1E15 keys.  How much disk space do you actually have?  I'm
betting you have no more than one 250GB disk.

...

[Istvan Albert]
>> On my system storing 1 million words of length 15
>> as keys of a python dictionary is around 75MB.

> Fine, that's what I wanted to hear. How do you improve the algorithm?
> Do you delay indexing to the very latest moment or do you let your
> computer index 999 999 times just for fun?

It remains wholly unclear to me what "the algorithm" you want might
be.  As I mentioned before, if you store keys in sorted text files,
you can do intersection and difference very efficiently just by using
the Unix `comm` utility.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Writing huge Sets() to disk

2005-01-10 Thread Istvan Albert
Martin MOKREJŠ wrote:
Istvan Albert wrote:

So you say 1 million words is better to store in dictionary than
in a set and use your own function to get out those unique or common
words?
I have said nothing even remotely like that.
Fine, that's what I wanted to hear. How do you improve the algorithm?
Do you delay indexing to the very latest moment or do you let your
computer index 999 999 times just for fun?
I think that you need to first understand how dictionaries work.
The time needed to insert a key is independent of
the number of values in the dictionary.
Istvan.
--
http://mail.python.org/mailman/listinfo/python-list


Re: tuples vs lists

2005-01-10 Thread Bruno Desthuilliers
Antoon Pardon a écrit :
Op 2005-01-08, Bruno Desthuilliers schreef <[EMAIL PROTECTED]>:
worzel a écrit :
I get what the difference is between a tuple and a list, but why would I 
ever care about the tuple's immuutability?
Because, from a purely pratical POV, only an immutable object can be 
used as kay in a dict.
 s/kay/key/ 
This is not true.
Chapter and verse, please ?

So you can use tuples for 'composed key'. 
lists can be so used too. Just provide a hash.
Please show us an example, and let's see how useful and handy this is 
from a "purely practical POV" ?-)

Bruno
--
http://mail.python.org/mailman/listinfo/python-list


Re: Writing huge Sets() to disk

2005-01-10 Thread Martin MOKREJŠ
Istvan Albert wrote:
Martin MOKREJŠ wrote:

But nevertheless, imagine 1E6 words of size 15. That's maybe 1.5GB of raw
data. Will sets be appropriate you think?

You started out with 20E20 then cut back to 1E15 keys
now it is down to one million but you claim that these
will take 1.5 GB.
I gave up the theoretical approach. Practically, I might need up
to store maybe those 1E15 keys.
So you say 1 million words is better to store in dictionary than
in a set and use your own function to get out those unique or common
words?
On my system storing 1 million words of length 15
as keys of a python dictionary is around 75MB.
Fine, that's what I wanted to hear. How do you improve the algorithm?
Do you delay indexing to the very latest moment or do you let your
computer index 999 999 times just for fun?
I.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Dr. Dobb's Python-URL! - weekly Python news and links (Jan 9)

2005-01-10 Thread Tim Churches
Josiah Carlson wrote:
QOTW:  Jim Fulton: "[What's] duck typing?"
Andrew Koenig: "That's the Australian pronunciation of 'duct taping'."
I must protest.
1) No (true-blue) Australian has ever uttered the words 'duct taping', 
because Aussies (and Pommies) know that the universe is held together 
with gaffer tape, not duct tape. See http://www.exposure.co.uk/eejit/gaffer/
b) If an Australian were ever induced to utter the words 'duct typing', 
the typical Strine (see 
http://www.geocities.com/jendi2_2000/strine1.html ) pronunciation would 
be more like 'duh toypn' - the underlying principle being one of 
elimination of all unnecessary syllables, vowels and consonants, thus 
eliminating the need to move the lips (which reduces effort and stops 
flies getting in).

Tim C
Sydney, Australia
--
http://mail.python.org/mailman/listinfo/python-list


Re: Writing huge Sets() to disk

2005-01-10 Thread Istvan Albert
Martin MOKREJŠ wrote:

But nevertheless, imagine 1E6 words of size 15. That's maybe 1.5GB of raw
data. Will sets be appropriate you think?
You started out with 20E20 then cut back to 1E15 keys
now it is down to one million but you claim that these
will take 1.5 GB.
On my system storing 1 million words of length 15
as keys of a python dictionary is around 75MB.
I.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Importing Problem on Windows

2005-01-10 Thread Grig Gheorghiu
I normall set PYTHONPATH to the parent directory of my module directory
tree. If I have my module files in C:\home\mymodules and below, then I
set PYTHONPATH to C:\home. This way, I can do "import mymodules" in my
code.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Importing Problem on Windows

2005-01-10 Thread brolewis
Sorry. 2.4 in both locations

-- 
http://mail.python.org/mailman/listinfo/python-list


Passing a reference to a variable to a function?

2005-01-10 Thread Torsten Mohr
Hello,

I would like to write a function to which I can pass a reference
to a string, and which then makes some changes to that string.

In Perl I would pass \$string and then access $$string inside the
function.

Is something like that possible in Python too?

I have read about "global"; that seems to come closest to what I am
looking for, but it does not quite fit the problem either, because I
want to call a subfunction from within a subfunction, i.e. deeply
nested. And as I understand it, "global" refers to the fully global
namespace, not to that of the calling function.

Might something like this work with weakref?

Thanks for any tips,
Torsten.


-- 
http://mail.python.org/mailman/listinfo/python-list


Reading Fortran binary files

2005-01-10 Thread drife
Hello,

I need to read a Fortran binary data file in Python.
The Fortran data file is organized thusly:

nx,ny,nz,ilog_scale   # Record 1 (Header)
ihour,data3D_array# Record 2

Where every value above is a 2 byte Int. Further, the
first record is a header containing the dimensions of
the data that follows, as well as the scaling factor
of the data (log base 10). The second record contains
the hour, followed by the 3D array of data, which is
dimensioned by nx,ny,nz.

I also need to convert all the 2 byte Int values to
'regular' Int.

I realize that similar questions have previously been
posted to the group, but the most recent inquiries date
back to 2000 and 2001. I thought there may be newer and
easier ways to do this.


Thanks in advance for your help.


Daran Rife

[EMAIL PROTECTED]

-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Uploading files

2005-01-10 Thread Robert Brewer
Peter Mott wrote:
> If you upload a file using the cgi module is there any
> way to discover the file name that the user submitted
> as well as the file data? I've googled till I squint
> but I can't find anything.

Working example (for ASP, which uses BinaryRead to get the request
stream):

contentLength = int(env['CONTENT_LENGTH'])
content, size = request.BinaryRead(contentLength)

content = StringIO.StringIO(content)
form = cgi.FieldStorage(content, None, "", env, True)
content.close()

for key in form:
value = form[key]

try:
filename = value.filename
except AttributeError:
filename = None

if filename:
# Store filename, filedata as a tuple.
self.requestParams[key] = (filename, value.value)
else:
for subValue in form.getlist(key):
self.requestParams[key] = subValue


Robert Brewer
MIS
Amor Ministries
[EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list


Re: Importing Problem on Windows

2005-01-10 Thread Kartic
It is quite possible that in linux, you launched the python interpreter
shell from the same directory you stored your parser.py and parse.py
files.

On windows, you probably saved the parser*.py files to some place like
"my documents" and launched the python interpreter or IDLE.

So, you could probably try this:
1. Launch command shell.
2. CD to the directory where you saved the parser*.py files.
3.  Start python.exe from the command prompt (not from the Program
Files shortcut)
4. Try importing.

Easier...copy the parser*.py files into the Lib folder of your python
installation in your windows machine.

Thanks,
--Kartic

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Writing huge Sets() to disk

2005-01-10 Thread Martin MOKREJŠ
Tim Peters wrote:
[Martin MOKREJŠ]
just imagine, you want to compare how many words are in English, German,
Czech, Polish dictionary. You collect words from every language and record
them in dict or Set, as you wish.

Call the set of all English words E; G, C, and P similarly.

Once you have those Set's or dict's for those 4 languages, you ask
for common words

This Python expression then gives the set of words common to all 4:
E & G & C & P

and for those unique to Polish.

P -  E - G  - C
is a reasonably efficient way to compute that.
Nice, is it equivalent to common / unique methods of Sets?

I have no estimates
of real-world numbers, but we might be in range of 1E6 or 1E8?
I believe in any case, huge.

No matter how large, it's utterly tiny compared to the number of
character strings that *aren't* words in any of these languages. 
English has a lot of words, but nobody estimates it at over 2 million
(including scientific jargon, like names for chemical compounds):

http://www.worldwidewords.org/articles/howmany.htm
As I've said, what I actually analyze is something other than languages.
However, it can be described with the example of words in different languages.
But nevertheless, imagine 1E6 words of size 15. That's maybe 1.5GB of raw
data. Will sets be appropriate you think?
My concern is actually purely scientific, not really related to analysis
of these 4 languages, but I believe it describes my intent quite well.
I wanted to be able to get a list of words NOT found in say Polish,
and therefore wanted to have a list of all, theoretically existing words.
In principle, I can drop this idea of having ideal, theoretical lexicon.
But have to store those real-world dictionaries anyway to hard drive.

Real-world dictionaries shouldn't be a problem.  I recommend you store
each as a plain text file, one word per line.  Then, e.g., to convert
that into a set of words, do
f = open('EnglishWords.txt')
set_of_English_words = set(f)
I'm aware I can't keep set_of_English_words in memory.
f.close()
M.
--
http://mail.python.org/mailman/listinfo/python-list


Uploading files

2005-01-10 Thread Peter Mott
If you upload a file using the cgi module is there anyway to discover the 
file name that the user submitted as well as the file data? I've googled 
till I squint but I can't find anything.

Peter 


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Writing huge Sets() to disk

2005-01-10 Thread Tim Peters
[Martin MOKREJŠ]
>  just imagine, you want to compare how many words are in English, German,
> Czech, Polish dictionary. You collect words from every language and record
> them in dict or Set, as you wish.

Call the set of all English words E; G, C, and P similarly.

>  Once you have those Set's or dict's for those 4 languages, you ask
> for common words

This Python expression then gives the set of words common to all 4:

E & G & C & P

> and for those unique to Polish.

P -  E - G  - C

is a reasonably efficient way to compute that.
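With toy data (the words are invented for illustration), the two
expressions look like:

```python
E = set(['the', 'dog', 'is', 'here'])    # English
G = set(['der', 'dog', 'is', 'here'])    # German
C = set(['pes', 'dog', 'is', 'here'])    # Czech
P = set(['pies', 'dog', 'is', 'here'])   # Polish

common = E & G & C & P        # words present in all four languages
polish_only = P - E - G - C   # words found only in Polish
```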

> I have no estimates
> of real-world numbers, but we might be in range of 1E6 or 1E8?
> I believe in any case, huge.

No matter how large, it's utterly tiny compared to the number of
character strings that *aren't* words in any of these languages. 
English has a lot of words, but nobody estimates it at over 2 million
(including scientific jargon, like names for chemical compounds):

http://www.worldwidewords.org/articles/howmany.htm

> My concern is actually purely scientific, not really related to analysis
> of these 4 languages, but I believe it describes my intent quite well.
>
>  I wanted to be able to get a list of words NOT found in say Polish,
> and therefore wanted to have a list of all, theoretically existing words.
> In principle, I can drop this idea of having ideal, theoretical lexicon.
> But have to store those real-world dictionaries anyway to hard drive.

Real-world dictionaries shouldn't be a problem.  I recommend you store
each as a plain text file, one word per line.  Then, e.g., to convert
that into a set of words, do

f = open('EnglishWords.txt')
set_of_English_words = set(f)
f.close()

You'll have a trailing newline character in each word, but that
doesn't really matter.
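And if the trailing newlines do bother you, stripping them while
building the set costs one generator expression (the word list here is
invented, standing in for what iterating over the file yields):

```python
# what iterating over a hypothetical words file yields:
lines = ['alpha\n', 'beta\n', 'alpha\n']

set_with_newlines = set(lines)                 # keys keep their '\n'
clean = set(line.strip() for line in lines)    # strip while building
```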

Note that if you sort the word-per-line text files first, the Unix
`comm` utility can be used to perform intersection and difference on a
pair at a time with virtually no memory burden (and regardless of file
size).
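The reason `comm` is so cheap is that it never rescans either file: it
does a single forward merge over the two sorted inputs. A Python sketch
of that merge (assuming sorted inputs with no None entries):

```python
def comm(left, right):
    """Single-pass merge of two sorted word lists, like Unix comm(1):
    each input is scanned exactly once, so time stays linear and
    memory stays tiny regardless of input size."""
    lonly, ronly, both = [], [], []
    li, ri = iter(left), iter(right)
    l, r = next(li, None), next(ri, None)
    while l is not None and r is not None:
        if l < r:
            lonly.append(l)
            l = next(li, None)
        elif r < l:
            ronly.append(r)
            r = next(ri, None)
        else:
            both.append(l)
            l, r = next(li, None), next(ri, None)
    while l is not None:        # drain whichever side remains
        lonly.append(l)
        l = next(li, None)
    while r is not None:
        ronly.append(r)
        r = next(ri, None)
    return lonly, ronly, both
```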
--
http://mail.python.org/mailman/listinfo/python-list


Re: Writing huge Sets() to disk

2005-01-10 Thread Martin MOKREJŠ
Adam DePrince wrote:
On Mon, 2005-01-10 at 11:11, Martin MOKREJŠ wrote:
Hi,
 I have sets.Set() objects having up to 20E20 items,
each is composed of up to 20 characters. Keeping
them in memory on a 1GB machine puts me quickly into swap.
I don't want to use dictionary approach, as I don't see a sense
to store None as a value. The items in a set are unique.

Lets be realistic.  Your house is on fire and you are remodeling the
basement.
Assuming you are on a 64 bit machine with full 64 bit addressing, your
absolute upper limit on the size of a set is 2^64, or
18446744073709551616 byte.  Your real upper limit is at least an order
of magnitude smaller.
You are asking us how to store 20E20, or 2,000,000,000,000,000,000,000, items
in a Set.  That is still an order of magnitude greater than the number
of *bits* you can address.  Your desktop might not be able to enumerate
all of these strings in your lifetime, much less index and store them.
We might as well be discussing the number of angels that can sit on the
head of a pin.  Any discussion of a list vs Set/dict is a small micro
optimization matter dwarfed by the fact that there don't exist machines
to hold this data.  The consideration of Set vs. dict is an even less
important matter of syntactic sugar.
To me, it sounds like you are taking an AI class and trying to deal with
a small search space by brute force.  First, stop banging your head
against the wall algorithmically.  Nobody lost their job for saying NP
!= P.  Then tell us what you are tring to do; perhaps there is a better
way, perhaps the problem is unsolvable and there is a heuristic that
will satisfy your needs. 
Hi Adam,
 just imagine, you want to compare how many words are in English, German,
Czech, Polish dictionary. You collect words from every language and record
them in dict or Set, as you wish.
 Once you have those Set's or dict's for those 4 languages, you ask
for common words and for those unique to Polish. I have no estimates
of real-world numbers, but we might be in range of 1E6 or 1E8?
I believe in any case, huge.
 My concern is actually purely scientific, not really related to analysis
of these 4 languages, but I believe it describes my intent quite well.
 I wanted to be able to get a list of words NOT found in say Polish,
and therefore wanted to have a list of all, theoretically existing words.
In principle, I can drop this idea of having ideal, theoretical lexicon.
But have to store those real-world dictionaries anyway to hard drive.
Martin
--
http://mail.python.org/mailman/listinfo/python-list


Re: Importing Problem on Windows

2005-01-10 Thread Grig Gheorghiu
What version of Python are you running on Linux vs. Windows?

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ftplib with unknown file names

2005-01-10 Thread rbt
Jeremy Jones wrote:
rbt wrote:
How can I use ftplib to retrieve files when I do not know their names? 
I can do this to get a listing of the directory's contents:

ftp_server.retrlines('LIST')
The output from this goes to the console and I can't figure out how to 
turn that into something I can use to actually get the files (like a 
list of file names). I read a bit about the callback function that can 
be passed to retrlines but I couldn't figure out how to use it.

Any help is appreciated.
Thanks!

The .nlst(argument) method will return a list of file names.  Here are 
the docs for the nlst command:

http://www.python.org/doc/current/lib/ftp-objects.html
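A minimal sketch of both approaches (the host name and login are hypothetical; nlst() and retrbinary() are the documented ftplib calls):

```python
from ftplib import FTP

def download_all(host, user="anonymous", passwd=""):
    """Fetch every file in the server's current directory by name."""
    ftp = FTP(host)                      # hypothetical host
    ftp.login(user, passwd)
    for name in ftp.nlst():              # file names, instead of raw LIST lines
        with open(name, "wb") as f:
            ftp.retrbinary("RETR " + name, f.write)
    ftp.quit()

def collect_listing(retrlines_func):
    """The callback style the question asked about: retrlines('LIST', cb)
    calls cb once per line, so list.append quietly collects the lines
    instead of letting them print to the console."""
    lines = []
    retrlines_func("LIST", lines.append)
    return lines
```

Passing `lines.append` as the callback is the usual idiom for turning retrlines' console output into a list you can parse.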
HTH,
Jeremy Jones
Very good Jeremy! Thank you for pointing that out. It works great.
--
http://mail.python.org/mailman/listinfo/python-list


Re: a new Perl/Python a day

2005-01-10 Thread gabriele renzi
Bob Smith wrote:
Scott Bryce wrote:
Xah Lee wrote:
frustrated constantly by its inanities and incompetences.)

I don't see what this has to do with Perl.

You're joking, right?
please consider that the messages you are all answering are crossposted to 
comp.lang.perl.misc and comp.lang.python; please avoid the crossgroup flames :)
--
http://mail.python.org/mailman/listinfo/python-list


Re: Writing huge Sets() to disk

2005-01-10 Thread Martin MOKREJŠ
Robert Brewer wrote:
Martin MOKREJŠ wrote:
Robert Brewer wrote:
Martin MOKREJŠ wrote:

I have sets.Set() objects having up to 20E20 items,
each is composed of up to 20 characters. Keeping
them in memory on a 1GB machine puts me quickly into swap.
I don't want to use the dictionary approach, as I see no sense
in storing None as a value. The items in a set are unique.
How can I write them efficiently to disk?

got shelve*?
I know about shelve, but doesn't it work like a dictionary?
Why should I use shelve for this? Then it's faster to use
bsddb directly and use string as a key and None as a value, I'd guess.

If you're using Python 2.3, then a sets.Set *is* implemented with
Yes, I do.
a dictionary, with None values. It simply has some extra methods to
make it behave like a set. In addition, the Set class already has
builtin methods for pickling and unpickling.
Really? Does Set() have such a method to pickle efficiently?
I haven't seen it in the docs.
So it's probably faster to use bsddb directly, but why not find out
by trying 2 lines of code that uses shelve? The time-consuming part
Because I don't know how I can affect indexing when using bsddb, for example.
For example, can I create an index only for, say, the first keysize-1 or
keysize-2 chars of a keystring?
How do I delay indexing so that the index isn't rebuilt after every addition
of a new key? I want to do it at the end of the loop adding new keys.
Even better, how do I turn off indexing completely (to save space)?
of your quest is writing the timed test suite that will indicate
which route will be fastest, which you'll have to do regardless.
Unfortunately, I'm first hoping to get an idea of what can be made
faster, and how, when using sets and dictionaries.
M.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Writing huge Sets() to disk

2005-01-10 Thread Adam DePrince
On Mon, 2005-01-10 at 11:11, Martin MOKREJŠ wrote:
> Hi,
>   I have sets.Set() objects having up to 20E20 items,
> each is composed of up to 20 characters. Keeping
> them in memory on a 1GB machine puts me quickly into swap.
> I don't want to use the dictionary approach, as I see no sense
> in storing None as a value. The items in a set are unique.

Let's be realistic.  Your house is on fire and you are remodeling the
basement.

Assuming you are on a 64 bit machine with full 64 bit addressing, your
absolute upper limit on the size of a set is 2^64, or
18446744073709551616 byte.  Your real upper limit is at least an order
of magnitude smaller.

You are asking us how to store 20E20, that is 2*10**21, items
in a Set.  That is still an order of magnitude greater than the number
of *bits* you can address.  Your desktop might not be able to enumerate
all of these strings in your lifetime, much less index and store them.
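The arithmetic above can be checked in a few lines (a back-of-envelope illustration, not part of the original post):

```python
address_space_bytes = 2 ** 64        # upper bound on 64-bit byte addressing
bits = 8 * address_space_bytes       # number of addressable *bits*
items = 20 * 10 ** 20                # the 20E20 sets.Set entries
bytes_per_item = 20                  # up to 20 characters each

data = items * bytes_per_item
print(data // address_space_bytes)   # raw data outweighs the address space ~2000x
print(items / bits)                  # ~13.6: more items than addressable bits
```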

We might as well be discussing the number of angels that can sit on the
head of a pin.  Any discussion of a list vs Set/dict is a small micro
optimization matter dwarfed by the fact that there don't exist machines
to hold this data.  The consideration of Set vs. dict is an even less
important matter of syntactic sugar.

To me, it sounds like you are taking an AI class and trying to deal with
a small search space by brute force.  First, stop banging your head
against the wall algorithmically.  Nobody lost their job for saying NP
!= P.  Then tell us what you are trying to do; perhaps there is a better
way, perhaps the problem is unsolvable and there is a heuristic that
will satisfy your needs. 



Adam DePrince 


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Securing a future for anonymous functions in Python

2005-01-10 Thread Anna
You cut something from that...

"""It's not, after all, the word "lambda" itself; I would still
have some issues with using, say "function", instead of "lambda", but
at least then I would immediately know what I was looking at..."""

I would have fewer ambiguities about using, say "func" rather than
lambda. Lambda always makes me feel like I'm switching to some *other*
language (specifically, Greek - I took a couple of semesters of Attic
Greek in college and quite enjoyed it.) But the fact that lambda
doesn't MEAN anything bothers me - I mean, DELTA at least has a
fairly commonly understood meaning, even at high-school level math.
But lambda? If it were "func" or "function" or even "def", I would be
happier. At least that way I'd have some idea what it was supposed to
be...

BTW - I am *quite* happy with the proposal for "where:" syntax - I
think it handles the problems I have with lambda quite handily. 

Anna

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Securing a future for anonymous functions in Python

2005-01-10 Thread Anna
Same here.

-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Writing huge Sets() to disk

2005-01-10 Thread Robert Brewer
Martin MOKREJŠ wrote:
> Robert Brewer wrote:
> > Martin MOKREJŠ wrote:
> > 
> >>  I have sets.Set() objects having up to 20E20 items,
> >>each is composed of up to 20 characters. Keeping
> >>them in memory on a 1GB machine puts me quickly into swap.
> >>I don't want to use the dictionary approach, as I see no sense
> >>in storing None as a value. The items in a set are unique.
> >>
> >>  How can I write them efficiently to disk?
> > 
> > 
> > got shelve*?
> 
> I know about shelve, but doesn't it work like a dictionary?
> Why should I use shelve for this? Then it's faster to use
> bsddb directly and use string as a key and None as a value, I'd guess.

If you're using Python 2.3, then a sets.Set *is* implemented with a dictionary, 
with None values. It simply has some extra methods to make it behave like a 
set. In addition, the Set class already has builtin methods for pickling and 
unpickling.

So it's probably faster to use bsddb directly, but why not find out by trying 2 
lines of code that uses shelve? The time-consuming part of your quest is 
writing the timed test suite that will indicate which route will be fastest, 
which you'll have to do regardless.
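That shelve experiment might look like this (the paths and word lists are hypothetical; the None values mirror the Set-as-dict layout described above):

```python
import shelve

def save_words(path, words):
    """Persist a word collection as a disk-backed dict with None values."""
    db = shelve.open(path)
    try:
        for w in words:
            db[w] = None       # shelve keys must be strings
    finally:
        db.close()

def words_unique_to(path_a, path_b):
    """Keys present in shelf A but absent from shelf B, probing B
    key by key rather than loading either collection into memory."""
    a = shelve.open(path_a, flag="r")
    b = shelve.open(path_b, flag="r")
    try:
        return set(k for k in a if k not in b)
    finally:
        a.close()
        b.close()
```

Timing this against a direct bsddb run on a realistic sample is exactly the test suite mentioned above.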


Robert Brewer
MIS
Amor Ministries
[EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python Operating System???

2005-01-10 Thread Roose

"Paul Rubin"  wrote in message
news:[EMAIL PROTECTED]
> "Roose" <[EMAIL PROTECTED]> writes:
> > Are you actually going to answer any of my questions?  Let's see
> > this "JavaScript task scheduler" you have written!
>
> I wrote it at a company and can't release it.  It ran inside a
> browser.  There was nothing terribly amazing about it.  Obviously the
> tasks it scheduled were not kernel tasks.  Do you know how Stackless
> Python (with continuations) used to work?  That had task switching,
> but those were not kernel tasks either.


Well, then of course you know I have to say:  An OS does not run inside a
browser.  There's a sentence I never thought I'd utter in my lifetime.

So that is an irrelevant example, since it obviously isn't a task scheduler
in the context of this thread.

Anyway, this argument is going nowhere... I will admit that people have
pointed out things here that are interesting like the attempts to embed
Python in a kernel.  But the point was that the OP was looking for an easier
way to write an OS, and thought that might be to do it in Python, and I
think I gave some good guidance away from that direction.  That is mostly
what I care about.

These other arguments are academic, and of course I am not trying to stop
anyone from trying anything.  When we see a real working example, then we
will all have a better idea of what the problems are, and how much of it can
realistically be implemented in an interpreted language.  Frankly I don't
think that will come for about 10 years if ever, but hey prove me wrong.


-- 
http://mail.python.org/mailman/listinfo/python-list

