Re: Converting _node* to a Code object?

2007-04-01 Thread Gabriel Genellina
En Sun, 01 Apr 2007 01:35:59 -0300, Brendon Costa <[EMAIL PROTECTED]>  
escribió:

> How do i convert a _node* object returned from:
> PyParser_SimpleParseStringFlagsFilename()
>
> into a code object i can use as a module to import with:
> PyImport_ExecCodeModule()

Using PyNode_Compile. But why don't you use Py_CompileXXX instead?
And look into import.c, maybe there is something handy.

-- 
Gabriel Genellina

-- 
http://mail.python.org/mailman/listinfo/python-list


Character set woes with binary data

2007-04-01 Thread Michael B. Trausch
I am attempting to piece together a Python client for Fotobilder, the
picture management server on Livejournal.

The protocol calls for binary data to be transmitted, and I cannot seem
to be able to do it, because I get this error:

>>> sb.UploadSinglePicture('/home/mbt/IMG_2618.JPG')
Traceback (most recent call last):
  File "", line 1, in 
  File "scrapbook.py", line 181, in UploadSinglePicture
{Request['UploadPic.Meta.Filename']: pic_mem})
  File "scrapbook.py", line 237, in ComposeMIME
return(self.EncodeMIME(fields, files))
  File "scrapbook.py", line 226, in EncodeMIME
body = eol.join(L)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 0:
ordinal not in range(128)
>>> 

When putting the MIME segments (listed line-by-line in a Python list)
together to transmit them.  The files are typically JPG or some other
binary format, and as best as I understand the protocol, the binary data
needs to be transmitted directly (this is evidenced by looking at the
tcp-stream of an existing client for uploading files).

It seems that Python thinks it knows better than I do, though.  I want
to send this binary data straightaway to the server.  :-)

This is a hex dump of what one file looks like being uploaded to the
server (partial; the file is 3.8 MB):

01CB  ff d8 ff e1 3b fc 45 78  69 66 00 00 49 49 2a 00  ....;.Exif..II*.
01DB  08 00 00 00 09 00 0f 01  02 00 10 00 00 00 7a 00  ..............z.
01EB  00 00 10 01 02 00 10 00  00 00 aa 00 00 00 12 01  ................
01FB  03 00 01 00 00 00 01 00  00 00 1a 01 05 00 01 00  ................
020B  00 00 da 00 00 00 1b 01  05 00 01 00 00 00 e2 00  ................
021B  00 00 28 01 03 00 01 00  00 00 02 00 00 00 31 01  ..(...........1.
022B  02 00 1e 00 00 00 ea 00  00 00 13 02 03 00 01 00  ................
023B  00 00 02 00 00 00 69 87  04 00 01 00 00 00 54 01  ......i.......T.
024B  00 00 ac 12 00 00 48 65  77 6c 65 74 74 2d 50 61  ......Hewlett-Pa
025B  63 6b 61 72 64 00 00 00  00 00 00 00 00 00 00 00  ckard...........
026B  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
027B  00 00 00 00 00 00 50 68  6f 74 6f 73 6d 61 72 74  ......Photosmart
028B  20 4d 35 32 35 00 00 00  00 00 00 00 00 00 00 00   M525...........
029B  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ................
02AB  00 00 00 00 00 00 e6 00  00 00 01 00 00 00 e6 00  ................
02BB  00 00 01 00 00 00 56 65  72 73 69 6f 6e 20 31 2e  ......Version 1.
02CB  34 31 30 30 2c 53 4e 3a  43 4e 36 34 31 44 33 31  4100,SN:CN641D31
02DB  4a 35 53 00 00 00 00 00  00 00 00 00 00 00 00 00  J5S.............
02EB  00 00 00 00 00 00 ff ff  ff ff ff ff ff ff ff ff  ................
02FB  ff ff ff ff ff ff ff ff  ff ff ff ff ff ff ff ff  ................
030B  ff ff ff ff ff ff ff ff  ff ff ff ff ff ff ff ff  ................
031B  ff ff ff ff ff ff ff ff  ff ff ff ff ff ff ff ff  ................
032B  27 00 9a 82 05 00 01 00  00 00 96 08 00 00 9d 82  '...............

Is there any way to tell Python to ignore the situation and treat the
entire thing as simply a stream of bytes?  I cannot seem to find one,
though I have found a great many posts on this mailing list regarding
issues in the past.  It doesn't look like translating the file to base64
is an option for me.

— Mike

--
Michael B. Trausch
[EMAIL PROTECTED]
Phone: (404) 592-5746
  Jabber IM:
[EMAIL PROTECTED]
  [EMAIL PROTECTED]
Demand Freedom!  Use open and free protocols, standards, and software!


-- 
http://mail.python.org/mailman/listinfo/python-list

tag replacement in toxml()

2007-04-01 Thread Manuel Ospina
Hi all,

I am new on the list and I already have a question :-(.

I have something like this:

import xml.dom.minidom
from xml.dom.minidom import getDOMImplementation
impl = getDOMImplementation()
myDoc = impl.createDocument(None, "example", None)
myRoot = myDoc.documentElement
myNode1 = myDoc.createElement("node")
myNode2 = myDoc.createElement("nodeTwo")
myText = myDoc.createTextNode("Here is the <b>problem")
myNode2.appendChild(myText)
myNode1.appendChild(myNode2)
myRoot.appendChild(myNode1)
print myDoc.toxml()

The result is:
'<?xml version="1.0" ?>\n<example><node><nodeTwo>Here is the
&lt;b&gt;problem</nodeTwo></node></example>'


My question is how I can avoid that toxml() replaces the tags?

Regards,
Manuel





-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Pygame Q (linux) beginner

En Sat, 31 Mar 2007 23:37:16 -0300, enquiring mind <"enquiring  
mind"@braindead.com> escribió:

> Running 2.4.1 Python (learning)
> Running SUSE Linux 10
>
> Am learning from a new books that mostly deals with windows python and
> Pygames called "Game Programming" by Randy Harris (2007)  His books
> references Python 2.4.2 and Pygame 1.7.1 with these comments:
>
> "If you are using a Linux machine, you probably won't have the simple
> installer that came with the windows version.  Follow the instructions
> at http://pygame.org/install.  You may have to run a couple of scripts
> to make everything work, but just follow the directions and you will be
> fine."
>
> Could anybody suggest or make a helpful comment in view of what
> information I have supplied.  At Chapter 5 is where the Pygame module is
> introduced so I have a little time before I have to figure out what I
> have to download and install.

First: read that page.
I don't use SUSE myself, but the first hit on Google for "pygame SUSE"  
goes into the Novell site, and  SUSE 10.1 appears to include pygame  
1.7.1release14 (or at least, you should be able to download and install  
the RPM from Novell)

-- 
Gabriel Genellina

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Character set woes with binary data

En Sun, 01 Apr 2007 05:21:25 -0300, Michael B. Trausch <[EMAIL PROTECTED]>  
escribió:

> I am attempting to piece together a Python client for Fotobilder, the
> picture management server on Livejournal.
>
> The protocol calls for binary data to be transmitted, and I cannot seem
> to be able to do it, because I get this error:
>
> >>> sb.UploadSinglePicture('/home/mbt/IMG_2618.JPG')
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "scrapbook.py", line 181, in UploadSinglePicture
> {Request['UploadPic.Meta.Filename']: pic_mem})
>   File "scrapbook.py", line 237, in ComposeMIME
> return(self.EncodeMIME(fields, files))
>   File "scrapbook.py", line 226, in EncodeMIME
> body = eol.join(L)
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 0:
> ordinal not in range(128)


What's scrapbook.py? Where do you find it?

> When putting the MIME segments (listed line-by-line in a Python list)
> together to transmit them.  The files are typically JPG or some other
> binary format, and as best as I understand the protocol, the binary data
> needs to be transmitted directly (this is evidenced by looking at the
> tcp-stream of an existing client for uploading files).

But I think your problem has nothing to do with MIME: you are mixing
unicode and byte string objects; from your traceback, either the "L" list
or "eol" contains a unicode object, so joining it with your binary data
forces an implicit ASCII decode, which fails on byte 0xff.

> It seems that Python thinks it knows better than I do, though.  I want
> to send this binary data straightaway to the server.  :-)

You don't appear to be using the standard email package (which includes  
MIME support) so don't blame Python...

-- 
Gabriel Genellina

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: shelf membership

Aaron Brady wrote:

> can you shelve objects with membership?
> 
> this gives you:
> 
> TypeError: object does not support item assignment
> dict 0 True
> Exception exceptions.TypeError: 'object does not support item assignment' 
> in  ignored
> 
> > ignored is a bit mysterious.  tx in advance.
> 
> from shelve import *
> class MyShelf(DbfilenameShelf):
>     def __init__(self, filename, flag='c', protocol=None,
>                  writeback=False, binary=None):
>         self.__dict__['ready'] = False
>         DbfilenameShelf.__init__(self, filename, flag, protocol,
>                                  writeback, binary)
>         self.ready = True
>     def __setattr__(self, name, value):
>         if not self.ready:
>             self.__dict__[name] = value
>         else:
>             print name, value, self.ready
>             self.__dict__[name] = value
>             DbfilenameShelf.__setitem__(self, name, value)
> 
> def open(filename, flag='c', protocol=None, writeback=False, binary=None):
>   return MyShelf(filename, flag, protocol, writeback, binary)

The root cause of your problems is that you are mixing two namespaces: that
of the shelved items and that used internally by DbfilenameShelf to
implement the shelving functionality.

While the cleanest approach is to not do it, you can make such a mix work
in /some/ cases if you give precedence to the "implementation namespace".
This requires that you pass through any implementation attributes unchanged:

pass_through_attributes = [...]
def __setattr__(self, name, value):
    if name in pass_through_attributes:
        self.__dict__[name] = value # *
    else:
        # do whatever you like

(*) Assuming that DbfilenameShelf is an oldstyle class and does not itself
implement a __setattr__() method.

From your error message one can infer that pass_through_attributes must
contain at least the name "dict"; for a complete list you have to inspect
the DbfilenameShelf source code.

An approach that is slightly more robust is to wrap DbfilenameShelf -- make
it an attribute of MyShelf -- and pass the relevant method calls to the
attribute.
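
A rough sketch of that wrapper (an illustration only, not a drop-in
replacement; pick for yourself which methods to delegate):

from shelve import DbfilenameShelf

class MyShelf(object):
    def __init__(self, filename, flag='c', protocol=None, writeback=False):
        # keep the real shelf as an attribute instead of subclassing it
        self.__dict__['_shelf'] = DbfilenameShelf(filename, flag, protocol,
                                                  writeback)
    def __setattr__(self, name, value):
        # attribute assignment doubles as item assignment on the shelf
        self._shelf[name] = value
    def __getattr__(self, name):
        try:
            return self._shelf[name]
        except KeyError:
            raise AttributeError(name)
    def close(self):
        self._shelf.close()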

Peter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: tag replacement in toxml()

En Sun, 01 Apr 2007 05:26:48 -0300, Manuel Ospina <[EMAIL PROTECTED]>  
escribió:

> I am new on the list and I already have a question :-(.

Welcome!

> I have something like this:
>
> import xml.dom.minidom
> from xml.dom.minidom import getDOMImplementation
> impl = getDOMImplementation()
> myDoc = impl.createDocument(None, "example", None)
> myRoot = myDoc.documentElement
> myNode1 = myDoc.createElement("node")
> myNode2 = myDoc.createElement("nodeTwo")
> myText = myDoc.createTextNode("Here is the <b>problem")
> myNode2.appendChild(myText)
> myNode1.appendChild(myNode2)
> myRoot.appendChild(myNode1)
> print myDoc.toxml()
>
> The result is:
> '<?xml version="1.0" ?>\n<example><node><nodeTwo>Here is the
> &lt;b&gt;problem</nodeTwo></node></example>'

That's right...

> My question is how I can avoid that toxml() replaces the tags?

createTextNode is used to create a *text* node: its argument is  
interpreted as the node contents, and quoted as appropriate. What if you
want it to say "Price<1000"? The < sign must be quoted.

You need a text node AND a <b> node, both children of nodeTwo.

Note: Using ElementTree is a lot easier!
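
For example, instead of the single createTextNode call, something along
these lines (a sketch; the second half assumes Python 2.5, where
ElementTree ships as xml.etree):

# minidom: a text node and a b element, side by side under nodeTwo
myNode2.appendChild(myDoc.createTextNode("Here is the "))
myB = myDoc.createElement("b")
myB.appendChild(myDoc.createTextNode("problem"))
myNode2.appendChild(myB)
# toxml() then emits: <nodeTwo>Here is the <b>problem</b></nodeTwo>

# the same document with ElementTree
from xml.etree import ElementTree as ET
example = ET.Element("example")
nodeTwo = ET.SubElement(ET.SubElement(example, "node"), "nodeTwo")
nodeTwo.text = "Here is the "
ET.SubElement(nodeTwo, "b").text = "problem"
print ET.tostring(example)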

-- 
Gabriel Genellina

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Mastering Python

Hendrik van Rooyen wrote:
>  "Dennis Lee Bieber" <[EMAIL PROTECTED]> wrote:
> 
> 
>> On Wed, 28 Mar 2007 07:55:20 +0200, "Hendrik van Rooyen"
>> <[EMAIL PROTECTED]> declaimed the following in comp.lang.python:
> 
>>> Pretty obvious of course, as is the pronunciation of the
>>> name:  "Cholmondely"
>>>
>> Is that a scottish "Ch" (as in LoCH Lomond), plain hard "Ch" (as in
>> CHristmas) or a soft "Ch" (as in CHicken)?
> 
> It comes out something like "Chum-lee", with the ch like chicken...
> 
> (that's what I have heard -  but who knows - It may have been 
> a regional dialect, a case of the blind leading the blind, or 
> someone pulling the piss..)
> 
You have been correctly informed. It's one of the least intuitive names 
in the English language.

regards
  Steve
-- 
Steve Holden   +44 150 684 7255  +1 800 494 3119
Holden Web LLC/Ltd  http://www.holdenweb.com
Skype: holdenweb http://del.icio.us/steve.holden
Recent Ramblings   http://holdenweb.blogspot.com

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: socket read timeout

Hendrik van Rooyen wrote:
>  <[EMAIL PROTECTED]> wrote:
> 
> 
>> hg> My issue with that is the effect on write: I only want a timeout on
>> hg> read ...  but anyway ...
>>
>> So set a long timeout when you want to write and short timeout when you want
>> to read.
>>
> 
> Are sockets full duplex?
> 
Yes. But you have to use non-blocking calls in your application to use 
them as full-duplex in your code.
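
A minimal sketch of that non-blocking style, multiplexing reads and writes
on one socket with select() (the address and payload are made up):

import select, socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('localhost', 9999))
sock.setblocking(0)

outgoing = 'hello\r\n'
while True:
    want_write = [sock] if outgoing else []
    readable, writable, _ = select.select([sock], want_write, [], 1.0)
    if readable:
        chunk = sock.recv(4096)
        if not chunk:
            break                      # peer closed the connection
        print repr(chunk)
    if writable and outgoing:
        sent = sock.send(outgoing)
        outgoing = outgoing[sent:]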

> I know Ethernet isn't.
> 
Don't know much, then, do you? ;-)

regards
  Steve
-- 
Steve Holden   +44 150 684 7255  +1 800 494 3119
Holden Web LLC/Ltd  http://www.holdenweb.com
Skype: holdenweb http://del.icio.us/steve.holden
Recent Ramblings   http://holdenweb.blogspot.com

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unicode list

Rehceb Rotkiv schrieb:
> Hello,
> 
> I have this little grep-like program:
> 
> ++snip++
> #!/usr/bin/python
> 
> import sys
> import re
> 
> pattern = sys.argv[1]
> inputfile = file(sys.argv[2], 'r')
> 
> for line in inputfile:
>     matches = re.findall(pattern, line)
>     if matches:
>         print matches
> ++snip++
> 
> Like this, the program prints some characters as strange escape 
> sequences, which is due to the input file being encoded in utf-8

As Paul said, your terminal is likely set to iso-8859 encoding, which
is why it doesn't display UTF-8 correctly. The above program produces
correct UTF-8 output.

What you could do is:
1. read the file in as unicode
2. print the unicode to the terminal (will use the terminal encoding) or
convert the unicode to strings with an explicit encoding before printing

codecs.open() is very helpful for step 1, BTW.
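
A sketch of both steps (the encodings here are assumptions -- match them
to your file and your terminal):

import codecs, re, sys

pattern = sys.argv[1].decode('utf-8')               # make the pattern unicode too
inputfile = codecs.open(sys.argv[2], 'r', 'utf-8')  # step 1: decode while reading

for line in inputfile:
    matches = re.findall(pattern, line)
    if matches:
        for m in matches:
            # step 2: encode explicitly for the terminal
            print m.encode(sys.stdout.encoding or 'utf-8')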

Georg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: re.findall() hangs in python

On Apr 1, 6:12 am, "[EMAIL PROTECTED]"
<[EMAIL PROTECTED]> wrote:
> But when 'data' does not contain pattern, it just hangs at
> 're.findall'
>
> pattern = re.compile("(.*) re.S)

That pattern is just really slow to evaluate. What you want is
probably something more like this:

re.compile(r'<img[^>]*src\s*=\s*"([^"]*img[^"]*)"')

"dot" is usually not so great. Prefer "NOT end-character", like [^>]
or [^"].

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] Python 3000 PEP: Postfix type declarations

Brilliant!

On 4/1/07, Georg Brandl <[EMAIL PROTECTED]> wrote:
>  def foo${LATIN SMALL LETTER LAMBDA WITH STROKE}$(x${DOUBLE-STRUCK 
> CAPITAL C}$):
>  return None${ZERO WIDTH NO-BREAK SPACE}$
>
> This is still easy to read and makes the full power of type-annotated Python
> available to ASCII believers.

+1

J
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Generic logic/conditional class or library for classification of data

On Sat, 31 Mar 2007 21:54:46 -0700, Basilisk96 wrote:

> As a very basic example, consider a set of uncategorized objects that
> have text descriptions associated with them. The objects are some type
> of tangible product, e.g., books. So the input object has a
> Description attribute, and the output object (a categorized book)
> would have some attributes like Discipline, Target audience, etc.
> Let's say that one such rule is "if ( 'description' contains
> 'algebra') then ('discipline' = 'math', 'target' = 'student')". Keep
> in mind that all these attribute names and their values are not known at
> design time.

Easy-peasy.

rules = {'algebra': {'discipline': 'math', 'target': 'student'},
         'python': {'section': 'programming', 'os': 'linux, windows'}}

class Input_Book(object):
    def __init__(self, description):
        self.description = description

class Output_Book(object):
    def __repr__(self):
        return "Book - %s" % self.__dict__

def process_book(book):
    out = Output_Book()
    for desc in rules:
        if desc in book.description:
            attributes = rules[desc]
            for attr in attributes:
                setattr(out, attr, attributes[attr])
    return out

book1 = Input_Book('python for cheese-makers')
book2 = Input_Book('teaching algebra in haikus')
book3 = Input_Book('how to teach algebra to python programmers')


>>> process_book(book1)
Book - {'section': 'programming', 'os': 'linux, windows'}
>>> process_book(book2)
Book - {'discipline': 'math', 'target': 'student'}
>>> process_book(book3)
Book - {'discipline': 'math', 'section': 'programming', 
'os': 'linux, windows', 'target': 'student'}


I've made some simplifying assumptions: the input object always has a
description attribute. Also the behaviour when two or more rules set the
same attribute is left undefined. If you want more complex rules you can
follow the same technique, except you'll need a set of meta-rules to
decide what rules to follow.

But having said that, I STRONGLY recommend that you don't follow that
approach of creating variable instance attributes at runtime. The reason
is, it's quite hard for you to know what to do with an Output_Book once
you've got it. You'll probably end up filling your code with horrible
stuff like this:

if hasattr(book, 'target'):
    do_something_with(book.target)
elif hasattr(book, 'discipline'):
    do_something_with(book.discipline)
elif ... # etc.


Replacing the hasattr() checks with try...except blocks isn't any
less icky.

Creating instance attributes at runtime has its place; I just don't think
this is it.

Instead, I suggest you encapsulate the variable parts of the book
attributes into a single attribute:

class Output_Book(object):
    def __init__(self, name, data):
        self.name = name # common attribute(s)
        self.data = data # variable attributes


Then, instead of setting each variable attribute individually with
setattr(), simply collect all of them in a dict and save them in data:

def process_book(book):
    data = {}
    for desc in rules:
        if desc in book.description:
            data.update(rules[desc])
    return Output_Book(book.name, data)


Now you can do this:

outbook = process_book(book)
# handle the common attributes that are always there
print outbook.name
# handle the variable attributes
print "Stock = %s" % output.data.setdefault('status', 0)
print "discipline = %s" % output.data.get('discipline', 'none')
# handle all the variable attributes
for key, value in output.data.iteritems():
do_something_with(key, value)


Any time you have to deal with variable attributes that may or may not be
there, you have to use more complex code, but you can minimize the
complexity by keeping the variable attributes separate from the common
attributes.


-- 
Steven.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: saving Python process state for later debugging

On 1 avr, 09:39, [EMAIL PROTECTED] wrote:
> On Apr 1, 2:07 am, "aspineux" <[EMAIL PROTECTED]> wrote:
>
>
>
> > Pylons has something like that:
> > http://pylonshq.com/docs/0.9.4.1/interactive_debugger.html
>
> > Turbogears has the same with option tg.fancy_exception
>
> I could get it wrong, but these things seem to be about debugging
> crashed processes "online", not saving snapshots to files for later
> inspection. Can you e-mail a process snapshot to a different machine
> with them, for example? I understood that you are supposed to debug
> the original process, which is kept alive, via the web. I'm talking
> about a situation where you have a Python program deployed to a user
> who is not running a web server, and have the user send you a snapshot
> as a bug report.
>
> -- Yossi

A context in Python is no more than two dictionaries (globals() and
locals()). You can easily serialize both to store them.
You can walk the Python stack using the inspect module and generate
the context for all the functions in the stack trace.

This is probably no more than 50 lines of code, maybe 20 :-)

You can find examples of how to get this info and use it in the
sample I was referring to before.
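
A rough sketch of that idea (everything here is illustrative; locals
that won't pickle are simply skipped):

import inspect, pickle

def _picklable(obj):
    try:
        pickle.dumps(obj)
        return True
    except Exception:
        return False

def snapshot(path):
    frames = []
    for record in inspect.stack()[1:]:          # skip snapshot() itself
        frame = record[0]
        frames.append({
            'function': frame.f_code.co_name,
            'filename': frame.f_code.co_filename,
            'lineno': frame.f_lineno,
            'locals': dict((k, v) for k, v in frame.f_locals.items()
                           if _picklable(v)),
        })
    pickle.dump(frames, open(path, 'wb'))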

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Mastering Python

Before we get too far away from the original question...
as you may have noticed, you have reached one of the best user
groups on the net, where help from the top gurus and best minds in
the Python universe is only a question away.
Go for it, you are in good hands.

Db

-- 
http://mail.python.org/mailman/listinfo/python-list


Extract information from HTML table

Hello,

I'm trying to extract the data from HTML table. Here is the part of
the HTML source :
"""

  

  
Sat, 31.03.2007 - 20:24:00
  
http://s2.bitefight.fr/bite/
bericht.php?q=01bf0ba7258ad976d890379f987d444e&beid=2628033">Vous
avez tendu une embuscade à votre victime !


  

  
Sat, 31.03.2007 - 20:14:35
  
http://s2.bitefight.fr/bite/
bericht.php?q=01bf0ba7258ad976d890379f987d444e&beid=2628007">Vous
avez tendu une embuscade à votre victime !


  

  
Sat, 31.03.2007 - 20:11:39
   Vous avez bien accompli votre
tâche de Gardien de Cimetière et vous vous
voyez remis votre salaire comme récompense.
Vous recevez 320

et collectez 3 d'expérience !

"""

I would like to transform this in following thing :

Date : Sat, 31.03.2007 - 20:24:00
ContainType : Link
LinkText : Vous avez tendu une embuscade à votre victime !
LinkURL : 
http://s2.bitefight.fr/bite/bericht.php?q=01bf0ba7258ad976d890379f987d444e&beid=2628033

Date : Sat, 31.03.2007 - 20:14:35
ContainType : Link
LinkText : Vous avez tendu une embuscade à votre victime !
LinkURL : 
http://s2.bitefight.fr/bite/bericht.php?q=01bf0ba7258ad976d890379f987d444e&beid=2628007

Date : Sat, 31.03.2007 - 20:14:35
ContainType : Text
Contain : Vous avez bien accompli votre tâche de Gardien de Cimetière
et vous vous
voyez remis votre salaire comme récompense.
Vous recevez 320 et collectez 3 d'expérience !



Do you know the way to do it ?

Thanks

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Generic logic/conditional class or library for classification of data


On Mar 31, 2007, at 11:54 PM, Basilisk96 wrote:

> This topic is difficult to describe in one subject sentence...
>
> Has anyone come across the application of the simple statement "if
> (object1's attributes meet some conditions) then (set object2's
> attributes to certain outcomes)", where "object1" and "object2" are
> generic objects, and the "conditions" and "outcomes" are dynamic run-
> time inputs? Typically, logic code for any application out there is
> hard-coded. I have been working with Python for a year, and its
> flexibility is nothing short of amazing. Wouldn't it be possible to
> have a class or library that can do this sort of dynamic logic?
>
> The main application of such code would be for classification
> algorithms which, based on the attributes of a given object, can
> classify the object into a scheme. In general, conditions for
> classification can be complex, sometimes involving a collection of
> "and", "or", "not" clauses. The simplest outcome would involve simply
> setting a few attributes of the output object to given values if the
> input condition is met. So each such "if-then" clause can be viewed as
> a rule that is custom-defined at runtime.
>
> As a very basic example, consider a set of uncategorized objects that
> have text descriptions associated with them. The objects are some type
> of tangible product, e.g., books. So the input object has a
> Description attribute, and the output object (a categorized book)
> would have some attributes like Discipline, Target audience, etc.
> Let's say that one such rule is "if ( 'description' contains
> 'algebra') then ('discipline' = 'math', 'target' = 'student') ". Keep
> in mind that all these attribute names and their values are not known
> at design time.
>
> Is there one obvious way to do this in Python?
> Perhaps this is more along the lines of data mining methods?
> Is there a library with this sort of functionality out there already?
>
> Any help will be appreciated.

You may be interested in http://divmod.org/trac/wiki/DivmodReverend  
-- it is a general purpose Bayesian classifier written in python.

hope this helps,
Michael
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Converting _node* to a Code object?

Gabriel Genellina wrote:
> En Sun, 01 Apr 2007 01:35:59 -0300, Brendon Costa <[EMAIL PROTECTED]>  
> escribió:
> 
>> How do i convert a _node* object returned from:
>> PyParser_SimpleParseStringFlagsFilename()
>>
>> into a code object i can use as a module to import with:
>> PyImport_ExecCodeModule()
> 
> Using PyNode_Compile. But why don't you use Py_CompileXXX instead?
> And look into import.c, maybe there is something handy.
> 

Thanks for the pointer. I am not using Py_CompileXXX because I could
only find Py_CompileString... I could not find a file version of it
(which I thought should exist).

In my original email, though, I copied and pasted the wrong function.
Instead of:
PyParser_SimpleParseStringFlagsFilename()

I meant to use:
PyParser_SimpleParseFileFlags()


Basically I will open a FILE* for the file requested, parse it and load
it into the module. Using this method I don't have to load its contents
into a string first to be compiled, but just get the Python library to
parse directly from the file.

It all seems to work fine now. Thanks for the help.
Brendon.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] Python 3000 PEP: Postfix type declarations


On 4/1/07, Georg Brandl <[EMAIL PROTECTED]> wrote:
[...]


Example
===

This is the standard ``os.path.normpath`` function, converted to type
declaration
syntax::

 def normpathƛ(path✎)✎:
 """Normalize path, eliminating double slashes, etc."""
 if path✎ == '':
 return '.'
 initial_slashes✓ = path✎.startswithƛ('/')✓
 # POSIX allows one or two initial slashes, but treats three or
more
 # as single slash.
 if (initial_slashes✓ and
 path✎.startswithƛ('//')✓ and not path✎.startswithƛ('///')✓)✓:
 initial_slashesℕ = 2
 comps♨ = path✎.splitƛ('/')♨
 new_comps♨ = []♨
 for comp✎ in comps♨:
 if comp✎ in ('', '.')⒯:
 continue
 if (comp✎ != '..' or (not initial_slashesℕ and not
new_comps♨)✓ or
  (new_comps♨ and new_comps♨[-1]✎ == '..')✓)✓:
 new_comps♨.appendƛ(comp✎)
 elif new_comps♨:
 new_comps♨.popƛ()✎
 comps♨ = new_comps♨
 path✎ = '/'.join(comps♨)✎
 if initial_slashesℕ:
 path✎ = '/'*initial_slashesℕ + path✎
 return path✎ or '.'

As you can clearly see, the type declarations add expressiveness, while at
the
same time they make the code look much more professional.



 Is this supposed to be a joke?  Please tell me this isn't a real PEP.
While I'm all for allowing unicode identifiers in Python, postfix type
annotations make Python look like Perl.  And how can you claim this code is
more readable?  It certainly is _less_ readable, not more.

 I agree that Python should support type annotations, but they should be
optional and only present at the function interfaces, i.e. specify the type
in the function parameter lists, like in plain old C.

 +1 from me for allowing unicode identifiers.

 -MAXVOTE for type annotations in identifiers.

--
Gustavo J. A. M. Carneiro
"The universe is always one step beyond logic."
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: [Python-Dev] Python 3000 PEP: Postfix type declarations


On 4/1/07, Gustavo Carneiro <[EMAIL PROTECTED]> wrote:


On 4/1/07, Georg Brandl <[EMAIL PROTECTED]> wrote:
[...]

> Example
> ===
>
> This is the standard ``os.path.normpath`` function, converted to type
> declaration
> syntax::
>
>  def normpathƛ(path✎)✎:
>  """Normalize path, eliminating double slashes, etc."""
>  if path✎ == '':
>  return '.'
>  initial_slashes✓ = path✎.startswithƛ('/')✓
>  # POSIX allows one or two initial slashes, but treats three or
> more
>  # as single slash.
>  if (initial_slashes✓ and
>  path✎.startswithƛ('//')✓ and not
> path✎.startswithƛ('///')✓)✓:
>  initial_slashesℕ = 2
>  comps♨ = path✎.splitƛ('/')♨
>  new_comps♨ = []♨
>  for comp✎ in comps♨:
>  if comp✎ in ('', '.')⒯:
>  continue
>  if (comp✎ != '..' or (not initial_slashesℕ and not
> new_comps♨)✓ or
>   (new_comps♨ and new_comps♨[-1]✎ == '..')✓)✓:
>  new_comps♨.appendƛ(comp✎)
>  elif new_comps♨:
>  new_comps♨.popƛ()✎
>  comps♨ = new_comps♨
>  path✎ = '/'.join(comps♨)✎
>  if initial_slashesℕ:
>  path✎ = '/'*initial_slashesℕ + path✎
>  return path✎ or '.'
>
> As you can clearly see, the type declarations add expressiveness, while
> at the
> same time they make the code look much more professional.


  Is this supposed to be a joke?



 /me ashamed for not having noticed the date of this PEP... :P

--
Gustavo J. A. M. Carneiro
"The universe is always one step beyond logic."
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Extract information from HTML table

On Apr 1, 10:13 pm, "Ulysse" <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I'm trying to extract the data from HTML table. Here is the part of
> the HTML source :
> """
> 
>   
>  type="checkbox">
>   
> Sat, 31.03.2007 - 20:24:00
>   
> http://s2.bitefight.fr/bite/
> bericht.php?q=01bf0ba7258ad976d890379f987d444e&beid=2628033">Vous
> avez tendu une embuscade à votre victime !
> 
> 
>   
>  type="checkbox">
>   
> Sat, 31.03.2007 - 20:14:35
>   
> http://s2.bitefight.fr/bite/
> bericht.php?q=01bf0ba7258ad976d890379f987d444e&beid=2628007">Vous
> avez tendu une embuscade à votre victime !
> 
> 
>   
>  type="checkbox">
>   
> Sat, 31.03.2007 - 20:11:39
>Vous avez bien accompli votre
> tâche de Gardien de Cimetière et vous vous
> voyez remis votre salaire comme récompense.
> Vous recevez 320
>  alt="Or" align="absmiddle" border="0">
> et collectez 3 d'expérience !
> 
> """
>
> I would like to transform this in following thing :
>
> Date : Sat, 31.03.2007 - 20:24:00
> ContainType : Link
> LinkText : Vous avez tendu une embuscade à votre victime !
> LinkURL 
> :http://s2.bitefight.fr/bite/bericht.php?q=01bf0ba7258ad976d890379f987...
>
> Date : Sat, 31.03.2007 - 20:14:35
> ContainType : Link
> LinkText : Vous avez tendu une embuscade à votre victime !
> LinkURL 
> :http://s2.bitefight.fr/bite/bericht.php?q=01bf0ba7258ad976d890379f987...
>
> Date : Sat, 31.03.2007 - 20:14:35
> ContainType : Text
> Contain : Vous avez bien accompli votre tâche de Gardien de Cimetière
> et vous vous
> voyez remis votre salaire comme récompense.
> Vous recevez 320 et collectez 3 d'expérience !
>
> 
>
> Do you know the way to do it ?

You can use Beautiful Soup http://www.crummy.com/software/BeautifulSoup/

see this page to see how you can search for tags, then retrieve the
contents

http://www.crummy.com/software/BeautifulSoup/documentation.html#Searching%20Within%20the%20Parse%20Tree

Cheers



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Extract information from HTML table

On Apr 1, 3:13 pm, "Ulysse" <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I'm trying to extract the data from HTML table. Here is the part of
> the HTML source :
>
> 
>
> Do you know the way to do it ?

Beautiful Soup is an easy way to parse HTML (that may be broken).
http://www.crummy.com/software/BeautifulSoup/

Here's a start of a parser for your HTML:

soup = BeautifulSoup(txt)
for tr in soup('tr'):
    dateTd, textTd = tr('td')[1:]
    print 'Date :', dateTd.contents[0].strip()
    print textTd  # element still needs parsing

where txt is the string in your message.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] Python 3000 PEP: Postfix type declarations


>   Is this supposed to be a joke? 

First of April? Likely.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: help with dates


On 31 Mar 2007 14:21:23 -0700, [EMAIL PROTECTED] <[EMAIL PROTECTED]>
wrote:



I'm working on a website for some firefighters that want to be able to
sign-up for overtime and I need some help figuring out a date related
problem.

Here's the scenario:

Four groups of firefighters (group1, group2, group3, group4). Each
group works a 24 hr shift. So group1 works April 1, group2 works April
2, group3 works April 3, group4 works April 4, group 1 works April 5,
etc. It just keeps rolling like this forever into next year, etc.

I need to come up with a methodology for the following:



I have a small program for you, hope this helps.

dutycycle.py


#!/usr/bin/env python

"""Data Cycle Finder
The input date is of the form dd/mm/.
"""

from datetime import date
import sys

def dutyfinder(_date):
   start = date(2007,1,1) # the duty cycle started on 1st jan 2007
   return (date.toordinal(_date)-date.toordinal(start))%4


indate = sys.argv[1].split('/')
enddate = date(int(indate[2]),int(indate[1]),int(indate[0]))
onduty = int(dutyfinder(enddate)) + 1
print onduty

---

Now, call this program with your input date in dd/mm/yyyy format. You can
modify the code to suit mm/dd/yyyy format. Also modify the start date, which
is now 1st January 2007. The output will be the number of the group.

Ex: $ python dutycycle.py 10/1/2007
output: 2

--
With Regards


Parthan "Technofreak"

http://technofreakatchennai.wordpress.com
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: CRC CCITT UPDATE in Python


Hi Martin!

 OK, thanks; but it works on the AVR... After compiling it, it agrees
with the device that I need to talk to and the software works fine, but I
would like to port it to the PC in Python...

Hi Folks,

 So sorry, but I tried to find it by googling and by searching Koders
(http://www.koders.com) and tried to understand it better, but unhappily I
didn't have success... I normally try the hands-on technique and practise
the DIY philosophy, but in this case my mind isn't helping me! :-)

 Please, could you help me? :-) How do I port these hi8 and lo8 to Python;
is there some similar function?

Tnx,

./Fernando -Py - thorneiro -


On 4/1/07, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:



>   Please, is there here some good soul that understand this
> functions hi8 and lo8 (from GCC AVR) and can help me to port it to
> Python?
>
> uint16_t
> crc_ccitt_update (uint16_t crc, uint8_t data)
> {
>     data ^= lo8 (crc);
>     data ^= data << 4;
>     return ((((uint16_t)data << 8) | hi8 (crc)) ^ (uint8_t)(data >> 4)
>             ^ ((uint16_t)data << 3));
> }

Most likely, lo8(crc) == crc & 0xFF, and hi8(crc) == (crc >> 8) & 0xFF
(the last bit masking might be redundant, as crc should not occupy more
than 16 bits, anyway).
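
Following that hint, a straightforward (untested) Python port could look
like this; the masking stands in for the uint8_t/uint16_t truncation:

def crc_ccitt_update(crc, data):
    data ^= crc & 0xFF                    # data ^= lo8(crc)
    data = (data ^ (data << 4)) & 0xFF    # data ^= data << 4, kept to 8 bits
    return (((data << 8) | ((crc >> 8) & 0xFF))    # (data << 8) | hi8(crc)
            ^ (data >> 4) ^ (data << 3)) & 0xFFFF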

HTH,
Martin


-- 
http://mail.python.org/mailman/listinfo/python-list

Re: I18n issue with optik

* Steven Bethard (Sat, 31 Mar 2007 20:08:45 -0600)
> Thorsten Kampe wrote:
> > I've written a script which uses Optik/Optparse to display the 
> > options (which works fine). The text for the help message is localised 
> > (with german umlauts) and when I execute the script with the localised 
> > environment variable set, I get this traceback[1]. The interesting 
> > thing is that the localised optparse messages from displays fine - 
> > it's only my localisation that errors.
> > 
> > From my understanding, my script doesn't put out anything, it's 
> > optik/optparse who does that. My po file is directly copied from the 
> > optik po file (who displays fine) and modified so the po file should 
> > be fine, too.
> > 
> > What can I do to troubleshoot whether the culprit is my script, optik 
> > or gettext?
> > 
> > Would it make sense to post the script and the mo or po files?
> 
> Yes, probably.  Though if you can reduce it to the simplest test case 
> that produces the error, it'll increase your chances of having someone 
> look at it.

The most simple test.py is:

###
#! /usr/bin/env python

import gettext, \
   os,  \
   sys

gettext.textdomain('optparse')
gettext.install('test')

from optparse import OptionParser, \
 OptionGroup

cmdlineparser = OptionParser(description = _('THIS SOFTWARE COMES 
WITHOUT WARRANTY, LIABILITY OR SUPPORT!'))

options, args = cmdlineparser.parse_args()
###

When I run LANGUAGE=de ./test.py --help I get the error.

### This is the test.de.po file
# Copyright (C) 2006 Thorsten Kampe
# Thorsten Kampe <[EMAIL PROTECTED]>, 2006

msgid  ""
msgstr ""

"Project-Id-Version: Template 1.0\n"
"POT-Creation-Date: Tue Sep  7 22:20:34 2004\n"
"PO-Revision-Date: 2005-07-03 16:47+0200\n"
"Last-Translator: Thorsten Kampe <[EMAIL PROTECTED]>\n"
"Language-Team: Thorsten Kampe <[EMAIL PROTECTED]>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=ISO-8859-15\n"
"Content-Transfer-Encoding: 8-bit\n"
"Generated-By: pygettext.py 1.5\n"

msgid  "THIS SOFTWARE COMES WITHOUT WARRANTY, LIABILITY OR SUPPORT!"
msgstr "DIESES PROGRAMM HAT WEDER GEWÄHRLEISTUNG, HAFTUNG NOCH 
UNTERSTÜTZUNG!"
###

The localisation now produces an error in the localised optik files, 
too.

Under Windows I get " File "G:\program files\python\lib\encodings
\cp1252.py", line 12, in encode
   return codecs.charmap_encode(input,errors,encoding_table)"

Is there something I have to do to put the terminal in "non-ascii 
output mode"?

I tried

###
#! /usr/bin/env python
# -*- coding: ISO-8859-15 -*-

print "DIESES PROGRAMM HAT WEDER GEWÄHRLEISTUNG, HAFTUNG NOCH 
UNTERSTÜTZUNG!"
###

...and this worked. That means that my terminal is willing to print, 
right?! 
 
> You could also try posting to the optik list:
>  http://lists.sourceforge.net/lists/listinfo/optik-users

I already did this via Gmane (although the list seems pretty dead to 
me). Sourceforge seems to have a bigger problem, as [1] and [2] both error.

Sorry for the confusion but this Unicode magic is far from being 
rational. I guess most people just don't get it...


Thorsten
[1] http://sourceforge.net/mailarchive/forum.php?forum=optik-users
[2] https://lists.sourceforge.net/lists/listinfo
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unicode list

> When printing a list, the individual elements are converted with repr(),
> not with str(). For a string object, repr() adds escape codes for all
> bytes that are not printable ASCII characters.

Thanks Martin, you're right, it were the repr() calls that messed up the 
output. Iterating the array like you proposed is even 1/100s faster ;)

Regards,
Rehceb
-- 
http://mail.python.org/mailman/listinfo/python-list


SimpleXMLRPCServer - client address

Hello all,

   I am writing an application based on the SimpleXMLRPCServer class. I
would like to know the IP address of the client performing the RPC. Is
that possible, without having to abandon the SimpleXMLRPCServer class?

-- 
Kind regards,
Jan Danielsson
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Extract information from HTML table

On Apr 1, 2:52 pm, [EMAIL PROTECTED] wrote:
> On Apr 1, 3:13 pm, "Ulysse" <[EMAIL PROTECTED]> wrote:
>
> > Hello,
>
> > I'm trying to extract the data from HTML table. Here is the part of
> > the HTML source :
>
> > 
>
> > Do you know the way to do it ?
>
> Beautiful Soup is an easy way to parse HTML (that may be 
> broken).http://www.crummy.com/software/BeautifulSoup/
>
> Here's a start of a parser for your HTML:
>
> soup = BeautifulSoup(txt)
> for tr in soup('tr'):
>     dateTd, textTd = tr('td')[1:]
>     print 'Date :', dateTd.contents[0].strip()
>     print textTd  # element still needs parsing
>
> where txt is the string in your message.

I have seen the Beautiful Soup online help and tried to apply that to
my problem. But it seems to be a little bit hard. I would rather try to
do this with regular expressions...

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: tag replacement in toxml()

> import xml.dom.minidom
> from xml.dom.minidom import getDOMImplementation
> impl = getDOMImplementation()
> myDoc = impl.createDocument(None, "example", None)
> myRoot = myDoc.documentElement
> myNode1 = myDoc.createElement("node")
> myNode2 = myDoc.createElement("nodeTwo")
> myText = myDoc.createTextNode("Here is the <b>problem")
> myNode2.appendChild(myText)
> myNode1.appendChild(myNode2)
> myRoot.appendChild(myNode1)
> print myDoc.toxml()
> 
> The result is:
> '<?xml version="1.0" ?>\n<example><node><nodeTwo>Here is the
> &lt;b&gt;problem</nodeTwo></node></example>'
> 
> 
> My question is how I can avoid that toxml() replaces the tags?

Gabriel already answered the question: you need to add a 'b'
element, which has a text child with the text 'problem'; this
b element needs to be a sibling of the text node 'Here is the '.

This still won't give you the output "Here is the <b>problem",
as that will insert a closing tag. If you really want to produce
the text

'<?xml version="1.0" ?>\n<example><node><nodeTwo>Here is the
<b>problem</nodeTwo></node></example>'

you cannot use an XML library to do so: this text is not
well-formed XML (because an unclosed <b> is illegal syntax).

Regards,
Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Shed Skin Python-to-C++ Compiler 0.0.21, Help needed

> Anyway, the only real point is that if there is a concern about the
> copyright and licensing of the output of ShedSkin, then we merely need
> to ask the author of it to clarify matters and move on with life.  With
> the exception of GNAT, to date no GPL'd compiler has ever placed a GPL
> restriction on its output.  Whether this is explicit or implicit doesn't
> matter, so long as it's there.

It's fine if people want to create non-GPL software with Shed Skin. It
is at least my intention to have only the compiler proper be GPL
(LICENSE states that the run-time libraries are BSD..)


mark dufour (Shed Skin author).

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: I18n issue with optik

Just an addition: when I insert this statement...

print _('THIS SOFTWARE COMES WITHOUT WARRANTY, LIABILITY OR SUPPORT!')

into this script, the line is printed out. So if my script can output
the localised text but optparse can't, it should be an optparse bug,
right?!

Thorsten
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Mastering Python (OT)

"Steve Holden" <[EMAIL PROTECTED]>  wrote:


> Hendrik van Rooyen wrote:

> > It comes out something like "Chum-lee", with the ch like chicken...
> > 
> > (that's what I have heard -  but who knows - It may have been 
> > a regional dialect, a case of the blind leading the blind, or 
> > someone pulling the piss..)
> > 
> You have been correctly informed. It's one of the least intuitive names 
> in the English language.

Oh No! - don't tell me there is worse - this is already enough to drive 
a saint to drink!

I will have to move to "Hants"...

: - ) 

- Hendrik

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: socket read timeout

 "Steve Holden" <[EMAIL PROTECTED]>


> Hendrik van Rooyen wrote:
> >  <[EMAIL PROTECTED]> wrote:
> >
> >
> >> hg> My issue with that is the effect on write: I only want a timeout on
> >> hg> read ...  but anyway ...
> >>
> >> So set a long timeout when you want to write and short timeout when you
want
> >> to read.
> >>
> >
> > Are sockets full duplex?
> >
> Yes. But you have to use non-blocking calls in your application to use
> them as full-duplex in your code.

This seems to bear out the scenario I have described elsewhere in this
thread - I think it's caused by the file handlers, but I don't _know_ it.

>
> > I know Ethernet isn't.
> >
> Don't know much, then, do you? ;-)

No not really - I easily get confused by such things as collisions...

: - )

- Hendrik

-- 
http://mail.python.org/mailman/listinfo/python-list


ISO programming projects




I'm looking for a collection of useful programming projects, at
the "hobbyist" level.

My online search did turn up a few collections (isolated projects
are less useful to me at the moment), but these projects are either
more difficult than what I'm looking for (e.g. code a C compiler)
or not terribly useful to the average person (e.g. a function that
efficiently computes the n-th Fibonacci number).

Any pointers would be much appreciated.

kj

-- 
NOTE: In my address everything before the first period is backwards;
and the last period, and everything after it, should be discarded.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to remove specified cookie in cookie jar?

"ken" <[EMAIL PROTECTED]> writes:

> How to remove specified cookie (via a given name) in cookie jar?
> 
> I have the following code, but how can I remove a specified cookie in
> the cookie jar?
>  cj = cookielib.LWPCookieJar()
> 
>  if cj is not None:
>      if os.path.isfile(COOKIEFILE):
>          print 'Loading Cookie--'
>          cj.load(COOKIEFILE)

cj.clear('example.com', '/', 'cookiename')


Note that the domain arg must match the cookie domain exactly (and the
domain might, for example, start with a dot).  You may want to iterate
over the Cookie objects in the CookieJar to find the one(s) you want
to remove, but it's not supported to remove them at the same time as
iterating, so (UNTESTED):

remove = []
for cookie in cj:
    if is_to_be_removed(cookie):
        remove.append(cookie)
for cookie in remove:
    cj.clear(cookie.domain, cookie.path, cookie.name)


http://docs.python.org/lib/cookie-jar-objects.html

"""
clear(  [domain[, path[, name]]])
Clear some cookies.

If invoked without arguments, clear all cookies. If given a single
argument, only cookies belonging to that domain will be
removed. If given two arguments, cookies belonging to the
specified domain and URL path are removed. If given three
arguments, then the cookie with the specified domain, path and
name is removed.

Raises KeyError if no matching cookie exists. 
"""


John
-- 
http://mail.python.org/mailman/listinfo/python-list


Python Based API

Hi,

I work on a project that is built entirely using python and Tkinter.
We are at the point where we would like to give access to our
functionality to others via some sort of API.  People who would use
our API develop in all kinds of languages from C/C++ to Pascal.

Ideas that come to mind that allow us to build such an API are:

1)  Require others to embed the Python interpreter via the C API to be
able to utilize our functionality.
2)  Build an XML RPC interface around our code.
3)  Rewrite our code base in C/C++, which should make it accessible to
all modern languages.

I'm looking for more and potentially better ideas that will allow us
to offer an API to our customers without having to throw away or redo
a lot of the python code that we have already written.

Thanks in advance,
Dean

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: saving Python process state for later debugging

On Apr 1, 2:57 pm, "aspineux" wrote:
>
> A context in python is no more than 2 dictionaries ( globals() and
> locals()).
> You can easily serialize both to store them.

I don't think it will work with objects defined by extension modules,
except if they somehow support serialization, will it? I guess I can
dump core for debugging these objects, and serialize the Python
context for debugging the rest. Finding the extension objects in the
core dump can be a pain though, as will be figuring out the stack with
interlaced Python & native code. And then there are the lovely cases
when CPython crashes, like "deletion of interned string failed. Abort
(core dumped)", where you get to shovel through CPython state with a
native debugger.

The thing with mixing native code with Python is that when native code
misbehaves, it's a big problem, and if Python code misbehaves, it's
still a problem, although a smaller one (serializing the native state
& navigating through it). Maybe the best way around this is to spawn
sub-processes for running native code...

-- 
http://mail.python.org/mailman/listinfo/python-list


zip files as nested modules?

Supposing that I have a directory tree like so:

a/
  __init__.py
  b/
    __init__.py
    c.py

and c.py has some function (let's call it d) within it.  I can, from Python, do:

from a.b.c import d
d()

And, that works.  Now, suppose I want to have a zipped module under a,
called b.zip.  Is there any way that I can accomplish the same thing,
but using the zip file as the inner module?

My directory layout is then

a/
  __init__.py
  b.zip

And b is a zipfile laid out like

b/
  __init__.py
  c.py

I tried populating a's __init__ with this:

import zipimport
import os
here = os.path.join(os.getcwd(), __path__[0])
zips = [f for f in os.listdir(here) if f.endswith('.zip')]
zips = [os.path.join(here, z) for z in zips]

for z in zips:
  print z
  mod = os.path.split(z)[-1][:-4]
  print mod
  globals()[mod] = zipimport.zipimporter(z).load_module(mod)

All the zip modules appear (I actually have a few zips, but that
shouldn't be important), but their contents do not seem to be
accessible in any way.  I could probably put import statements in all
the __init__.py files to import everything in the level below, but I
am under the impression that relative imports are frowned upon, and it
seems pretty bug-prone anyhow.

Any pointers on how to accomplish zip modules being nested within normal ones?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: I18n issue with optik

I guess the culprit is this snippet from optparse.py:

# used by test suite
def _get_encoding(self, file):
    encoding = getattr(file, "encoding", None)
    if not encoding:
        encoding = sys.getdefaultencoding()
    return encoding

def print_help(self, file=None):
    """print_help(file : file = stdout)

    Print an extended help message, listing all options and any
    help text provided with them, to 'file' (default stdout).
    """
    if file is None:
        file = sys.stdout
    encoding = self._get_encoding(file)
    file.write(self.format_help().encode(encoding, "replace"))

So this means: when the encoding of sys.stdout is US-ASCII, optparse
sets the encoding of the help text to ASCII, too. But that's
nonsense, because the encoding is declared in the po (localisation)
file.

How can I set the encoding of sys.stdout to another encoding? Of 
course this would be a terrible hack if the encoding of the 
localisation changes or different translators use different 
encodings...
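
One workaround that fits the code above (a sketch: it assumes the help
strings are unicode, e.g. gettext.install('test', unicode=True), and the
class name is made up) is to hand print_help() a stream that advertises
the catalogue's charset:

import sys

class Latin9Stdout(object):
    encoding = 'ISO-8859-15'       # the charset declared in the .po file
    def write(self, data):
        sys.stdout.write(data)

cmdlineparser.print_help(Latin9Stdout())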

Thorsten
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Cheeseshop needs mirrors

Richard Jones <[EMAIL PROTECTED]> writes:
[...]
> And of course I'll reiterate the same line I always do: the Cheese Shop was
> set up by a volunteer, enhanced by some other volunteers and exactly
> nothing more will get done unless more volunteers offer their time.

PyPI has "just worked" for me, so thanks for the work you've put into
it.

My theory is that if an open-source project is fairly new and
unstable, you'll often get lots of people saying nice things about it
hoping to get help.  Then if it gets better, people shut up, since it
just does its job.  Then they get used to it just working, and start
giving abuse instead of praise when it doesn't do everything they
want.

I still occasionally get praise for my open source stuff, so I figure
I've got a long way to go ;-)


John
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Opening Photoshop EPS with PIL?


.eps  ==> vector ; not bitmap


-- 
http://mail.python.org/mailman/listinfo/python-list


reverse engineering Excel spreadsheet

Hello,
 I am currently implementing (mainly in Python) 'models' that come
to me as Excel spreadsheets, with little additional information.  I am
expected to use these models in a web application.  Some contain many
worksheets and various macros.

What I'd like to do is extract the data and business logic so that I can
figure out exactly what these models actually do and code it up.  An
obvious (I think) idea is to generate an acyclic graph of the cell
dependencies so that I can identify which cells contain only data (no
parents) and those that depend on other cells.  If I could also extract
the relationships (functions), then I could feasibly produce something
in pure Python that would mirror the functionality of the original
spreadsheet (using e.g. Matplotlib for plots and more reliable RNGs /
statistical functions).

The final application will be running on a Linux server, but I can use a
Windows box (i.e. win32all) for processing the spreadsheets (hopefully
not manually).  Any advice on the feasibility of this, and how I might
achieve it would be appreciated.

I assume there are plenty of people who have a better knowledge of e.g.
COM than I do.  I suppose an alternative would be to convert to Open
Office and use PyUNO, but I have no experience with PyUNO and am not
sure how much more reliable the statistical functions of Open Office
are.  At the end of the day, the business logic will not generally be
complex, it's extracting it from the spreadsheet that's awkward.  Any
advice appreciated.  TIA.  Cheers.

Duncan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Shed Skin Python-to-C++ Compiler 0.0.21, Help needed

Kay Schluehr wrote:
> Indeed. The only serious problem from an acceptance point of view is
> that Mark tried to solve the more difficult problem first and hung on
> it. Instead of integrating a translator/compiler early with CPython,
> doing some factorization of Python module code into compilable and
> interpretable functions ( which can be quite rudimentary at first )
> together with some automatically generated glue code and *always have
> a running system* with monotone benefit for all Python code he seemed
> to stem an impossible task, namely translating the whole Python to C++
> and created therefore a "lesser Python". 

Trying to incrementally convert an old interpreter into a compiler
is probably not going to work.

> Otherwise it
> wouldn't be a big deal to do what is necessary here and even extend
> the system with perspective on Py3K annotations or other means to ship
> typed Python code into the compiler.

 Shed Skin may be demonstrating that "annotations" are unnecessary
cruft and need not be added to Python.  Automatic type inference
may be sufficient to get good performance.

 The Py3K annotation model is to some extent a repeat of the old
Visual Basic model.  Visual Basic started as an interpreter with one
default type, which is now called Variant, and later added the usual types,
Integer, String, Boolean, etc., which were then manually declared.
That's where Py3K is going.  Shed Skin may be able to do that job
automatically, which is a step forward and more compatible with
existing code.  Doing more at compile time means doing less work
at run time, where it matters.  This looks promising.

John Nagle

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: reverse engineering Excel spreadsheet

On Apr 1, 6:59 pm, Duncan Smith <[EMAIL PROTECTED]> wrote:
> Hello,
>  I am currently implementing (mainly in Python) 'models' that come
> to me as Excel spreadsheets, with little additional information.  I am
> expected to use these models in a web application.  Some contain many
> worksheets and various macros.
>
> What I'd like to do is extract the data and business logic so that I can
> figure out exactly what these models actually do and code it up.  An
> obvious (I think) idea is to generate an acyclic graph of the cell
> dependencies so that I can identify which cells contain only data (no
> parents) and those that depend on other cells.  If I could also extract
> the relationships (functions), then I could feasibly produce something
> in pure Python that would mirror the functionality of the original
> spreadsheet (using e.g. Matplotlib for plots and more reliable RNGs /
> statistical functions).
>
> The final application will be running on a Linux server, but I can use a
> Windows box (i.e. win32all) for processing the spreadsheets (hopefully
> not manually).  Any advice on the feasibility of this, and how I might
> achieve it would be appreciated.
>
> I assume there are plenty of people who have a better knowledge of e.g.
> COM than I do.  I suppose an alternative would be to convert to Open
> Office and use PyUNO, but I have no experience with PyUNO and am not
> sure how much more reliable the statistical functions of Open Office
> are.  At the end of the day, the business logic will not generally be
> complex, it's extracting it from the spreadsheet that's awkward.  Any
> advice appreciated.  TIA.  Cheers.
>
> Duncan

I'm not sure I understood what kind of information you want to
get out of the Excel sheet, sorry. But I hope this'll get you started
(at least it has a few nice tokens that might help you in googling):

import win32com.client

class Excel:
    def __init__(self, filename):
        self.closed = True
        self.xlApp = win32com.client.dynamic.Dispatch('Excel.Application')
        self.xlBook = self.xlApp.Workbooks.Open(filename)
        self.closed = False

    def sheet(self, sheetName):
        return self.xlBook.Worksheets(sheetName)

    def __del__(self):
        if not self.closed:
            self.close()

    def close(self):
        self.xlBook.Close(SaveChanges=1)
        self.xlApp.Quit()
        self.closed = True

excel = Excel('file.xls')
sheet = excel.sheet(1)
print sheet.Cells(6, 3)


I used it a few years ago to read and populate spreadsheet cells.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: I18n issue with optik

Thorsten Kampe wrote:
> * Steven Bethard (Sat, 31 Mar 2007 20:08:45 -0600)
>> Thorsten Kampe wrote:
>>> I've written a script which uses Optik/Optparse to display the 
>>> options (which works fine). The text for the help message is localised 
>>> (with german umlauts) and when I execute the script with the localised 
>>> environment variable set, I get this traceback[1]. The interesting 
>>> thing is that the localised optparse messages from displays fine - 
>>> it's only my localisation that errors.
>>>
>>> From my understanding, my script doesn't put out anything, it's 
>>> optik/optparse who does that. My po file is directly copied from the 
>>> optik po file (who displays fine) and modified so the po file should 
>>> be fine, too.
>>>
>>> What can I do to troubleshoot whether the culprit is my script, optik 
>>> or gettext?
>>>
>>> Would it make sense to post the script and the mo or po files?
>> Yes, probably.  Though if you can reduce it to the simplest test case 
>> that produces the error, it'll increase your chances of having someone 
>> look at it.
> 
> The most simple test.py is:
> 
> ###
> #! /usr/bin/env python
> 
> import gettext, \
>os,  \
>sys
> 
> gettext.textdomain('optparse')
> gettext.install('test')
> 
> from optparse import OptionParser, \
>  OptionGroup
> 
> cmdlineparser = OptionParser(description = _('THIS SOFTWARE COMES 
> WITHOUT WARRANTY, LIABILITY OR SUPPORT!'))
> 
> options, args = cmdlineparser.parse_args()
> ###
> 
> When I run LANGUAGE=de ./test.py --help I get the error.
> 
> ### This is the test.de.po file
> # Copyright (C) 2006 Thorsten Kampe
> # Thorsten Kampe <[EMAIL PROTECTED]>, 2006
> 
> msgid  ""
> msgstr ""
> 
> "Project-Id-Version: Template 1.0\n"
> "POT-Creation-Date: Tue Sep  7 22:20:34 2004\n"
> "PO-Revision-Date: 2005-07-03 16:47+0200\n"
> "Last-Translator: Thorsten Kampe <[EMAIL PROTECTED]>\n"
> "Language-Team: Thorsten Kampe <[EMAIL PROTECTED]>\n"
> "MIME-Version: 1.0\n"
> "Content-Type: text/plain; charset=ISO-8859-15\n"
> "Content-Transfer-Encoding: 8-bit\n"
> "Generated-By: pygettext.py 1.5\n"
> 
> msgid  "THIS SOFTWARE COMES WITHOUT WARRANTY, LIABILITY OR SUPPORT!"
> msgstr "DIESES PROGRAMM HAT WEDER GEWÄHRLEISTUNG, HAFTUNG NOCH 
> UNTERSTÜTZUNG!"
> ###
> 
> The localisation now produces an error in the localised optik files, 
> too.
> 
> Under Windows I get " File "G:\program files\python\lib\encodings
> \cp1252.py", line 12, in encode
>return codecs.charmap_encode(input,errors,encoding_table)"

I'm not very experienced with internationalization, but if you change::

 gettext.install('test')

to::

 gettext.install('test', unicode=True)

what happens?

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Does Numpy work on QNX 4.25 with Python 2.2?

Robert Kern <[EMAIL PROTECTED]> writes:

> ZMY wrote:
> > I am trying to convert some old Fortran code into Python program and
> > get them work on a QNX 4.25 system. Since the program requires speed,
> > I think using Numpy is really necessary. But I haven't found anything
> > on web about using numpy on QNX 4.25 (especially the for python
> > version 2.2).
> 
> numpy requires Python 2.3+. I haven't heard of anyone trying QNX.

Perhaps he could try Numeric (numpy's predecessor).


John
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: I18n issue with optik

Thorsten Kampe wrote:
> I guess the culprit is this snippet from optparse.py:
> 
> # used by test suite
> def _get_encoding(self, file):
>     encoding = getattr(file, "encoding", None)
>     if not encoding:
>         encoding = sys.getdefaultencoding()
>     return encoding
> 
> def print_help(self, file=None):
>     """print_help(file : file = stdout)
> 
>     Print an extended help message, listing all options and any
>     help text provided with them, to 'file' (default stdout).
>     """
>     if file is None:
>         file = sys.stdout
>     encoding = self._get_encoding(file)
>     file.write(self.format_help().encode(encoding, "replace"))
> 
> So this means: when the encoding of sys.stdout is US-ASCII, Optparse 
> sets the encoding to of the help text to ASCII, too. But that's 
> nonsense because the Encoding is declared in the Po (localisation) 
> file.
> 
> How can I set the encoding of sys.stdout to another encoding? Of 
> course this would be a terrible hack if the encoding of the 
> localisation changes or different translators use different 
> encodings...

If print_help() is what's wrong, you should probably hack print_help() 
instead of sys.stdout.  You could try something like::

     def print_help(self, file=None):
         """print_help(file : file = stdout)

         Print an extended help message, listing all options and any
         help text provided with them, to 'file' (default stdout).
         """
         if file is None:
             file = sys.stdout
         file.write(self.format_help())

     optparse.OptionParser.print_help = print_help

     cmdlineparser = optparse.OptionParser(description=...)
     ...

That is, you could monkey-patch print_help() before you create an 
OptionParser.

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Pickling a class with a __getattr__

Hi, I'm trying to pickle an object instance of a class that is like a
dict but with a __getattr__ and I'm getting pickling errors.

This works but is not good enough.

$ python2.4
>>> import cPickle as pickle
>>> class Dict(dict):
...     pass
...
>>>
>>>
>>> friend = Dict(name='Zahid', age=40)
>>> friend
{'age': 40, 'name': 'Zahid'}
>>> v=pickle.dumps(friend)
>>> p=pickle.loads(v)
>>> p
{'age': 40, 'name': 'Zahid'}

This is what happens when I'm trying to be clever:

>>> import cPickle as pickle
>>> class Dict(dict):
...     def __getattr__(self, key):
...         return self.__getitem__(key)
...
>>> friend = Dict(name='Zahid', age=40)
>>> friend
{'age': 40, 'name': 'Zahid'}
>>> friend.name
'Zahid'
>>> v=pickle.dumps(friend)
Traceback (most recent call last):
  File "", line 1, in ?
  File "/usr/lib/python2.4/copy_reg.py", line 73, in _reduce_ex
getstate = self.__getstate__
  File "", line 3, in __getattr__
KeyError: '__getstate__'


Why can't I pickle the slightly more featurefull class there called
'Dict'? I've got my reasons for not going for a simple type dict but
feel that that is irrelevant right now.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] Python 3000 PEP: Postfix type declarations

On 4/1/07, Georg Brandl <[EMAIL PROTECTED]> wrote:
[snip several pages of excellent ideas]
>
> The mapping between types and declarators is not static. It can be completely
> customized by the programmer, but for convenience there are some predefined
> mappings for some built-in types:
>
> =============  ==================================================
> Type           Declarator
> =============  ==================================================
> ``object``     � (REPLACEMENT CHARACTER)
> ``int``        ℕ (DOUBLE-STRUCK CAPITAL N)
> ``float``      ℮ (ESTIMATED SYMBOL)
> ``bool``       ✓ (CHECK MARK)
> ``complex``    ℂ (DOUBLE-STRUCK CAPITAL C)
> ``str``        ✎ (LOWER RIGHT PENCIL)
> ``unicode``    ✒ (BLACK NIB)
> ``tuple``      ⒯ (PARENTHESIZED LATIN SMALL LETTER T)
> ``list``       ♨ (HOT SPRINGS)
> ``dict``       ⧟ (DOUBLE-ENDED MULTIMAP)
> ``set``        ∅ (EMPTY SET) (*Note:* this is also for full sets)
> ``frozenset``  ☃ (SNOWMAN)
> ``datetime``   ⌚ (WATCH)
> ``function``   ƛ (LATIN SMALL LETTER LAMBDA WITH STROKE)
> ``generator``  ⚛ (ATOM SYMBOL)
> ``Exception``  ⌁ (ELECTRIC ARROW)
> =============  ==================================================
>
> The declarator for the ``None`` type is a zero-width space.
>
> These characters should be obvious and easy to remember and type for every
> programmer.
>
[snip]
>
> Example
> ===
>
> This is the standard ``os.path.normpath`` function, converted to type 
> declaration
> syntax::
>
>  def normpathƛ(path✎)✎:
>  """Normalize path, eliminating double slashes, etc."""
>  if path✎ == '':
>  return '.'
>  initial_slashes✓ = path✎.startswithƛ('/')✓
>  # POSIX allows one or two initial slashes, but treats three or more
>  # as single slash.
>  if (initial_slashes✓ and
>  path✎.startswithƛ('//')✓ and not path✎.startswithƛ('///')✓)✓:
>  initial_slashesℕ = 2
>  comps♨ = path✎.splitƛ('/')♨
>  new_comps♨ = []♨
>  for comp✎ in comps♨:
>  if comp✎ in ('', '.')⒯:
>  continue
>  if (comp✎ != '..' or (not initial_slashesℕ and not new_comps♨)✓ 
> or
>   (new_comps♨ and new_comps♨[-1]✎ == '..')✓)✓:
>  new_comps♨.appendƛ(comp✎)
>  elif new_comps♨:
>  new_comps♨.popƛ()✎
>  comps♨ = new_comps♨
>  path✎ = '/'.join(comps♨)✎
>  if initial_slashesℕ:
>  path✎ = '/'*initial_slashesℕ + path✎
>  return path✎ or '.'
>
> As you can clearly see, the type declarations add expressiveness, while at the
> same time they make the code look much more professional.

My only concern is that this doesn't go far enough. While knowing that
some object is a ⒯ is a good start, it would be so much more helpful
to know that it's a ⒯ of ✎s. I think something like ✎✎✎3⒯ to indicate
a 3-⒯ of ✎s would be nice. This would change the line in the above
from "if comp✎ in ('', '.')⒯:" to "if comp✎ in ('', '.')✎✎2⒯:", which
I think is a nice win in terms of readability, EIBTI and all that.

(Sidebar: I think the PEP should feature a section on how these new
type declarations will cut down on mailing list volume and
documentation size.)

In light of this PEP, PEP 3107's function annotations should be
rejected. All that hippie feel-good crap about "user-defined
annotations" and "open-ended semantics" and "no rules, man" was just
going to get us into trouble. This PEP's more modern conception of
type annotations give the language a power and expressiveness that my
PEP could never hope to match.

This is clearly a move in the right direction. +4 billion.

Collin Winter
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Pickling a class with a __getattr__

Peter Bengtsson wrote:

> Hi, I'm trying to pickle an object instance of a class that is like a
> dict but with a __getattr__ and I'm getting pickling errors.

> This is what happens when I'm trying to be clever:
> 
 import cPickle as pickle
 class Dict(dict):
> ... def __getattr__(self, key):
> ... return self.__getitem__(key)
> ...
 friend = Dict(name='Zahid', age=40)
 friend
> {'age': 40, 'name': 'Zahid'}
 friend.name
> 'Zahid'
 v=pickle.dumps(friend)
> Traceback (most recent call last):
>   File "", line 1, in ?
>   File "/usr/lib/python2.4/copy_reg.py", line 73, in _reduce_ex
> getstate = self.__getstate__
>   File "", line 3, in __getattr__
> KeyError: '__getstate__'
> 
> 
> Why can't I pickle the slightly more featurefull class there called
> 'Dict'? 

Because you allow your __getattr__() implementation to raise the wrong kind
of exception.

>>> from cPickle import dumps, loads
>>> class Dict(dict):
...     def __getattr__(self, key):
...         try:
...             return self[key]
...         except KeyError:
...             raise AttributeError
...
>>> friend = Dict(name="Zaphod", age=42)
>>> v = dumps(friend)
>>> p = loads(v)
>>> p
{'age': 42, 'name': 'Zaphod'}

Peter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Pickling a class with a __getattr__

On Apr 1, 5:48 pm, Peter Otten <[EMAIL PROTECTED]> wrote:
> Peter Bengtsson wrote:
> > Hi, I'm trying to pickle an object instance of a class that is like a
> > dict but with a __getattr__ and I'm getting pickling errors.
> > This is what happens when I'm trying to be clever:
>
>  import cPickle as pickle
>  class Dict(dict):
> > ... def __getattr__(self, key):
> > ... return self.__getitem__(key)
> > ...
>  friend = Dict(name='Zahid', age=40)
>  friend
> > {'age': 40, 'name': 'Zahid'}
>  friend.name
> > 'Zahid'
>  v=pickle.dumps(friend)
> > Traceback (most recent call last):
> >   File "", line 1, in ?
> >   File "/usr/lib/python2.4/copy_reg.py", line 73, in _reduce_ex
> > getstate = self.__getstate__
> >   File "", line 3, in __getattr__
> > KeyError: '__getstate__'
>
> > Why can't I pickle the slightly more featurefull class there called
> > 'Dict'?
>
> Because you allow your __getattr__() implementation to raise the wrong kind
> of exception.
>
> >>> from cPickle import dumps, loads
> >>> class Dict(dict):
>
> ... def __getattr__(self, key):
> ... try:
> ... return self[key]
> ... except KeyError:
> ... raise AttributeError
> ...>>> friend = Dict(name="Zaphod", age=42)
> >>> v = dumps(friend)
> >>> p = loads(v)
> >>> p
>
> {'age': 42, 'name': 'Zaphod'}
>
> Peter

Thanks! That did the trick. I also noticed that I could define
__getstate__ and __setstate__ (needed for protocol 2) but your
solution works much better.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: reverse engineering Excel spreadsheet

Duncan Smith wrote:

> Hello,
>  I am currently implementing (mainly in Python) 'models' that come
> to me as Excel spreadsheets, with little additional information.  I am
> expected to use these models in a web application.  Some contain many
> worksheets and various macros.
> 
> What I'd like to do is extract the data and business logic so that I can
> figure out exactly what these models actually do and code it up.  An
> obvious (I think) idea is to generate an acyclic graph of the cell
> dependencies so that I can identify which cells contain only data (no
> parents) and those that depend on other cells.  If I could also extract
> the relationships (functions), then I could feasibly produce something
> in pure Python that would mirror the functionality of the original
> spreadsheet (using e.g. Matplotlib for plots and more reliable RNGs /
> statistical functions).
> 
> The final application will be running on a Linux server, but I can use a
> Windows box (i.e. win32all) for processing the spreadsheets (hopefully
> not manually).  Any advice on the feasibility of this, and how I might
> achieve it would be appreciated.
> 
> I assume there are plenty of people who have a better knowledge of e.g.
> COM than I do.  I suppose an alternative would be to convert to Open
> Office and use PyUNO, but I have no experience with PyUNO and am not
> sure how much more reliable the statistical functions of Open Office
> are.  At the end of the day, the business logic will not generally be
> complex, it's extracting it from the spreadsheet that's awkward.  Any
> advice appreciated.  TIA.  Cheers.
> 
> Duncan

As I remember, there is a documentation about Excel documents in xlrd
package. And with that, you dont need to use Excel via COM to find data in
the document.
http://www.lexicon.net/sjmachin/xlrd.htm

May also look at  pyExcelerator
http://sourceforge.net/projects/pyexcelerator/
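
For example, a minimal xlrd sketch (assuming an .xls file called 'file.xls';
xlrd gives you cell values and types, though not the formula text itself):

import xlrd

book = xlrd.open_workbook('file.xls')
print book.sheet_names()
sheet = book.sheet_by_index(0)
for rowx in range(sheet.nrows):
    print sheet.row_values(rowx)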

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python Based API

[EMAIL PROTECTED] wrote:

> Hi,
> 
> I work on a project that is built entirely using python and Tkinter.
> We are at the point where we would like to give access to our
> functionality to others via some sort of API.  People who would use
> our API develop in all kinds of languages from C/C++ to Pascal.
> 
> Ideas that come to mind that allow us to build such an API are:
> 
> 1)Require others to imbed the python interpreter via the c API to be
> able to utilize our functionality.
> 2)Build an XML RPC interface around our code.
> 3)Rewrite our code base in C/C++, which should make it accessible to
> all modern languages.
> 
> I'm looking for more and potentially better ideas that will allow us
> to offer an API to our customers without having to throw away or redo
> a lot of the python code that we have already written.
> 
> Thanks in advance,
> Dean

IMHO the simplest solution is XML-RPC, and unless you have huge data to
transmit or tight time constraints, it may be enough.
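
For example, a rough sketch with the standard library (the function name and
port number are made up; you would register your real entry points instead):

from SimpleXMLRPCServer import SimpleXMLRPCServer

def run_model(params):
    # hypothetical wrapper around your existing Python code
    return {'status': 'ok', 'echo': params}

server = SimpleXMLRPCServer(('localhost', 8000))
server.register_function(run_model)
server.serve_forever()

Clients written in C/C++, Pascal, etc. can then call run_model() through any
XML-RPC library for their language.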


-- 
http://mail.python.org/mailman/listinfo/python-list


In beloved Iraq, blood flows between brothers in the shadow of illegitimate foreign occupation Re: +++ Russia Watched 9/11 In REAL TIME on SATELLITE +++

On 911 Yank mother fuckers of IVY LEAGUE killed their own people and
blamed on other people. The mother fucker, Thomas Eager of MIT
Materials Science Department and welding lab, was the first to defend
the lies of the government by an IDIOTIC pancake theory. Then we have
the BASTARD of Harvard, Samuel Shit Huntington, who gave that idiotic
theory of the conflict of civilizations, as if the Bibble waving Newt
Gingrich was an Islamic who cheated on his both the wives. The
pedophile bastard, Ronald Reagan and Herbert Walker Bush and the FBI
bastards who covered up for them were Islamic ? Moshe Katsav was
Islamic ? who raped 20 jewish damsels ? Please watch the videos by
HONORABLE Alex Jones on his sites: infowars.com and prisonplanet.org.
Also watch the video "Conspiracy of Silence" and watch the FBI
Motherfucking BASTARDS covering up for the pedophilia of Reagan and
Bush on video.google.com.

On Mar 18, 11:10 pm, [EMAIL PROTECTED] wrote:
> Anyone remember that marvelous photo of Putin when Bush visited Russia
> after 9/11. The most astute leader in the world, was looking at Bush
> into his eyes as if Bush was his adored boyfriend. It was a scene from
> a romance novel. It was the most hilarious photo. Indeed, Putin
> EXACTLY knew what 9/11 was and sought to turn that to his advantage.
> What could be more pleasing to this Russian than his competitor going
> and fighting with his enemy - the Afghan Mujahideen who drove USSR out
> of Afghanistan and led to its economic collapse. But Putin was
> watching US like a hawk, working on his economy, and plotting to put
> Khodorkovsky into jail. He had himself pioneered the use of a
> falseflag based on exploding apartment buildings to enter the second
> Chechen war.
>
> ===http://home.att.net/~south.tower/911RussianSatellite1.htm<
> EXCELLENT LINK
>
> Russia Watched 9/11
> In Real Time On Satellite
>
> By Jon Carlson
>
> Your countrymen have been murdered and the more you delve into it
> the more it looks as though they were murdered by our government, who
> used it as an excuse to murder other people thousands of miles away.
> If you ridicule others who have sincere doubts and who know
> factual information that directly contradicts the official report and
> who want explanations from those who hold the keys to our government,
> and have motive, means, and opportunity to pull off a 9/11, but you
> are too lazy or fearful, or ... to check into the facts yourself, what
> does that make you?
>
> Full Statement of Lt. Col. Shelton F. Lankford, US Marine Corps (ret)
> Retired U.S. Marine Corps Fighter Pilot
> February 20, 2007
>
> http://www.patriotsquestion911.com/Statement%20Lankford.html
>
> In April, 2006, Journalist Webster Tarpley interviewed Thierry
> Meyssan, President of the Voltaire Network, an organization of 20
> different news agencies in Europe, Middle East and Africa, with
> correspondents in many countries. Thierry trumped America's pseudo-
> journalists with his 2002 book, Pentagate, drawing first blood on the
> Pentagon 9/11 Hoax.
>
> TM:. He (Gen. Ivashov) was the chief of armed forces in Russia on
> 9/11. He says the Russian forces were watching North America because
> of the large military exercises being carried out by the US that day,
> so they saw in real time by satellite what was happening on that day.
>
> TM: When they heard about the attacks, Pres. Putin tried to call
> Bush to tell him that the Russians were not involved. He was not able
> to reach him. But they understood already that the collapse of the
> buildings could not have been done by the planes. They already
> realized it was controlled demolition - an internal problem and not an
> external attack
>
> WGT. How did US government, the State Dept respond to your
> (Pentagate) critique?
> TM. First they said I would not be allowed to go your country any
> more. Then Ms. Clark of State Dept said that if any journalist talks
> about my book in the US they will not be allowed to attend press
> conferences at the Pentagon. They published on their website a page
> trying to refute my book.
>
> http://www.waronfreedom.org/tarpley/rbn/RBN-42206-Meyssan.html
>
> In April, 2005, writer Chevalier Désireé, from France but formerly
> USA, revealed that Russia watched on their satellite as the A3
> Skywarrior left a carrier and impacted the Pentagon:
>
> It seems that it is common knowledge in these circles that Russian
> satellites photographed a ship-launched craft (seems to have been a
> drone type plane rather than a missle) that ended up impacting the
> Pentagon on Sept 11, 2001, and that, for various reasons this
> information has been withheld from the public.
> I was naturally startled to hear this even though I have long held
> the opinion that it was NOT a commercial jetliner that hit the
> Pentagon. I think the thing that startled me was the fact that, if
> Russia (and perhaps other countries with satellites?) had proof that
> Flight 77 did

Re: Character set woes with binary data

On Sun, 2007-04-01 at 06:09 -0300, Gabriel Genellina wrote: 

> > When putting the MIME segments (listed line-by-line in a Python list)
> > together to transmit them.  The files are typically JPG or some other
> > binary format, and as best as I understand the protocol, the binary data
> > needs to be transmitted directly (this is evidenced by looking at the
> > tcp-stream of an existing client for uploading files).
> 
> But I think your problem has nothing to do with MIME: you are mixing  
> unicode and string objects; from your traceback, either the "L" list or  
> "eol" contain unicode objects that can't be represented as ASCII strings.
> 


I never said it did.  It just happens to be the context with which I am
working.  I said I wanted to concatenate materials without regard for
the character set.  I am mixing binary data with ASCII and Unicode, for
sure, but I should be able to do this.

The current source can be found at
http://fd0man.theunixplace.com/scrapbook.py which is the version that I
am having the problem with.


> > It seems that Python thinks it knows better than I do, though.  I want
> > to send this binary data straightaway to the server.  :-)
> 
> You don't appear to be using the standard email package (which includes  
> MIME support) so don't blame Python...
> 


I am not saying anything about Python's standard library.  Furthermore,
I am not using MIME e-mail.  The MIME component that I am using, which
should be ideal for me, builds the message just fine—when not using a
binary component.  I am looking for how to tell Python to combine these
objects as nothing more than a stream of bytes, without regard for what
is inside the bytes.  That is what I was asking.  You did not tell me
how to do that, or if that is even possible, so why flame me?  I am not
saying "Python is bad, evil!" and blaming it for my ignorance, but what
I am asking for is how to accomplish what I am attempting to accomplish.
The MIME component is a (slightly modified) version of the recipe
provided from the ASPN Python Cookbook.

In short:  How do I create a string that contains raw binary content
without Python caring?  Is that possible?

Thanks,
Mike


> -- 
> Gabriel Genellina
> 


--
Michael B. Trausch
[EMAIL PROTECTED]
Phone: (404) 592-5746
  Jabber IM:
[EMAIL PROTECTED]
  [EMAIL PROTECTED]
Demand Freedom!  Use open and free protocols, standards, and software!


signature.asc
Description: This is a digitally signed message part
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Character set woes with binary data

Michael B. Trausch wrote:
>
> I never said it did.  It just happens to be the context with which I am
> working.  I said I wanted to concatenate materials without regard for
> the character set.  I am mixing binary data with ASCII and Unicode, for
> sure, but I should be able to do this.

The problem is that Unicode has no default representation for mixing
with binary data and ASCII. What you should therefore ask yourself is,
"Which encoded representation of Unicode should I be using to mix my
text with those things?" Then, you should choose an encoding, call the
encode method on your Unicode objects, take the result, and mix away!

[...]

> In short:  How do I create a string that contains raw binary content
> without Python caring?  Is that possible?

All strings can contain raw binary content without Python caring.
Unicode objects, however, work on a higher level of abstraction:
characters, not bytes. Thus, you need to make sure that your Unicode
objects have been converted to bytes (ie. encoded to strings) in order
for the content to be workable at the same level as that binary
content.
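
For example, something along these lines (only a sketch; I'm assuming UTF-8 is
acceptable to the server, and reusing the "L" and "eol" names from your
traceback):

encoded = []
for part in L:
    if isinstance(part, unicode):
        part = part.encode('utf-8')   # pick whatever encoding the protocol expects
    encoded.append(part)              # plain byte strings pass through untouched
body = eol.join(encoded)              # make sure eol itself is a plain str as well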

Paul

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: re.findall() hangs in python

On Apr 1, 5:23 am, [EMAIL PROTECTED] wrote:
> On Apr 1, 6:12 am, "[EMAIL PROTECTED]"
>
> <[EMAIL PROTECTED]> wrote:
> > But when 'data' does not contain pattern, it just hangs at
> > 're.findall'
>
> > pattern = re.compile("(.*) > re.S)
>
> That pattern is just really slow to evaluate. What you want is
> probably something more like this:
>
> re.compile(r'<img[^>]*src\s*=\s*"([^"]*img[^"]*)"')
>
> "dot" is usually not so great. Prefer "NOT end-character", like [^>]
> or [^"].

Thank you. Your suggestion solves my problem!

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Shed Skin Python-to-C++ Compiler 0.0.21, Help needed


> I don't see how that can be--we're talking about a GCC-based compiler,
> right?

no, Shed Skin is a completely separate entity, that outputs C++ code.
it's true I only use GCC to test the output, and I use some GCC-
specific extensions (__gnu_cxx::hash_map/hash_set), but people have
managed to compile things with Visual Studio or whatever it is
called.

btw, the windows version of Shed Skin comes with GCC so it's easy to
compile things further (two commands, 'ss program' and 'make run'
suffice to compile and run some program 'program.py')


mark dufour (Shed Skin author).

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Character set woes with binary data


"Michael B. Trausch" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
| The protocol calls for binary data to be transmitted, and I cannot seem
| to be able to do it, because I get this error:

| UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 0:
| ordinal not in range(128)

| When putting the MIME segments (listed line-by-line in a Python list)
| together to transmit them.

Python byte strings currently serve double duty: text and binary blobs.
Best not to mix the two uses.  Since most string usage is for text, most 
string methods are text methods and are not appropriate for binary data. 
As you discovered.

In the present case, do you really need to join the mix of text and binary 
data *before* sending it?  Just send the pre-text, the binary data, and 
then the post-text and they will be joined in the transmission stream.  The 
receiving site should not know the difference.

| It seems that Python thinks it knows better than I do, though.

Python is doing what you told it to do.  See below.

|   I want to send this binary data straightaway to the server.  :-)

Then do just that, as I suggested above.  You are *not* sending it 
'straightaway'.  If you did, you would have no problem.  Instead, you are 
doing a preliminary mixing, which I suspect is not needed.

| Is there any way to tell Python to ignore the situation and treat the
| entire thing as simply a stream of bytes?

Don't tell Python to treat the byte streams as interpreted text by using a 
text method.  If you really must join before sending, write your own binary 
join function using either '+' or slices into a preallocated array (from 
the array module).
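
For instance, a bare-bones sketch of the '+' variant (assuming every piece has
already been encoded down to a plain byte string):

def binary_join(pieces, sep=''):
    out = ''
    for i, piece in enumerate(pieces):
        if isinstance(piece, unicode):
            raise TypeError('encode text pieces to bytes before joining')
        if i:
            out = out + sep
        out = out + piece        # str + str never triggers an implicit decode
    return out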

Terry Jan Reedy



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ISO programming projects


"kj" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
|
|
|
| I'm looking for a collection of useful programming projects, at
| the "hobbyist" level.
|
| My online search did turn up a few collections (isolated projects
| are less useful to me at the moment), but these projects are either
| more difficult than what I'm looking for (e.g. code a C compiler)
| or not terribly useful to the average person (e.g. a function that
| efficiently computes the n-th Fibonacci number).
|
| Any pointers would be much appreciated.

There have been similar questions (with responses) on this newsgroup.  Have 
you searched those?



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: I18n issue with optik

* Steven Bethard (Sun, 01 Apr 2007 10:21:40 -0600)
> Thorsten Kampe wrote:
> > * Steven Bethard (Sat, 31 Mar 2007 20:08:45 -0600)
> >> Thorsten Kampe wrote:
> >>> I've written a script which uses Optik/Optparse to display the 
> >>> options (which works fine). The text for the help message is localised 
> >>> (with german umlauts) and when I execute the script with the localised 
> >>> environment variable set, I get this traceback[1]. The interesting 
> >>> thing is that the localised optparse messages from displays fine - 
> >>> it's only my localisation that errors.
> >>>
> >>> From my understanding, my script doesn't put out anything, it's 
> >>> optik/optparse who does that. My po file is directly copied from the 
> >>> optik po file (who displays fine) and modified so the po file should 
> >>> be fine, too.
> >>>
> >>> What can I do to troubleshoot whether the culprit is my script, optik 
> >>> or gettext?
> >>>
> >>> Would it make sense to post the script and the mo or po files?
> >> Yes, probably.  Though if you can reduce it to the simplest test case 
> >> that produces the error, it'll increase your chances of having someone 
> >> look at it.
> > 
> > The most simple test.py is:
> > 
> > ###
> > #! /usr/bin/env python
> > 
> > import gettext, \
> >os,  \
> >sys
> > 
> > gettext.textdomain('optparse')
> > gettext.install('test')
> > 
> > from optparse import OptionParser, \
> >  OptionGroup
> > 
> > cmdlineparser = OptionParser(description = _('THIS SOFTWARE COMES 
> > WITHOUT WARRANTY, LIABILITY OR SUPPORT!'))
> > 
> > options, args = cmdlineparser.parse_args()
> > ###
> > 
> > When I run LANGUAGE=de ./test.py --help I get the error.
> > 
> > ### This is the test.de.po file
> > # Copyright (C) 2006 Thorsten Kampe
> > # Thorsten Kampe <[EMAIL PROTECTED]>, 2006
> > 
> > msgid  ""
> > msgstr ""
> > 
> > "Project-Id-Version: Template 1.0\n"
> > "POT-Creation-Date: Tue Sep  7 22:20:34 2004\n"
> > "PO-Revision-Date: 2005-07-03 16:47+0200\n"
> > "Last-Translator: Thorsten Kampe <[EMAIL PROTECTED]>\n"
> > "Language-Team: Thorsten Kampe <[EMAIL PROTECTED]>\n"
> > "MIME-Version: 1.0\n"
> > "Content-Type: text/plain; charset=ISO-8859-15\n"
> > "Content-Transfer-Encoding: 8-bit\n"
> > "Generated-By: pygettext.py 1.5\n"
> > 
> > msgid  "THIS SOFTWARE COMES WITHOUT WARRANTY, LIABILITY OR SUPPORT!"
> > msgstr "DIESES PROGRAMM HAT WEDER GEWÄHRLEISTUNG, HAFTUNG NOCH 
> > UNTERSTÜTZUNG!"
> > ###
> > 
> > The localisation now produces an error in the localised optik files, 
> > too.
> > 
> > Under Windows I get " File "G:\program files\python\lib\encodings
> > \cp1252.py", line 12, in encode
> >return codecs.charmap_encode(input,errors,encoding_table)"
> 
> I'm not very experienced with internationalization, but if you change::
> 
>  gettext.install('test')
> 
> to::
> 
>  gettext.install('test', unicode=True)
> 
> what happens?

No traceback anymore from optparse but the non-ascii umlauts are 
displayed as question marks ("?").

Thorsten
-- 
http://mail.python.org/mailman/listinfo/python-list


capturing system exit status


Hi folks,

in a program I'm writing I have several commands I pass to the unix OS
underneath the code.

I want to perform error checking to make sure that the OS commands exit
gracefully, but I'm not seeing a simple python module to do this. The
closest I can see is system(), as detailed here:
http://www.python.org/doc/2.1.3/lib/os-process.html, but I can't formulate a
way to use it.

What I want is a simple if statement such that:

if ExitStatusIsBad:
 sys.exit()
else:
 go on to next code chunk
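
Something like this is what I'm groping for, using os.system()'s return value
(untested, and the command is just a made-up example):

import os, sys

status = os.system('ls /some/directory')    # hypothetical command
if status != 0:
    sys.exit('command failed with status %d' % status)
# else: go on to the next code chunk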



James
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Character set woes with binary data

Michael B. Trausch wrote:
> In short:  How do I create a string that contains raw binary content 
> without Python caring?  Is that possible?

Given where we're now at with strings in Python, Python should
really have a "byte" type and a way to deal with arrays of bytes,
independent of the string operators.

Efficient handling of lists of bytes would do it.

John Nagle
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: I18n issue with optik

* Steven Bethard (Sun, 01 Apr 2007 10:26:54 -0600)
> Thorsten Kampe wrote:
> > I guess the culprit is this snippet from optparse.py:
> > 
> > # used by test suite
> > def _get_encoding(self, file):
> >     encoding = getattr(file, "encoding", None)
> >     if not encoding:
> >         encoding = sys.getdefaultencoding()
> >     return encoding
> > 
> > def print_help(self, file=None):
> >     """print_help(file : file = stdout)
> > 
> >     Print an extended help message, listing all options and any
> >     help text provided with them, to 'file' (default stdout).
> >     """
> >     if file is None:
> >         file = sys.stdout
> >     encoding = self._get_encoding(file)
> >     file.write(self.format_help().encode(encoding, "replace"))
> > 
> > So this means: when the encoding of sys.stdout is US-ASCII, Optparse 
> > sets the encoding to of the help text to ASCII, too. But that's 
> > nonsense because the Encoding is declared in the Po (localisation) 
> > file.
> > 
> > How can I set the encoding of sys.stdout to another encoding? Of 
> > course this would be a terrible hack if the encoding of the 
> > localisation changes or different translators use different 
> > encodings...
> 
> If print_help() is what's wrong, you should probably hack print_help() 
> instead of sys.stdout.  You could try something like::
> 
>      def print_help(self, file=None):
>          """print_help(file : file = stdout)
> 
>          Print an extended help message, listing all options and any
>          help text provided with them, to 'file' (default stdout).
>          """
>          if file is None:
>              file = sys.stdout
>          file.write(self.format_help())
> 
>      optparse.OptionParser.print_help = print_help
> 
>      cmdlineparser = optparse.OptionParser(description=...)
>      ...
> 
> That is, you could monkey-patch print_help() before you create an 
> OptionParser.

Yes, I could do that but I'd rather know first if my code is wrong or 
the optparse code.

Thorsten
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: I18n issue with optik

* Thorsten Kampe (Sun, 1 Apr 2007 19:45:59 +0100)
> Yes, I could do that but I'd rather know first if my code is wrong or 
> the optparse code.

It might be the bug mentioned in 
http://mail.python.org/pipermail/python-dev/2006-May/065458.html

The patch although doesn't work. From my unicode-charset-codepage-
codeset-challenged point of view the encoding of sys.stdout doesn't 
matter. The charset is defined in the .po/.mo file (but of course 
optparse can't know if the message has been translated by gettext 
("_")).

Thorsten
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Pygame Q (linux) beginner

enquiring mind wrote:

> Running 2.4.1 Python (learning)
> Running SUSE Linux 10
> 
> At Chapter 5 is where the Pygame module is 
> introduced so I have a little time before I have to figure out what I
> have to download and install.

Are you asking for advice on how to install pygame on SuSE 10?
Well, that's easy:

A python-pygame rpm comes with SuSE.
Just install it with YaST2; the additional packages it
needs (like libSDL) are installed automatically then.
So you don't have to download any packages from www.pygame.org.

Another hint: If sound in pygame doesn't work, try

export SDL_AUDIODRIVER=alsa

right before starting your script.

H.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Character set woes with binary data

On Apr 1, 11:44 am, John Nagle <[EMAIL PROTECTED]> wrote:
> Michael B. Trausch wrote:
> > In short:  How do I create a string that contains raw binary content
> > without Python caring?  Is that possible?
>
> Given where we're now at with strings in Python, Python should
> really have a "byte" type and a way to deal with arrays of bytes,
> independent of the string operators.
>
> Efficient handling of lists of bytes would do it.
>
> John Nagle

Python has a module base64 that allows you to handle binary data as a
string.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Shed Skin Python-to-C++ Compiler 0.0.21, Help needed

[EMAIL PROTECTED] writes:
> > I don't see how that can be--we're talking about a GCC-based compiler,
> > right?
> 
> no, Shed Skin is a completely separate entity, 

I was referring to GNAT.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: I18n issue with optik

* Thorsten Kampe (Sun, 1 Apr 2007 20:08:39 +0100)
> * Thorsten Kampe (Sun, 1 Apr 2007 19:45:59 +0100)
> > Yes, I could do that but I'd rather know first if my code is wrong or 
> > the optparse code.
> 
> It might be the bug mentioned in 
> http://mail.python.org/pipermail/python-dev/2006-May/065458.html
> 
> The patch although doesn't work. From my unicode-charset-codepage-
> codeset-challenged point of view the encoding of sys.stdout doesn't 
> matter. The charset is defined in the .po/.mo file (but of course 
> optparse can't know if the message has been translated by gettext 
> ("_").

If I "patch" line 1648 (the one mentioned in the traceback) of 
optparse.py from

file.write(self.format_help().encode(encoding, "replace"))
to
file.write(self.format_help())

...then everything works and is displayed fine (even without the 
"unicode = True" parameter to gettext.install).

But the "patch" might make other things fail, of course...

Thorsten
-- 
http://mail.python.org/mailman/listinfo/python-list


Which will come first: Perl 6 or Python 3000?

http://home.inklingmarkets.com/market/show/4018

(Interesting site by the way - although a bit heavily weighted towards
US politics for my tastes).

Anyway, I know which way my money is going :-)

Fuzzyman
http://www.voidspace.org.uk/python/articles.shtml

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Shed Skin Python-to-C++ Compiler 0.0.21, Help needed

On Apr 1, 6:07 pm, John Nagle <[EMAIL PROTECTED]> wrote:
> Kay Schluehr wrote:
> > Indeed. The only serious problem from an acceptance point of view is
> > that Mark tried to solve the more difficult problem first and hung on
> > it. Instead of integrating a translator/compiler early with CPython,
> > doing some factorization of Python module code into compilable and
> > interpretable functions ( which can be quite rudimentary at first )
> > together with some automatically generated glue code and *always have
> > a running system* with monotone benefit for all Python code he seemed
> > to stem an impossible task, namely translating the whole Python to C++
> > and created therefore a "lesser Python".
>
> Trying to incrementally convert an old interpreter into a compiler
> is probably not going to work.

I'm talking about something that is not very different from what Psyco
does but Psyco works at runtime and makes continous measurements for
deciding whether it can compile some bytecodes just-in-time or let the
interpreter perform their execution. You can also try a different
approach and decide statically whether you can compile some function
or interpret it. Then you factorize each module m into m = m_native *
m_interp. This factorization shall depend only on the capabilities of
the translator / native compiler and the metadata available for your
functions. Since you care for the correct interfaces and glue code
early and maintain it continually you never run into severe
integration problems.

--

A factorization always follows a certain pattern that preserves the
general form and creates a specialization:

def func(x,y):
    # algorithm

>

from native import func_int_int

def func(x,y):
    if isinstance(x, int) and isinstance(y, int):
        return func_int_int(x,y)  # wrapper of natively compiled
                                  # specialized function
    else:
        # perform original unmodified algorithm on bytecode interpreter

Or in decorator notation:

from native import func_int_int

@apply_special( ((int, int), func_int_int) )
def func(x,y):
    # algorithm

where apply_special transforms the first version of func into the
second version.
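
A rough sketch of what such a decorator could look like (pure-Python dispatch
only, of course; 'native' and func_int_int above are placeholders for whatever
the compiler emits):

def apply_special(*specs):
    # specs: pairs of (tuple of argument types, natively compiled specialization)
    def decorate(func):
        def wrapper(*args):
            for types, special in specs:
                if len(args) == len(types) and \
                   all(isinstance(a, t) for a, t in zip(args, types)):
                    return special(*args)
            return func(*args)          # fall back to the bytecode version
        return wrapper
    return decorate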

Now we have the correct form and the real and hard work can begin i.e.
the part Mark was interested and engaged in.

>
> > Otherwise it
> > wouldn't be a big deal to do what is necessary here and even extend
> > the system with perspective on Py3K annotations or other means to ship
> > typed Python code into the compiler.
>
>  Shed Skin may be demonstrating that "annotations" are unnecessary
> cruft and need not be added to Python.  Automatic type inference
> may be sufficient to get good performance.

You still dream of this, don't you? Type inference in dynamic languages
doesn't scale. It didn't scale in twenty years of research on
SmallTalk and it doesn't in Python. However, there is no no-go theorem
that prevents ambitious newcomers to type theory from wasting their time
and effort.

> The Py3K annotation model is to some extent a repeat of the old
> Visual Basic model.  Visual Basic started as an interpreter with one
> default type, which is now called Variant, and later added the usual types,
> Integer, String, Boolean, etc., which were then manually declared.
> That's where Py3K is going.

Read the related PEP, John. You will see that Guido's genius is that of
a good project manager in that case who knows that the community works
for him. The trade is that he supplies the syntax/interface and the
hackers around him fantasize about semantics and start
implementations. Not only annotations are optional but also their
meaning. This has nothing to do with VB and it has not even much to do
with what existed before in language design.

Giving an example of annotation semantics:

def func(x:int, y:int):
# algorithm

can be translated according to the same pattern as above. The meaning
of the annotation according to the envisioned annotation handler is as
follows: try to specialize func on the types of the arguments and
perform local type inference. When successful, compile func with
these arguments and map the apply_special decorator. When translation
is unfeasible, send a warning. If type violation is detected under
this specialization send a warning or an exception in strict-checking-
mode.

I fail to see how this violates duck-typing and brings VisualBasic to
the Python community. But maybe I just underrate VB :)

Kay

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Character set woes with binary data

On 2007-04-01, Ene <[EMAIL PROTECTED]> wrote:
> On Apr 1, 11:44 am, John Nagle <[EMAIL PROTECTED]> wrote:
>> Michael B. Trausch wrote:
>> > In short:  How do I create a string that contains raw binary content
>> > without Python caring?  Is that possible?
>>
>> Given where we're now at with strings in Python, Python should
>> really have a "byte" type and a way to deal with arrays of
>> bytes, independent of the string operators.

I agree.  It would be very nice to have a built-in data-type
that is an efficient implementation of a list/array of bytes.
When manipulating binary data, it really makes it unreadable 
when it's full of chr() and ord() calls that are only there
because when you index into a string you get a string of length
1 and not a byte.
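
The array module already gets part of the way there, e.g. (a quick sketch):

from array import array

buf = array('B', '\xff\xd8\xff\xe1')   # mutable sequence of ints 0-255
buf[0] ^= 0x0f                         # no chr()/ord() dance needed
data = buf.tostring()                  # back to a byte string when required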

>> Efficient handling of lists of bytes would do it.
>>
>> John Nagle
>
> Python has a module base64 that allows you to handle binary
> data as a string.

Huh?  How does the base64 module address the problem?  It still
relies on the "string" type as an immutable list of bytes.

-- 
Grant Edwards   grante Yow!  I like your SNOOPY
  at   POSTER!!
   visi.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: I18n issue with optik

* Thorsten Kampe (Sun, 1 Apr 2007 20:22:51 +0100)
> * Thorsten Kampe (Sun, 1 Apr 2007 20:08:39 +0100)
> > * Thorsten Kampe (Sun, 1 Apr 2007 19:45:59 +0100)
> > > Yes, I could do that but I'd rather know first if my code is wrong or 
> > > the optparse code.
> > 
> > It might be the bug mentioned in 
> > http://mail.python.org/pipermail/python-dev/2006-May/065458.html
> > 
> > The patch although doesn't work. From my unicode-charset-codepage-
> > codeset-challenged point of view the encoding of sys.stdout doesn't 
> > matter. The charset is defined in the .po/.mo file (but of course 
> > optparse can't know if the message has been translated by gettext 
> > ("_").
> 
> If I "patch" line 1648 (the one mentioned in the traceback) of 
> optparse.py from
> 
> file.write(self.format_help().encode(encoding, "replace"))
> to
> file.write(self.format_help())
> 
> ...then everything works and is displayed fine [...]

...but only in Cygwin rxvt, the standard Windows console doesn't show 
the right colors.

I give up and revert back to ASCII. This whole charset mess is not 
meant to solved by mere mortals.

Thorsten

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Extract information from HTML table

On 1 Apr 2007 07:56:04 -0700, Ulysse <[EMAIL PROTECTED]> wrote:
> I have seen the Beautiful Soup online help and tried to apply that to
> my problem. But it seems to be a little bit hard. I will rather try to
> do this with regular expressions...
>

If you think that Beautiful Soup is difficult than wait till you try
to do this with regexes. Granted you know the exact format of the HTML
you are scraping will help, if you ever need to parse HTML from an
unknown source than Beautiful Soup is the only way to go. Not all HTML
authors close their td and tr tags, and sometimes there are attributes
to those tags. If you plan on ever reusing the code or the format of
the HTML may change, then you are best off sticking with Beautiful
Soup.
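
For what it's worth, a minimal sketch with BeautifulSoup 3 (assuming the page
source is already in a string called html):

from BeautifulSoup import BeautifulSoup

soup = BeautifulSoup(html)
for row in soup.findAll('tr'):
    cells = [''.join(td.findAll(text=True)).strip() for td in row.findAll('td')]
    print cells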

Dotan Cohen

http://lyricslist.com/
http://what-is-what.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: I18n issue with optik

Thorsten Kampe wrote:

>>> Under Windows I get " File "G:\program files\python\lib\encodings
>>> \cp1252.py", line 12, in encode
>>>return codecs.charmap_encode(input,errors,encoding_table)"
>> I'm not very experienced with internationalization, but if you change::
>>
>>  gettext.install('test')
>>
>> to::
>>
>>  gettext.install('test', unicode=True)
>>
>> what happens?
> 
> No traceback anymore from optparse but the non-ascii umlauts are 
> displayed as question marks ("?").

And this is expected behaviour of encode() with errors set to 'replace'.
I think this is the solution to your problem. I was a bit surprised I
never saw this error, but I always use the unicode=True setting to
gettext.install()...

-- 
Jarek Zgoda
http://jpa.berlios.de/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] Python 3000 PEP: Postfix type declarations

+18446744073709551616 from me too.

This also fits nicely in with my plan to abandon the python-dev and
python-3000 mailing lists. Mailing lists are so 20th century! I
propose that from now on, all Python development should be carried out
on blogs, so that readers can use customized RSS feeds to read only
those contributions they are interested in. I note that all the key
developers already have a blog, e.g.:

Aahz - http://www.artima.com/weblogs/index.jsp?blogger=aahz
Neal Norwitz - http://nnorwitz.blogspot.com/
Fredrik Lundh - http://effbot.org/pyref/blog.htm
Jeremy Hylton - http://www.python.org/~jeremy/weblog/
Anthony Baxter - http://codingweasel.blogspot.com/
Phillip Eby - http://dirtsimple.org/programming/index.html
Talin - http://www.advogato.org/person/Talin/diary.html
David Ascher - http://ascher.ca/blog/
Fred Drake - http://www.advogato.org/person/fdrake/diary.html

(and myself, of course - http://www.artima.com/weblogs/index.jsp?blogger=guido)

--Guido

On 4/1/07, Collin Winter <[EMAIL PROTECTED]> wrote:
> On 4/1/07, Georg Brandl <[EMAIL PROTECTED]> wrote:
> [snip several pages of excellent ideas]
> >
> > The mapping between types and declarators is not static. It can be 
> > completely
> > customized by the programmer, but for convenience there are some predefined
> > mappings for some built-in types:
> >
> > =============  ==================================================
> > Type           Declarator
> > =============  ==================================================
> > ``object``     � (REPLACEMENT CHARACTER)
> > ``int``        ℕ (DOUBLE-STRUCK CAPITAL N)
> > ``float``      ℮ (ESTIMATED SYMBOL)
> > ``bool``       ✓ (CHECK MARK)
> > ``complex``    ℂ (DOUBLE-STRUCK CAPITAL C)
> > ``str``        ✎ (LOWER RIGHT PENCIL)
> > ``unicode``    ✒ (BLACK NIB)
> > ``tuple``      ⒯ (PARENTHESIZED LATIN SMALL LETTER T)
> > ``list``       ♨ (HOT SPRINGS)
> > ``dict``       ⧟ (DOUBLE-ENDED MULTIMAP)
> > ``set``        ∅ (EMPTY SET) (*Note:* this is also for full sets)
> > ``frozenset``  ☃ (SNOWMAN)
> > ``datetime``   ⌚ (WATCH)
> > ``function``   ƛ (LATIN SMALL LETTER LAMBDA WITH STROKE)
> > ``generator``  ⚛ (ATOM SYMBOL)
> > ``Exception``  ⌁ (ELECTRIC ARROW)
> > =============  ==================================================
> >
> > The declarator for the ``None`` type is a zero-width space.
> >
> > These characters should be obvious and easy to remember and type for every
> > programmer.
> >
> [snip]
> >
> > Example
> > ===
> >
> > This is the standard ``os.path.normpath`` function, converted to type 
> > declaration
> > syntax::
> >
> >  def normpathƛ(path✎)✎:
> >  """Normalize path, eliminating double slashes, etc."""
> >  if path✎ == '':
> >  return '.'
> >  initial_slashes✓ = path✎.startswithƛ('/')✓
> >  # POSIX allows one or two initial slashes, but treats three or more
> >  # as single slash.
> >  if (initial_slashes✓ and
> >  path✎.startswithƛ('//')✓ and not path✎.startswithƛ('///')✓)✓:
> >  initial_slashesℕ = 2
> >  comps♨ = path✎.splitƛ('/')♨
> >  new_comps♨ = []♨
> >  for comp✎ in comps♨:
> >  if comp✎ in ('', '.')⒯:
> >  continue
> >  if (comp✎ != '..' or (not initial_slashesℕ and not 
> > new_comps♨)✓ or
> >   (new_comps♨ and new_comps♨[-1]✎ == '..')✓)✓:
> >  new_comps♨.appendƛ(comp✎)
> >  elif new_comps♨:
> >  new_comps♨.popƛ()✎
> >  comps♨ = new_comps♨
> >  path✎ = '/'.join(comps♨)✎
> >  if initial_slashesℕ:
> >  path✎ = '/'*initial_slashesℕ + path✎
> >  return path✎ or '.'
> >
> > As you can clearly see, the type declarations add expressiveness, while at 
> > the
> > same time they make the code look much more professional.
>
> My only concern is that this doesn't go far enough. While knowing that
> some object is a ⒯ is a good start, it would be so much more helpful
> to know that it's a ⒯ of ✎s. I think something like ✎✎✎3⒯ to indicate
> a 3-⒯ of ✎s would be nice. This would change the line in the above
> from "if comp✎ in ('', '.')⒯:" to "if comp✎ in ('', '.')✎✎2⒯:", which
> I think is a nice win in terms of readability, EIBTI and all that.
>
> (Sidebar: I think the PEP should feature a section on how these new
> type declarations will cut down on mailing list volume and
> documentation size.)
>
> In light of this PEP, PEP 3107's function annotations should be
> rejected. All that hippie feel-good crap about "user-defined
> annotations" and "open-ended semantics" and "no rules, man" was just
> going to get us into trouble. This PEP's more modern conception of
> type anno

Re: socket read timeout

Steve Holden wrote:
> Hendrik van Rooyen wrote:
>> Are sockets full duplex?
>>
> Yes. But you have to use non-blocking calls in your application to use 
> them as full-duplex in your code.

Hmmm... I'm missing something. Suppose I have one thread (or
process) reading from a blocking-mode socket while another is
writing to it? What stops it from being full duplex?


-- 
--Bryan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Shed Skin Python-to-C++ Compiler 0.0.21, Help needed

Kay Schluehr wrote:
> On Apr 1, 6:07 pm, John Nagle <[EMAIL PROTECTED]> wrote:
> 
>>Kay Schluehr wrote:
>>
>>>Indeed. The only serious problem from an acceptance point of view is
>>>that Mark tried to solve the more difficult problem first and hung on
>>>it. Instead of integrating a translator/compiler early with CPython,
>>>doing some factorization of Python module code into compilable and
>>>interpretable functions ( which can be quite rudimentary at first )
>>>together with some automatically generated glue code and *always have
>>>a running system* with monotone benefit for all Python code he seemed
>>>to stem an impossible task, namely translating the whole Python to C++
>>>and created therefore a "lesser Python".
>>
>>Trying to incrementally convert an old interpreter into a compiler
>>is probably not going to work.
> 
> 
> I'm talking about something that is not very different from what Psyco
> does but Psyco works at runtime and makes continous measurements for
> deciding whether it can compile some bytecodes just-in-time or let the
> interpreter perform their execution.

That can work.  That's how the Tamarin JIT compiler does Javascript
inside Mozilla.  The second time something is executed interpretively,
it's compiled.  That's a tiny JIT engine, too; it's inside the Flash
player.  Runs both JavaScript and ActionScript generated programs.
Might be able to run Python, with some work.

> A factorization always follows a certain pattern that preserves the
> general form and creates a specialization:
> 
> def func(x,y):
>     # algorithm
> 
> >
> 
> from native import func_int_int
> 
> def func(x,y):
>     if isinstance(x, int) and isinstance(y, int):
>         return func_int_int(x,y)  # wrapper of natively compiled
>                                   # specialized function
>     else:
>         # perform original unmodified algorithm on bytecode interpreter

 You can probably offload that decision onto the linker by creating
specializations with different type signatures and letting the C++
name resolution process throw out the ones that aren't needed at
link time.

>>>Otherwise it
>>>wouldn't be a big deal to do what is necessary here and even extend
>>>the system with perspective on Py3K annotations or other means to ship
>>>typed Python code into the compiler.
>>
>> Shed Skin may be demonstrating that "annotations" are unnecessary
>>cruft and need not be added to Python.  Automatic type inference
>>may be sufficient to get good performance.
> 
> 
> You still dream of this, isn't it? Type inference in dynamic languages
> doesn't scale. It didn't scale in twenty years of research on
> SmallTalk and it doesn't in Python.

I'll have to ask some of the Smalltalk people from the PARC era
about that one.

 > However there is no no-go theorem
> that prevents ambitious newbies to type theory wasting their time and
> efforts.

Type inference analysis of Python indicates that types really don't
change all that much.  See

http://www.python.org/workshops/2000-01/proceedings/papers/aycock/aycock.html

Only a small percentage of Python variables ever experience a type change.
So type inference can work well on real Python code.

The PyPy developers don't see type annotations as a win.  See Carl Friedrich Bolz's
comments in

http://www.velocityreviews.com/forums/t494368-p3-pypy-10-jit-compilers-for-free-and-more.html

where he writes:

"Also, I fail to see how type annotations can have a huge speed-advantage
versus what our JIT and Psyco are doing."

>>The Py3K annotation model is to some extent a repeat of the old
>>Visual Basic model.  Visual Basic started as an interpreter with one
>>default type, which is now called Variant, and later added the usual types,
>>Integer, String, Boolean, etc., which were then manually declared.
>>That's where Py3K is going.
>
> This has nothing to do with VB and it has not even much to do
> with what existed before in language design.

Type annotations, advisory or otherwise, aren't novel.  They
were tried in some LISP variants.  Take a look at this
experimental work on Self, too.

  http://www.cs.ucla.edu/~palsberg/paper/spe95.pdf

Visual Basic started out more or less declaration-free, and
gradually backed into having declarations.  VB kept a "Variant"
type, which can hold anything and was the implicit type.
Stripped of the Python jargon, that's what's proposed for Py3K.
Just because it has a new name doesn't mean it's new.

It's common for languages to start out untyped and "simple",
then slowly become more typed as the limits of the untyped
model are reached.

Another thing that can go wrong with a language: if you get too hung
up on providing ultimate flexibility in the type and object system,
too much of the language design and machinery is devoted to features
that are very seldom used.  C++ took that wrong turn a few years ago,
when the language designers became carried away with their template
mechanism, to the exclusion of fixing the real problems that drive the

Overlapping matches

In the re documentation, it says that the matching functions return "non-
overlapping" matches only, but I also need overlapping ones. Does anyone 
know how this can be done?

Regards,
Rehceb Rotkiv 
-- 
http://mail.python.org/mailman/listinfo/python-list


Port Function crc_ccitt_update from C++


Hi folks,

   Please, I don't understand exactly how this CRC CCITT UPDATE function
in AVR C++ can be ported to Python.

uint16_t
crc_ccitt_update (uint16_t crc, uint8_t data)
{
    data ^= lo8 (crc);
    data ^= data << 4;
    return ((((uint16_t)data << 8) | hi8 (crc)) ^ (uint8_t)(data >> 4)
            ^ ((uint16_t)data << 3));
}

Source:
http://tldp.tuxhilfe.de/linuxfocus/common/src2/article352/avr-libc-user-manual-1.0.4.pdf

 Part of the reason is this lo8 and hi8 stuff; is there some good soul here
who can help me with this conversion?

I tried searching Google and Koders for some code that uses it, but I didn't
have success...
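
My own attempt at a literal translation looks like this (no idea if it is
right; I'm assuming lo8/hi8 simply take the low and high byte of the 16-bit
crc value):

def crc_ccitt_update(crc, data):
    data = (data ^ (crc & 0xFF)) & 0xFF          # data ^= lo8(crc)
    data = (data ^ (data << 4)) & 0xFF           # data ^= data << 4 (8-bit truncation)
    return (((data << 8) | ((crc >> 8) & 0xFF))  # ((uint16_t)data << 8) | hi8(crc)
            ^ (data >> 4)                        # (uint8_t)(data >> 4)
            ^ (data << 3)) & 0xFFFF              # (uint16_t)data << 3

# e.g. over a whole string (0xFFFF as start value is just a guess, check your protocol):
def crc_ccitt(s, crc=0xFFFF):
    for ch in s:
        crc = crc_ccitt_update(crc, ord(ch))
    return crc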


Big Hugs
Thanks
Regards.


-Py -Thorneiro
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Overlapping matches

On Apr 1, 9:38 pm, Rehceb Rotkiv <[EMAIL PROTECTED]> wrote:
> In the re documentation, it says that the matching functions return "non-
> overlapping" matches only, but I also need overlapping ones. Does anyone
> know how this can be done?

Something like the following:

import re

s = "foooo"             # any example subject string
p = re.compile("oo")
out = []

pos = 0
while pos < len(s):
    m = p.search(s, pos)
    if not m:
        break
    out.append(m)
    pos = m.start() + 1  # step just past the last match's start to allow overlaps

print [m.group(0) for m in out]


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ISO programming projects

> I'm looking for a collection of useful programming projects, at
> the "hobbyist" level.
>
> My online search did turn up a few collections (isolated projects
> are less useful to me at the moment), but these projects are either
> more difficult than what I'm looking for (e.g. code a C compiler)
> or not terribly useful to the average person (e.g. a function that
> efficiently computes the n-th Fibonacci number).
>
> Any pointers would be much appreciated.
>
> kj



http://wiki.python.org/moin/CodingProjectIdeas
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: I18n issue with optik

On Apr 1, 8:47 am, Thorsten Kampe <[EMAIL PROTECTED]> wrote:
> I guess the culprit is this snippet from optparse.py:
>
> # used by test suite
> def _get_encoding(self, file):
> encoding = getattr(file, "encoding", None)
> if not encoding:
> encoding = sys.getdefaultencoding()
> return encoding
>
> def print_help(self, file=None):
> """print_help(file : file = stdout)
>
> Print an extended help message, listing all options and any
> help text provided with them, to 'file' (default stdout).
> """
> if file is None:
> file = sys.stdout
> encoding = self._get_encoding(file)
> file.write(self.format_help().encode(encoding, "replace"))
>
> So this means: when the encoding of sys.stdout is US-ASCII, Optparse
> sets the encoding of the help text to ASCII, too.

The .encode() method doesn't set an encoding. It encodes unicode text
into bytes according to the specified encoding. That means optparse
needs ASCII or unicode (at least) for the help text. In other words,
you'd better use unicode throughout your program.

> But that's
> nonsense because the Encoding is declared in the Po (localisation)
> file.

For backward compatibility, gettext works with byte strings by default,
so the PO file's encoding is not even involved. You need to use the
unicode flavour of gettext.
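
A minimal sketch of the unicode route ('myapp' and 'locale' here are just
placeholder names for your domain and locale directory):

import gettext

t = gettext.translation('myapp', 'locale', fallback=True)
_ = t.ugettext   # returns unicode objects, decoded using the charset
                 # declared in the PO file, so optparse can encode them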

> How can I set the encoding of sys.stdout to another encoding?

What are you going to set it to? As I understand you're going to
distribute your program to some users. How are you going to find out
the encoding of the terminal of your users?

  -- Leo

-- 
http://mail.python.org/mailman/listinfo/python-list


Clean "Durty" strings

Hello,

I need to clean a string like this:

string = """
bonne mentalité mec!:) \nbon pour
info moi je suis un serial posteur arceleur dictateur ^^*
\nmais pour avoir des resultats probant il
faut pas faire les mariolles, comme le "fondateur" de bvs
krew \n
mais pour avoir des resultats probant il faut pas faire les mariolles,
comme le "fondateur" de bvs krew \n
"""

into :
bonne mentalité mec!:) bon pour info moi je suis un serial posteur
arceleur dictateur ^^* mais pour avoir des resultats probant il faut
pas faire les mariolles, comme le "fondateur" de bvs krew
mais pour avoir des resultats probant il faut pas faire les mariolles,
comme le "fondateur" de bvs krew

To do this I would like to use only standard libraries.
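
A minimal sketch that needs only built-in string methods, assuming the
goal is simply to collapse every run of whitespace (including the
newlines) into single spaces:

def clean(text):
    # split() with no argument splits on any run of whitespace
    # (spaces, tabs, newlines); joining with single spaces then
    # yields one normalised line
    return " ".join(text.split())

print clean(string)   # note: the name 'string' shadows the stdlib module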

Thanks

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ISO programming projects

kj <[EMAIL PROTECTED]> writes:

> I'm looking for a collection of useful programming projects, at
> the "hobbyist" level.
>
> My online search did turn up a few collections (isolated projects
> are less useful to me at the moment), but these projects are either
> more difficult than what I'm looking for (e.g. code a C compiler)
> or not terribly useful to the average person (e.g. a function that
> efficiently computes the n-th Fibonacci number).
>
> Any pointers would be much appreciated.

Sourceforge.net and Savannah both have "help wanted" pages for open
source projects:




sherm--

-- 
Web Hosting by West Virginians, for West Virginians: http://wv-www.net
Cocoa programming in Perl: http://camelbones.sourceforge.net
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] Python 3000 PEP: Postfix type declarations


OMG, I was starting to reconsider Ruby.

Maël Benjamin Mettler wrote:
> Is this supposed to be a joke?

First of April? Likely.


--
Shane Geiger
IT Director
National Council on Economic Education
[EMAIL PROTECTED]  |  402-438-8958  |  http://www.ncee.net

Leading the Campaign for Economic and Financial Literacy

-- 
http://mail.python.org/mailman/listinfo/python-list

Question about using urllib2 to load a url

Hi,

I have the following code to load a URL.
My question is: if I try to load an invalid URL ("http://www.heise.de/"),
will I get an IOError, or will it wait forever?

Thanks for any help.

# cj, url and txdata are defined elsewhere in the program
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
urllib2.install_opener(opener)

txheaders = {'User-agent': 'Mozilla/5.0 (X11; U; Linux i686; en-US; '
             'rv:1.8.1.3) Gecko/20070309 Firefox/2.0.0.3'}

try:
    req = Request(url, txdata, txheaders)
    handle = urlopen(req)
except IOError, e:
    print e
    print 'Failed to open %s' % url
    return 0
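
As a side sketch, one way to avoid waiting forever on an unresponsive
host with this Python 2.x code is a global socket timeout (the 30-second
value is arbitrary); a timed-out connect then surfaces through urllib2
as a URLError, a subclass of the IOError caught above:

import socket

socket.setdefaulttimeout(30)   # applies to sockets opened by urllib2 as well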

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Using Simple MAPI with MS Outlook 2007

[EMAIL PROTECTED] wrote:
> Hi there,
> 
> I'd like to send emails from a Python program using Simple MAPI. I've
> tried this code: 
> http://mail.python.org/pipermail/python-list/2004-December/298066.html
> and it works well with Outlook Express 6 and Thunderbird 1.5, but it
> doesn't work at all with Microsoft Outlook 2007. I keep getting this
> message: "WindowsError: MAPI error 2".
> 
> I don't want to use Extended MAPI because it doesn't support
> Thunderbird nor OE. Therefore, Simple MAPI is the only option for me.
> 
> So, what did I miss here?
> 

Error code 2 translates to:

"MAPI_E_FAILURE
One or more unspecified errors occurred. No message was sent. "

That is rather vague. I haven't used Simple MAPI with Outlook, but I did
a quick Google search. I found that there is a security feature that may
affect access to Outlook. I have included a few links on the chance that
this is the cause of the problem.


Outlook Email Security Update:

http://www.slipstick.com/outlook/esecup.htm


Outlook security block & Simple MAPI:

http://help.lockergnome.com/office/Outlook-security-block-Simple-Mapi-ftopict946357.html


Customize programmatic settings in Outlook 2007:

http://technet2.microsoft.com/Office/en-us/library/8a611f92-e197-4dd3-9417-5ed513891af11033.mspx?mfr=true


The Google search also uncovered a memory issue with Simple MAPI and
Outlook:

http://blogs.msdn.com/stephen_griffin/archive/2006/11/03/the-intentional-memory-leak.aspx


Hope this helps,

Lenard Lindstrom
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Character set woes with binary data

John Nagle wrote:
> Michael B. Trausch wrote:
>> In short:  How do I create a string that contains raw binary content 
>> without Python caring?  Is that possible?
> 
>Given where we're now at with strings in Python, Python should
> really have a "byte" type and a way to deal with arrays of bytes,
> independent of the string operators.
> 
>Efficient handling of lists of bytes would do it.
> 
> John Nagle

array.array or ctypes.create_string_buffer.
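
For example, a minimal sketch of both suggestions ('picture.jpg' is just
a placeholder file name):

import array
import ctypes

raw = open('picture.jpg', 'rb').read()     # plain byte string, no decoding

buf = array.array('B', raw)                # mutable array of unsigned bytes
cbuf = ctypes.create_string_buffer(raw)    # mutable C-style buffer, len(raw) + 1 bytes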


Lenard Lindstrom
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: shutil.copy Problem

On Mar 28, 7:01 am, David Nicolson <[EMAIL PROTECTED]> wrote:
> Hi John,
>
> That was an excellent idea and it was the cause of the problem. Whether
> this is a bug in shutil I'm not sure.
>
> Here is the traceback, Python 2.4.3 on Windows XP:
>
>
>
>
>
> > C:\Documents and Settings\Güstav>C:\python243\python Z:\sh.py
> > Copying  u'C:\\Documents and Settings\\G\xfcstav\\My Documents\\My  
> > Music\\iTunes
> > \\iTunes Music Library.xml' ...
> > Traceback (most recent call last):
> >   File "Z:\sh.py", line 12, in ?
> >shutil.copy(xmlfile,"C:iTunes Music Library.xml")

Note, there is no backslash after C:. shutil will try to make an
absolute file name by joining it with the current directory name
(C:\Documents and Settings\Güstav), which contains non-ASCII characters.
Because of backward compatibility the absolute name won't be unicode.
On the other hand, the data coming from the registry is unicode. When
shutil tries to compare those two file names, it fails. To avoid the
problem you need to make either both file names unicode or both of them
byte strings.

However, one thing is still a mystery to me. Your source code contains a
backslash but your traceback doesn't:

> >shutil.copy(xmlfile,"C:\iTunes Music Library.xml")
>



> The shutil line needed to be changed to this to be successful:
>
> >shutil.copy(xmlfile.encode("windows-1252"),"C:\iTunes Music  
> > Library.xml"

It will work only in some European locales. Using the locale module you
can make it work for 99% of users worldwide, but it will still fail in
cases like a German locale with Greek characters in file names. Only
using unicode everywhere in your program is a complete solution, like:

shutil.copy(xmlfile, u"C:\iTunes Music Library.xml")

if you use a constant; or otherwise make sure your file name is unicode:

dest = unicode()
shutil.copy(xmlfile, dest)


  -- Leo.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Overlapping matches

On Apr 1, 1:38 pm, Rehceb Rotkiv <[EMAIL PROTECTED]> wrote:
> In the re documentation, it says that the matching functions return "non-
> overlapping" matches only, but I also need overlapping ones. Does anyone
> know how this can be done?


Perhaps lookahead assertions are what you're
looking for?

import re
import string

non_overlap = re.compile(r'[0-9a-fA-F]{2}')
pairs = [match.group(0) for match in
         non_overlap.finditer(string.hexdigits)]
print pairs

overlap = re.compile(r'[0-9a-fA-F](?=([0-9a-fA-F]))')
pairs = [match.group(0) + match.group(1) for match in
         overlap.finditer(string.hexdigits)]
print pairs

--
Hope this helps,
Steven

-- 
http://mail.python.org/mailman/listinfo/python-list


Launch script on Linux using Putty

Hello,

I have a python script which runs all the time (using the threading
library). I would like this script to run on a remote Linux OS that I
reach through Putty. The problem is, when I close the Putty command-line
window running on my Win PC, the python script stops running too.

I tried to use cron tables instead, by setting a start time and
restarting the cron process, but that's not practical.

Do you know the right way to do this?

Regards

-- 
http://mail.python.org/mailman/listinfo/python-list


Is this a wxPython 2.8.0 bug with GetItemText method of wxTreeCtrl?

I use a tree control in my application and was hoping to use the
GetItemText method to read the new label of the tree item after the
user has edited it. So in the EVT_TREE_END_LABEL_EDIT event handler
I call this method, but the old label (the value before the edit) is
returned.
Is there something else I have to do, or is this a bug?
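
For what it's worth, a minimal sketch (assuming wxPython 2.8): the
end-label-edit event fires before the new text is committed to the item,
so GetItemText() still reports the old value there; the freshly typed
text should instead be available from the event object via GetLabel():

import wx

class TreeFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, -1, "Tree label edit")
        self.tree = wx.TreeCtrl(self, -1,
                                style=wx.TR_DEFAULT_STYLE | wx.TR_EDIT_LABELS)
        self.tree.AddRoot("double-click (slowly) to edit me")
        self.Bind(wx.EVT_TREE_END_LABEL_EDIT, self.OnEndEdit, self.tree)

    def OnEndEdit(self, event):
        if not event.IsEditCancelled():
            # event.GetLabel() holds the text the user just typed
            print "new label:", event.GetLabel()
        event.Skip()

if __name__ == "__main__":
    app = wx.PySimpleApp()
    TreeFrame().Show()
    app.MainLoop()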

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Launch script on Linux using Putty

Ulysse wrote:
> Hello,
> 
> I have a python script which runs all the time (using of library
> threading). I would like this scipt to run on a remote linux Os using
> Putty. The problem is, when I close Putty command line window running
> on my Win PC, the python script stops to run too.
> 
> I tried to use cron tables instead. By setting the time and restart
> cron process, but it's not practical.
> 
> Do you know the right way to do this ?

There are a few ways to do this, in order of easiest to most involved:

1. The easiest is to run nohup on your script in the background:

$ nohup myscript.py > output.txt 2> error.txt &

Then you can disconnect but your script will keep running. Try man nohup 
  for more information.

2. Use GNU screen on your remote terminal, and detach the screen instead 
of logging off.

3. Set up your script to fork as a daemon (see the sketch below). Google 
for ["python cookbook" fork daemon] to find a few recipes for this.
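
For option 3, a minimal sketch of the usual double-fork recipe (error
handling and stdio redirection left out):

import os
import sys

def daemonize():
    if os.fork() > 0:
        sys.exit(0)    # first parent exits
    os.setsid()        # become session leader, drop the controlling tty
    if os.fork() > 0:
        sys.exit(0)    # second parent exits; child cannot reacquire a tty
    os.chdir("/")
    os.umask(0)
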
-- 
Michael Hoffman
-- 
http://mail.python.org/mailman/listinfo/python-list


How can i compare a string which is non null and empty


Hi,

how can i compare a string which is non null and empty?


i look thru the string methods here, but cant find one which does it?

http://docs.python.org/lib/string-methods.html#string-methods

In java,I do this:
if (str != null) && (!str.equals("")) 

how can i do that in python?

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How can i compare a string which is non null and empty

[EMAIL PROTECTED] wrote:

> 
> Hi,
> 
> how can i compare a string which is non null and empty?
> 
> 
> i look thru the string methods here, but cant find one which does it?
> 
> http://docs.python.org/lib/string-methods.html#string-methods
> 
> In java,I do this:
> if (str != null) && (!str.equals("")) 
> 
> how can i do that in python?

What about

if str != "":
    pass

?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How can i compare a string which is non null and empty

[EMAIL PROTECTED] schrieb:
> Hi,
> 
> how can i compare a string which is non null and empty?
> 
> 
> i look thru the string methods here, but cant find one which does it?
> 
> http://docs.python.org/lib/string-methods.html#string-methods
> 
> In java,I do this:
> if (str != null) && (!str.equals("")) 
> 
> how can i do that in python?

Strings cannot be "null" in Python.

If you want to check if a string is not empty, use "if str".

This also covers the case where "str" is None rather than an empty
string.

Georg

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Pygame Q (linux) beginner

Gabriel Genellina wrote:
> 
> En Sat, 31 Mar 2007 23:37:16 -0300, enquiring mind <"enquiring
> mind"@braindead.com> escribió:
> 
> > Running 2.4.1 Python (learning)
> > Running SUSE Linux 10
> >
> > Am learning from a new books that mostly deals with windows python and
> > Pygames called "Game Programming" by Randy Harris (2007)  His books
> > references Python 2.4.2 and Pygame 1.7.1 with these comments:
> >
> > "If you are using a Linux machine, you probably won't have the simple
> > installer that came with the windows version.  Follow the instructions
> > at http://pygame.org/install.  You may have to run a couple of scripts
> > to make everything work, but just follow the directions and you will be
> > fine."
> >
> > Could anybody suggest or make a helpful comment in view of what
> > information I have supplied.  At Chapter 5 is where the Pygame module is
> > introduced so I have a little time before I have to figure out what I
> > have to download and install.
> 
> First: read that page.
> I don't use SUSE myself, but the first hit on Google for "pygame SUSE"
> goes into the Novell site, and  SUSE 10.1 appears to include pygame
> 1.7.1release14 (or at least, you should be able to download and install
> the RPM from Novell)
> 
> --
> Gabriel Genellina
Thanks, Gabriel.  I only googled pygame and not SUSE.  The next post
says it is
already included so I will ask my programmer buddy to help me.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How can i compare a string which is non null and empty

On Apr 2, 12:22 am, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
wrote:
> Hi,
>
> how can i compare a string which is non null and empty?
>
> i look thru the string methods here, but cant find one which does it?
>
> http://docs.python.org/lib/string-methods.html#string-methods
>
> In java,I do this:
> if (str != null) && (!str.equals("")) 
>
> how can i do that in python?
The closest thing to null in Python is None.
Do you initialise the string to have a value of None? e.g. myStr = None

If so, you can test with:

if myStr is None:    # 'is' is the idiomatic way to test for None
    dosomething()

But you might find that you do not need to use None - just initialise
the string as empty, e.g. myStr = '', and then test with:

if myStr != '':
    dosomething()

or, even simpler:

if myStr:
    dosomething()

By the way, don't use 'str' as a name - it's a built-in type in Python.

-- 
http://mail.python.org/mailman/listinfo/python-list

