SQLObject 0.11.4

2010-03-04 Thread Oleg Broytman
Hello!

I'm pleased to announce version 0.11.4, a minor bugfix release of 0.11 branch
of SQLObject.


What is SQLObject
=================

SQLObject is an object-relational mapper.  Your database tables are described
as classes, and rows are instances of those classes.  SQLObject is meant to be
easy to use and quick to get started with.

SQLObject supports a number of backends: MySQL, PostgreSQL, SQLite,
Firebird, Sybase, MSSQL and MaxDB (also known as SAPDB).
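SQLObject itself is not shown in this announcement, but the tables-as-classes, rows-as-instances idea it describes can be illustrated with a stdlib-only toy mapper. This is a hedged sketch of the concept, not SQLObject's actual API; the `Person` class and column names are made up for illustration:

```python
import sqlite3

# Toy illustration of the ORM idea: the table is described by a class,
# and each row becomes an instance of that class.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")

class Person:
    def __init__(self, name):
        # Creating an instance inserts a row.
        cur = conn.execute("INSERT INTO person (name) VALUES (?)", (name,))
        self.id, self.name = cur.lastrowid, name

    @classmethod
    def get(cls, rowid):
        # Fetching a row materializes an instance.
        row = conn.execute("SELECT id, name FROM person WHERE id = ?",
                           (rowid,)).fetchone()
        obj = cls.__new__(cls)
        obj.id, obj.name = row
        return obj

p = Person("Alice")
print(Person.get(p.id).name)   # Alice
```

In SQLObject the declaration is similar in spirit (a class with column attributes), with the SQL generated for you.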


Where is SQLObject
==================

Site:
http://sqlobject.org

Development:
http://sqlobject.org/devel/

Mailing list:
https://lists.sourceforge.net/mailman/listinfo/sqlobject-discuss

Archives:
http://news.gmane.org/gmane.comp.python.sqlobject

Download:
http://cheeseshop.python.org/pypi/SQLObject/0.11.4

News and changes:
http://sqlobject.org/News.html


What's New
==========

News since 0.11.3
-----------------

* Fixed a bug in inheritance: if creation of the row failed, and the
  connection is not a transaction and is in autocommit mode, remove the
  parent row(s).

* Do not set _perConnection flag if get() or _init() is passed the same
  connection; this is often the case with select().

For a more complete list, please see the news:
http://sqlobject.org/News.html

Oleg.
-- 
 Oleg Broytman            http://phd.pp.ru/            p...@phd.pp.ru
   Programmers don't die, they just GOSUB without RETURN.
-- 
http://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations/


Python Bootcamp - Last week to Register (March 15-19, 2010)

2010-03-04 Thread Chander Ganesan

Just a reminder that there are only 3 weeks remaining to register for
the Open Technology Group's Python Bootcamp, a 5 day hands-on,
intensive, in-depth introduction to Python.  This course is confirmed
and guaranteed to run.

Worried about the costs of air and hotel to travel for training?  Don't!
 Our All-Inclusive Packages provide round-trip airfare and hotel
accommodations and are available for all students attending from the
Continental US, parts of Canada, and parts of Europe!  Best of all,
these packages can be booked up to March 12, 2010!

For complete course outline/syllabus, or to enroll, call us at
877-258-8987 or visit our web site at:

http://www.otg-nc.com/python-bootcamp

OTG's Python Bootcamp is a 5 day intensive course that teaches
programmers how to design, develop, and debug applications using the
Python programming language.  Over a 5 day period through a set of
lectures, demonstrations, and hands-on exercises, students will learn 
how to develop powerful applications using Python and integrate their 
new found Python skills in their day-to-day job activities.  Students 
will also learn how to utilize Python's Database API to interface with
relational databases.

This Python course is available for on-site delivery world-wide (we
bring the class to you) for a group as small as 3, for as little as
$8,000 (including instructor travel & per-diem)!

Our course is guaranteed to run, regardless of enrollment, and available
in an all inclusive package that includes round-trip airfare, 5 nights
of hotel accommodation, shuttle services (to/from the airport, to/from
our facility, and to/from local eateries/shopping), and our training.
All-inclusive packages are priced from $2,495 for the 5 day course
(course only is $2,295).

For more information - or to schedule an on-site course, please contact
us at 877-258-8987 .

The Open Technology Group is the world leader in the development and
delivery of training solutions focused around Open Source technologies.

--
Chander Ganesan
Open Technology Group, Inc.
One Copley Parkway, Suite 210
Morrisville, NC  27560
919-463-0999/877-258-8987
http://www.otg-nc.com





--
http://mail.python.org/mailman/listinfo/python-announce-list

   Support the Python Software Foundation:
   http://www.python.org/psf/donations/


Re: PYTHONPATH and eggs

2010-03-04 Thread geoffbache

On Mar 4, 3:24 am, David Cournapeau courn...@gmail.com wrote:
 On Wed, Mar 3, 2010 at 7:14 PM, geoffbache geoff.ba...@jeppesen.com wrote:
  Unfortunately, the location from PYTHONPATH ends up after the eggs in
  sys.path so I can't persuade Python to import my version. The only way
  I've found to fix it is to copy the main script and manually hack
  sys.path at the start of it which isn't really very nice. I wonder if
  there is any better way as I can't be the first person to want to do
  this, surely?

 One way is to never install things as eggs: I have a script
 hard_install which forces things to always install with
 --single-externally-managed blablabla. This has worked very well for
 me, but may not always be applicable (in particular if you are on a
 platform where building things from sources is difficult).

Thanks for the tips. Is your script generic at all? I wonder if you'd
be prepared to share it?

Figuring out virtualenv would also be an option, as would figuring out
how to build my own egg, but both these solutions feel like overkill
to me just to enable a small bit of tweaking.
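For what it's worth, the "manually hack sys.path" workaround mentioned above can at least be isolated into a small helper instead of being copied into each script. A hedged sketch (the helper name is made up; it assumes PYTHONPATH entries are separated by `os.pathsep`):

```python
import os
import sys

def promote_pythonpath():
    """Move entries listed in PYTHONPATH to the front of sys.path,
    ahead of any egg directories that setuptools prepended."""
    wanted = [p for p in os.environ.get("PYTHONPATH", "").split(os.pathsep) if p]
    rest = [p for p in sys.path if p not in wanted]
    sys.path[:] = wanted + rest

promote_pythonpath()
```

Calling this once at interpreter startup (e.g. from a wrapper script) makes the PYTHONPATH copies win the import race.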

/Geoff
-- 
http://mail.python.org/mailman/listinfo/python-list


Don't work __getattr__ with __add__

2010-03-04 Thread Андрей Симурзин
There is an object of class A inside a container class tmpA. Not all
methods from A are present in tmpA. So, for example:
A + B -- yes; tmpA + B -- no. I am trying to call methods from A on a
tmpA. I can call a simple method such as change(), but __add__ doesn't
work. If I remove the inheritance from object, the code works. Help me,
please
#--
class A(object):
    def __init__( self, x, y ):
        self.x = x
        self.y = y
        pass
    #---
    def __add__( self, arg ):
        tmp1 = self.x + arg.x
        tmp2 = self.y + arg.y
        return tmpA( A( tmp1, tmp2 ) )

    def change( self, x, y ):
        self.x = x
        self.y = y
        pass
    pass
#--
class tmpA( object ):
    def __init__( self, theA ):
        self.A = theA
        pass
    #---
    def __call__( self ):
        return self.A
    #---
    def __coerce__( self, *args ):
        return None
    #---
    def __getattr__( self, *args ):
        name = args[ 0 ]
        try:
            attr = None
            exec "attr = self.__call__().%s" % name
            return attr
        except:
            raise AttributeError
#--
class B( object ):
    def __init__( self, x, y):
        self.x = x
        self.y = y
        pass
#-
a = A( 1, 2 )
b = B( 3, 4 )
tmp_a = a + b  # well
tmp_a.change( 0, 0 ) # very well !!!
v = tmp_a + b  # TypeError: unsupported operand type(s) for +: 'tmpA' and 'B'
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Don't work __getattr__ with __add__

2010-03-04 Thread Chris Rebert
On Thu, Mar 4, 2010 at 12:25 AM, Андрей Симурзин asimur...@gmail.com wrote:
 It is object of the class A, in conteiner's class tmpA. Not all method
 from A are in the tmpA. So for exapmle:
 A + B -- yes , tmpA + B no. I try to call method from A for tmpA. I
 can to call simple method,  such as change(), bit __add__ -  don't
 work. If to remove inheritance from object, the code work's. Help me,
 please

Some clarity has been lost in translation, but I think I get what you're saying.
__add__ and the other double-underscore special methods are not looked
up using __getattr__ or __getattribute__, hence trying to do addition
on tmpA, which does not define an __add__ method, fails.

For a full explanation, read:
http://docs.python.org/reference/datamodel.html#special-method-lookup-for-new-style-classes
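A minimal demonstration of the rule described above, using a made-up `Proxy` class: ordinary attribute access is routed through `__getattr__`, but the `+` operator looks `__add__` up on the type directly and never consults `__getattr__`:

```python
class Proxy(object):
    def __init__(self, target):
        self._target = target

    def __getattr__(self, name):
        # Intercepts ordinary attribute lookups only.
        return getattr(self._target, name)

p = Proxy(3)
print(p.bit_length())  # delegated via __getattr__: prints 2

try:
    p + 1  # __add__ is looked up on type(p) itself, so delegation never happens
except TypeError as exc:
    print("unsupported:", exc)
```

The fix for the original post is therefore to define `__add__` (and any other needed special methods) on `tmpA` itself, delegating explicitly.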

Cheers,
Chris
--
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Don't work __getattr__ with __add__

2010-03-04 Thread Andrey Simurzin
On 4 мар, 11:38, Chris Rebert c...@rebertia.com wrote:
 On Thu, Mar 4, 2010 at 12:25 AM, Андрей Симурзин asimur...@gmail.com wrote:
  It is object of the class A, in conteiner's class tmpA. Not all method
  from A are in the tmpA. So for exapmle:
  A + B -- yes , tmpA + B no. I try to call method from A for tmpA. I
  can to call simple method,  such as change(), bit __add__ -  don't
  work. If to remove inheritance from object, the code work's. Help me,
  please

 Some clarity has been lost in translation, but I think I get what you're 
 saying.
 __add__ and the other double-underscore special methods are not looked
 up using __getattr__ or __getattribute__, hence trying to do addition
 on tmpA, which does not define an __add__ method, fails.

 For a full explanation, 
 read:http://docs.python.org/reference/datamodel.html#special-method-lookup...

 Cheers,
 Chris
 --http://blog.rebertia.com

Thank you very much
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Draft PEP on RSON configuration file format

2010-03-04 Thread Paul Rubin
mk mrk...@gmail.com writes:
 OK, but how? How would you make up e.g. for JSON's lack of comments?

Modify the JSON standard so that JSON 2.0 allows comments.
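Short of amending the standard, a common stopgap (not sanctioned by the JSON spec, and not something Paul is endorsing here) is to strip comment lines before handing the text to the parser. A deliberately naive sketch that only handles whole-line `//` comments:

```python
import json

def loads_with_comments(text):
    """Parse JSON after dropping lines whose first non-blank characters
    are '//' (a common, non-standard comment convention).  Inline
    comments and '//' inside strings are deliberately not handled."""
    kept = [line for line in text.splitlines()
            if not line.lstrip().startswith("//")]
    return json.loads("\n".join(kept))

config = loads_with_comments('''
// database settings
{
    "host": "localhost",
    "port": 5432
}
''')
print(config["port"])   # 5432
```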

 OTOH, if YAML produces net benefit for as few as, say, 200 people in
 real world, the effort to make it has been well worth it.

Not if 200,000 other people have to deal with it but don't receive the
benefit.

 http://myarch.com/why-xml-is-bad-for-humans
 http://www.ibm.com/developerworks/xml/library/x-sbxml.html

You might like this one too:

  http://www.schnada.de/grapt/eriknaggum-xmlrant.html

 I also have to maintain a few applications that internally use XML as
 data format: while they are tolerable, they still leave something to be
 desired, as those applications are really slow for larger datasets,
I thought we were talking about configuration files, not larger datasets.

 There are demonstrable benefits to this too: I for one am happy that
 ReST is available for me and I don't have to learn a behemoth such as
 DocBook to write documentation.

DocBook is so far off my radar I'd have never thought of it.  I just now
learned that it's not Windows-only.  There is already POD, Pydoc,
Texinfo, a billion or so flavors of wiki markup, vanilla LaTeX, and most
straightforwardly of all, plain old ascii.  ReST was another solution in
search of a problem.
-- 
http://mail.python.org/mailman/listinfo/python-list


My four-yorkshireprogrammers contribution

2010-03-04 Thread Gregory Ewing

MRAB wrote:


Mk14 from Science of Cambridge, a kit with hex keypad and 7-segment
display, which I had to solder together, and also make my own power
supply. I had the extra RAM and the I/O chip, so that's 256B (including
the memory used by the monitor) + 256B additional RAM + 128B more in the
I/O chip.


Luxury! Mine was a Miniscamp, based on a design published in
Electronics Australia in the 70s. 256 bytes RAM, 8 switches
for input, 8 LEDs for output. No ROM -- program had to be
toggled in each time.

Looked something like this:

  http://oldcomputermuseum.com/mini_scamp.html

except that mine wasn't built from a kit and didn't look
quite as professional as that one.

It got expanded in various ways, of course (hacked would
be a more accurate word). Memory expanded to 1.5KB, hex keyboard
and display (built into an old calculator case), cassette tape
interface based on a circuit scrounged from another magazine
article (never quite got it to work properly, wouldn't go at
more than about 4 bytes/sec, probably because I used resistors
and capacitors salvaged from old TV sets). Still no ROM, though.
Had to toggle in a bootstrap to load the keyboard/display
monitor (256 bytes) from tape.

Somewhere along the way I replaced the CPU with a 6800 -
much nicer instruction set! (Note for newtimers -- that's
*two* zeroes, not three.)

During that period, my holy grail was alphanumeric I/O. I was
envious of people who wrote articles about hooking surplus
teleprinters, paper tape equipment and other such cool
hardware to their homebrew micros -- sadly, no such thing was
available in NZ.

Then one day a breakthrough came -- a relative who worked
in the telephone business (government-owned in NZ at the time)
managed to get me an old teleprinter. It was Baudot, not ASCII,
which meant uppercase only, not much punctuation, and an
annoyingly stateful protocol involving letters/figures shift
characters, but it was heaps better than nothing. A bit of
hackery, of both hardware and software varieties, and I got
it working. It was as noisy as hell, but I could input and
output ACTUAL LETTERS! It was AMAZING!

As a proof of concept, I wrote an extremely small BASIC
interpreter that used one-character keywords. The amount of
room left over for a program was even smaller, making it
completely useless. But it worked, and I had fun writing it.

One thing I never really got a grip on with that computer
was a decent means of program storage. Towards the end of its
life, I was experimenting with trying to turn an old 8-track
cartridge player into a random access block storage device,
using a tape loop. I actually got it to work, more or less,
and wrote a small TOS (Tape Operating System) for it that
could store and retrieve files. But it was never reliable
enough to be practical.

By that stage, umpteen layers of hackery using extremely
dubious construction techniques had turned the machine into
something of a Frankenstein monster. Calling it a bird's nest
would have been an insult to most birds. I wish I'd taken
some photos, they would have been good for scaring potential
future grandchildren.

My next computer was a Dick Smith Super 80 (*not* System 80,
which would have been a much better machine), Z80-based, built
from a kit. I had a lot of fun hacking around with that, too...
but that's another story!

--
Greg
--
http://mail.python.org/mailman/listinfo/python-list


Re: Docstrings considered too complicated

2010-03-04 Thread Gregory Ewing

D'Arcy J.M. Cain wrote:


And that is why text files in MS-DOS and CP/M before it end with ^Z.
They needed a way to tell where the end of the information was.  Why
they used ^Z (SUB - Substitute) instead of ^C (ETX - End of TeXt) or
even ^D (EOT - End Of Transmission) is anyone's guess.


Well, ^C is what people used for interrupting their BASIC
programs. And ^D would have made it almost compatible with
unix, which would have been far too sensible!

My guess is that it was chosen for its mnemonic value --
end of alphabet, end of file.

Also remember there were programs like Wordstar that used
control key combinations for all manner of things. It might
have been the only key left on the keyboard that wasn't
used for anything else.

--
Greg
--
http://mail.python.org/mailman/listinfo/python-list


Re: Docstrings considered too complicated

2010-03-04 Thread Gregory Ewing

Richard Brodie wrote:


It goes back to ancient PDP operating systems, so may well
predate Unix, depending which exact OS was the first to use it.


Yes, I think it was used in RT-11, which also had
block-oriented disk files.

There were two kinds of devices in RT-11, character
and block, and the APIs for dealing with them were
quite different. They hadn't fully got their heads
around the concept of device independent I/O in
those days, although they were trying.

--
Greg
--
http://mail.python.org/mailman/listinfo/python-list


Re: Docstrings considered too complicated

2010-03-04 Thread Gregory Ewing

Steve Holden wrote:


Puts me in mind of Mario Wolczko's early attempts to implement SmallTalk
on a VAX 11/750. The only bitmapped display we had available was a Three
Rivers PERQ, connected by a 9600bps serial line. We left it running at
seven o'clock one evening, and by nine am the next day it had brought up
about two thirds of the initial VM loader screen ...


A couple of my contemporary postgraduate students worked on
getting Smalltalk to run on an Apple Lisa. Their first attempt
at a VM implementation was written in Pascal, and it wasn't
very efficient. I remember walking into their room one day
and seeing one of them sitting there watching it boot, drawing
stuff on the screen  v...e...r...y...   s...l...o...w...l...y...

At least their display was wired directly to the machine
running the code. I hate to think what bitmapped graphics at
9600 baud would be like!

--
Greg
--
http://mail.python.org/mailman/listinfo/python-list


Re: case do problem

2010-03-04 Thread Michael Rudolf

Am 03.03.2010 18:38, schrieb Tracubik:

On Wed, 03 Mar 2010 09:39:54 +0100, Peter Otten wrote:



def loop():
    count = 0
    m = 0
    lookup = {1: 1, 2: 10, 3: 100}
    for iterations in range(20): # off by one
        # ...
        print "%2d %1d %3d" % (iterations, count, m) # ...
        if generic_condition():
            count += 1
        # ...
        m = lookup.get(count, m)
        if count == 4:
            break

if __name__ == "__main__":
    loop()
    print "That's all, folks"

Something must be wrong with me today because I find the Pascal code
/more/ readable..


I was thinking the same: Pascal seems to generate much more readable code.
I'm a newbie, so my opinion is probably wrong, but I still think that
not having CASE OF and REPEAT UNTIL code blocks results in difficult-to-
read code.


That is probably a side-effect of literal code translation.

No one writing in Python in the first place would structure it the way
you did.



 Tracubik wrote:

 hi, i've to convert from Pascal this code:

 program loop;

 function generic_condition: boolean;
 begin
 generic_condition := random > 0.7
 end;

 procedure loop;
 var
 iterations, count, m: integer;
 begin
 iterations := 0;
 count := 0;
 m := 0;
 repeat
iterations := iterations+1;
(*...*)
writeln(iterations:2, count:2, m:4);
(*...*)
if generic_condition then
inc(count);
(*...*)
case count of
1: m := 1;
2: m := 10;
3: m := 100
end
 until (count = 4) or (iterations = 20)
 end;

 begin
 loop;
 writeln('That''s all, folks')
 end.

Hmmm, let's see...
We have somewhat obscure and complicated logic.

If we cannot get rid of it, let's hide it:

class Loop:
    def __init__(self, maxiterations=0, maxcount=1, m=0):
        assert generic_condition.__call__
        self.maxiterations = maxiterations
        self.maxcount = maxcount
        self.m = m
        self.count = 0
        self.iterations = 0

    def __iter__(self):
        while True:
            yield self.next()

    def next(self):
        self.iterations += 1
        if self.iterations > self.maxiterations:
            raise StopIteration
        if generic_condition():
            self.count += 1
        if self.count >= self.maxcount:
            raise StopIteration
        self.m = (None, 1, 10, 100)[self.count]
        return self, self.m

# So we have:

#from complicatedlogic import Loop
from random import random

def generic_condition():
    return random() > 0.7

if __name__ == '__main__':
    for loop, m in Loop(maxiterations=20, maxcount=4):
        print("%2d %1d %3d" % (loop.iterations, loop.count, m))
    print("That's all, folks")


better? worse? I honestly do not know.

Note that while this is valid py3 and runs as intended, there might be 
some off-by-one or something left.


Also, this is of course not production level code ;)

Regards,
Michael
--
http://mail.python.org/mailman/listinfo/python-list


Re: Old farts playing with their toys

2010-03-04 Thread Gregory Ewing

D'Arcy J.M. Cain wrote:


Did you ever play Star Trek with sound effects?


Not on that machine, but I played a version on an Apple II
that had normal speaker-generated sounds. I can still
remember the sound that a photon torpedo (a # character IIRC)
made as it lurched its way drunkenly across the sector and
hit its target. Bwoop... bwoop... bwoop... bwoop... bwoop...
bwoowoowoowoowoop! (Yes, a photon torpedo makes exactly
five bwoops when it explodes. Apparently.)

I carried a listing of it around with me for many years
afterwards, and attempted to port it to various machines,
with varying degrees of success. The most successful port
was for a BBC Master that I picked up in a junk shop one
day.

But I couldn't get the sounds right, because the BBC's
sound hardware was too intelligent. The Apple made sounds
by directly twiddling the output bit connected to the
loudspeaker, but you can't do that with a BBC -- you
have to go through its fancy 3-voice waveform generating
chip. And I couldn't get it to ramp up the pitch rapidly
enough to make a proper photon-torpedo bwoop sound. :-(

I also discovered that the lovely low-pitched beep that
the original game used to make at the command prompt had
a lot to do with the resonant properties of the Apple
II's big plastic case. Playing a square wave through
something too high-fidelity doesn't sound the same at
all.


I was never able to
get it to work but supposedly if you put an AM radio tuned to a
specific frequency near the side with the I/O card it would generate
static that was supposed to be the sound of explosions.

Of course, the explosions were happening in a vacuum so maybe the
silence was accurate.  :-)


Something like that might possibly happen for real. I could
imagine an explosion in space radiating electromagnetic
noise that would sound explosion-like if you picked it
up on a radio.

This might explain why the Enterprise crew could hear things
exploding when they shot them. They were listening in at RF!

--
Greg
--
http://mail.python.org/mailman/listinfo/python-list


Re: Docstrings considered too complicated

2010-03-04 Thread Gregory Ewing

Steven D'Aprano wrote:

True, but one can look at best practice, or even standard practice. 
For Python coders, using docstrings is standard practice if not best 
practice. Using strings as comments is not.


In that particular case, yes, it would be possible to
objectively examine the code and determine whether docstrings
were being used as opposed to above-the-function comments.

However, that's only a very small part of what goes to make
good code. Much more important are questions like: Are the
comments meaningful and helpful? Is the code reasonably
self-explanatory outside of the comments? Is it well
modularised, and common functionality factored out where
appropriate? Are couplings between different parts
minimised? Does it make good use of library code instead
of re-inventing things? Is it free of obvious security
flaws?

You can't *measure* these things. You can't objectively
boil them down to a number and say things like This code
is 78.3% good; the customer requires it to be at least
75% good, so it meets the requirements in that area.

That's the way in which I believe that software engineering
is fundamentally different from hardware engineering.

--
Greg
--
http://mail.python.org/mailman/listinfo/python-list


Re: taking python enterprise level?...

2010-03-04 Thread simn_stv
Till I think I absolutely need to trade off easier and less
complicated code, a better DB structure (from a relational perspective),
and generally fewer headaches for speed, I think I'll stick with the
joins for now!... ;)

The thought of denormalization really doesn't appeal to me...
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding to a module's __dict__?

2010-03-04 Thread Gregory Ewing

Roy Smith wrote:

The idea is I want to put in the beginning of the module:

declare('XYZ_FOO', 0, The foo property)
declare('XYZ_BAR', 1, The bar property)
declare('XYZ_BAZ', 2, reserved for future use)


Okay, that seems like a passable excuse.

One thing to watch out for is that if your 'declare' function
is defined in a different module, when it calls globals() it
will get the globals of the module it's defined in, not the
one it's being called from.

There's a hackish way of getting around that, but it might
be better to put all of these symbols in a module of their
own and import them from it. The declare() function can then
be defined in that same module so that it accesses the right
globals.

--
Greg
--
http://mail.python.org/mailman/listinfo/python-list


Re: Queue peek?

2010-03-04 Thread Gregory Ewing

Floris Bruynooghe wrote:

I was just wondering if
other people ever missed the q.put_at_front_of_queue() method or if
it is just me.


Sounds like you don't want a queue, but a stack. Or
maybe a double-ended queue.
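For reference, `collections.deque` covers the double-ended case: `appendleft()` gives exactly the put-at-front-of-queue behaviour being asked about:

```python
from collections import deque

q = deque()
q.append("normal job")        # FIFO arrivals at the right end
q.append("another job")
q.appendleft("urgent job")    # jump the queue at the left end

print(q.popleft())   # urgent job
print(q.popleft())   # normal job
```

(Unlike `Queue.Queue`, a bare deque does no locking or blocking, so for producer-consumer use you'd need to add synchronization yourself.)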

--
Greg
--
http://mail.python.org/mailman/listinfo/python-list


Re: Pylint Argument number differs from overridden method

2010-03-04 Thread Jean-Michel Pichavant

Wanderer wrote:

On Mar 3, 2:33 pm, Robert Kern robert.k...@gmail.com wrote:
  

On 2010-03-03 11:39 AM, Wanderer wrote:



Pylint W0221 gives the warning
Argument number differs from overridden method.
  
Why is this a problem? I'm overriding the method to add additional

functionality.
  

There are exceptions to every guideline. Doing this could easily be a mistake,
so it's one of the many things that Pylint checks for. Silence the warning if
you like.

--
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth.
   -- Umberto Eco



Thanks. I was just wondering if I was overlooking something about
inheritance.
  
This is only my opinion, but you get this warning because of 2 distinct 
issues:

1/ you just made a basic mistake in your signature and need to correct it
2/ you did not make any mistake in the signature, but this warning may 
reveal a (small) flaw in your class design.



I don't know the exact context for your code, but it's better to have a 
consistent interface across your methods and mask the implementation 
details from the user.
In your case, the getRays method may always ask for the lambda 
parameter and just ignore it in one of its implementations.
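A sketch of that suggestion, with hypothetical names (the thread only mentions `getRays` and its lambda argument, so the classes and the `wavelength` parameter here are invented):

```python
class Source(object):
    """Hypothetical base class: every source yields rays for a wavelength."""
    def getRays(self, wavelength):
        raise NotImplementedError

class MonochromaticSource(Source):
    def getRays(self, wavelength=None):
        # Same arity as the base method, so pylint's W0221 stays quiet;
        # this implementation simply ignores `wavelength`.
        return ["ray", "ray", "ray"]

print(len(MonochromaticSource().getRays()))   # 3
```

Callers can then treat every `Source` uniformly, passing a wavelength whether or not a particular implementation uses it.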


And don't write empty docstrings to trick pylint. Either write them, or 
remove this rule; otherwise you are losing all the tool's benefits.



JM


--
http://mail.python.org/mailman/listinfo/python-list


Partly erratic wrong behaviour, Python 3, lxml

2010-03-04 Thread Jussi Piitulainen
Dear group,

I am observing weird semi-erratic behaviour that involves Python 3 and
lxml, is extremely sensitive to changes in the input data, and only
occurs when I name a partial result. I would like some help with this,
please. (Python 3.1.1; GNU/Linux; how do I find lxml version?)

The test script regress/Tribug is at the end of this message, with a
snippet to show the form of regress/tridata.py where the XML is.

What I observe is this. Parsing an XML document (wrapped in BytesIO)
with lxml.etree.parse and then extracting certain elements with xpath
sometimes fails so that I get three times the correct number of
elements. On the first run of the script, it fails in one way, and on
each subsequent run in another way: subsequent runs are repeatable.

Second, the bug only occurs when I give a name to the result from
lxml.etree.parse! This is seen below in the lines labeled name1 or
name2 that sometimes exhibit the bug, and lines labeled nest1 or
nest2 that never do. That is, this fails in some complex way:

result = etree.parse(BytesIO(body))
n = len(result.xpath(title))

This fails to fail:

n = len(etree.parse(BytesIO(body)).xpath(title))

I have failed to observe the error interactively. I believe the
erroneous result lists are of the form [x x x y y y z z z] when they
should be [x y z] but I do not know if the x's are identical or
copies. I will know more later, of course, when I have written more
complex tests, unless somebody can lead me to a more profitable way of
debugging this.

Two versions of the test runs follow, before and after a trivial
change to the test data. Since the numbers are repeated n's of the
above snippets, they should all be the same: 5 observed 1000 times.

A first run after removing regress/tridata.pyc:

[1202] $ regress/Tribug 
name1: size 5 observed 969 times
name1: size 15 observed 31 times
name2: size 5 observed 1000 times
nest1: size 5 observed 1000 times
nest2: size 5 observed 1000 times

All subsequent runs, with regress/tridata.pyc recreated:

[1203] $ regress/Tribug 
name1: size 5 observed 1000 times
name2: size 5 observed 978 times
name2: size 15 observed 22 times
nest1: size 5 observed 1000 times
nest2: size 5 observed 1000 times

Adding an empty comment <!-- --> to the XML document;
a first run:

[1207] $ regress/Tribug 
name1: size 5 observed 992 times
name1: size 15 observed 8 times
name2: size 5 observed 1000 times
nest1: size 5 observed 1000 times
nest2: size 5 observed 1000 times

And subsequent runs:

[1208] $ regress/Tribug 
name1: size 5 observed 991 times
name1: size 15 observed 9 times
name2: size 5 observed 998 times
name2: size 15 observed 2 times
nest1: size 5 observed 1000 times
nest2: size 5 observed 1000 times

---start of regress/Tribug---
#! /bin/env python3
# -*- mode: Python; -*-

from io import BytesIO
from lxml import etree
from tridata import body, title

def naming():
    sizes = dict()
    for k in range(0,1000):
        result = etree.parse(BytesIO(body))
        n = len(result.xpath(title))
        sizes[n] = 1 + sizes.get(n, 0)
    return sizes

def nesting():
    sizes = dict()
    for k in range(0,1000):
        n = len(etree.parse(BytesIO(body)).xpath(title))
        sizes[n] = 1 + sizes.get(n, 0)
    return sizes

def report(label, sizes):
    for size, count in sizes.items():
        print('{}: size {} observed {} times'
              .format(label, size, count))

report('name1', naming())
report('name2', naming())
report('nest1', nesting())
report('nest2', nesting())
---end of regress/Tribug---

The file regress/tridata.py contains only the two constants. I omit
most of the XML. It would be several screenfuls.

---start of regress/tridata.py---
body = b'''<OAI-PMH xmlns="http://www.opena...
...
</OAI-PMH>
'''

title = '//*[name()="record"]//*[name()="dc:title"]'
---end of regress/tridata.py---
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: case do problem

2010-03-04 Thread Gregory Ewing

Peter Otten wrote:

Something must be wrong with me today because I find the Pascal code /more/ 
readable...


Actually I don't find either of them very readable. The
control flow is pretty convoluted either way. It might
be better if it used real-life variable and function
names to give some context, though.

--
Greg
--
http://mail.python.org/mailman/listinfo/python-list


Re: A scopeguard for Python

2010-03-04 Thread Jean-Michel Pichavant

Alf P. Steinbach wrote:
 From your post, the scope guard technique is used to ensure some 
desired cleanup at the end of a scope, even when the scope is exited 
via an exception. This is precisely what the try: finally: syntax is 
for. 


You'd have to nest it. That's ugly. And more importantly, now two 
people in this thread (namely you and Mike) have demonstrated that 
they do not grok the try functionality and manage to write incorrect 
code, even arguing that it's correct when informed that it's not, so 
it's a pretty fragile construct, like goto.


You want to execute some cleanup when things go wrong, use try/except. 
You want to do it when things go right, use try/else. You want to 
clean up no matter what happens, use try/finally.


There is no need of any Cleanup class, except for some technical 
alternative  concern.


JM


--
http://mail.python.org/mailman/listinfo/python-list


Re: Method / Functions - What are the differences?

2010-03-04 Thread Bruno Desthuilliers

Eike Welk a écrit :

Bruno Desthuilliers wrote:

John Posner a écrit :

Done -- see http://wiki.python.org/moin/FromFunctionToMethod

Done and well done !-)
Thanks again for the good job John.


I like it too, thanks to both of you!

I have two small ideas for improvement: 
- Swap the first two paragraphs. First say what it is, and then give the 
motivation.


Mmm... As far as I'm concerned, I like it the way it is. John?

- The section about the descriptor protocol is a bit difficult to 
understand.


I may eventually try to rework it a bit when I'll have time (death march 
here currently, duh...)


But judging from the official descriptor documentation, it seems 
to be hard to explain


Not that easy, indeed. I once posted on c.l.py a longer explanation of 
the whole lookup rule stuff, that IIRC included a naive python 
implementation example. Might be worth trying to google for it and 
turning it into another overview article.
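As a rough idea of what such a naive implementation looks like (this is illustrative only, not the interpreter's real machinery): a function-like descriptor whose `__get__` produces the bound method:

```python
class Method:
    """Naive stand-in for the function descriptor: attribute lookup on
    an instance triggers __get__, which returns a bound method."""
    def __init__(self, func):
        self.func = func

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self.func              # accessed on the class: plain function
        def bound(*args, **kwargs):       # accessed on an instance: bind obj
            return self.func(obj, *args, **kwargs)
        return bound

class Greeter:
    hello = Method(lambda self, name: "hello " + name)

print(Greeter().hello("world"))   # hello world
```

Real functions implement `__get__` natively, which is why `instance.method` yields a bound method without any extra code.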


: The official documentation is nearly incomprehensible 
(IMHO).


I should probably have a look at it !-)
--
http://mail.python.org/mailman/listinfo/python-list


Re: Working group for Python CPAN-equivalence?

2010-03-04 Thread Gregory Ewing

Peter Billam wrote:


A very important thing about CPAN modules is the consistent
basic install method:   perl Makefile.PL ; make ; make install


Well, we more or less have that with Python, too:

  python setup.py install

It may not always work smoothly, but it's the
one obvious thing to try when you've downloaded
a Python package in source form.

--
Greg
--
http://mail.python.org/mailman/listinfo/python-list


Re: Working group for Python CPAN-equivalence?

2010-03-04 Thread Olof Bjarnason
2010/3/4 Gregory Ewing greg.ew...@canterbury.ac.nz:
 Peter Billam wrote:

 A very important thing about CPAN modules is the consistent
 basic install method:   perl Makefile.PL ; make ; make install

 Well, we more or less have that with Python, too:

  python setup.py install

 It may not always work smoothly, but it's the
 one obvious thing to try when you've downloaded
 a Python package in source form.

 --
 Greg
 --
 http://mail.python.org/mailman/listinfo/python-list


I want to say thanks to all who have given information regarding the
original question - for example the blog mention was valuable, as well
as the distinction between CPAN and cpan.

It was definitely *not* my intention to start another "Where is CPAN
for Python?" thread, but it seems we're already there. :)


-- 
http://olofb.wordpress.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Generic singleton

2010-03-04 Thread mk

Steven D'Aprano wrote:
Groan. What is it with the Singleton design pattern? It is one of the 
least useful design patterns, and yet it's *everywhere* in Java and C++ 
world.


It's useful when larking about in language internals for learning 
purposes, for instance. I don't recall ever actually having significant 
need for it.




 def __new__(cls, impclass, *args, **kwargs):
 impid = id(impclass)


Yuck. Why are you using the id of the class as the key, instead of the 
class itself?


Because I didn't know whether it was safe to do that: as Arnaud pointed 
out, the *type* of builtins is hashable.
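

A sketch of the generic singleton keyed on the class itself - classes are 
hashable, so no id() is needed (minimal, illustrative version):

```python
class Singleton(object):
    _instances = {}

    def __new__(cls, impclass, *args, **kwargs):
        # Key on the class object directly; classes are hashable.
        if impclass not in cls._instances:
            cls._instances[impclass] = impclass(*args, **kwargs)
        return cls._instances[impclass]

s1 = Singleton(dict)
s2 = Singleton(dict)
assert s1 is s2                    # same dict instance every time
assert Singleton(list) is not s1   # distinct singleton per class
```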


Regards,
mk



--
http://mail.python.org/mailman/listinfo/python-list


Re: memory usage, temporary and otherwise

2010-03-04 Thread mk

Bruno Desthuilliers wrote:

mk a écrit :

Obviously, don't try this on low-memory machine:


a={}
for i in range(1000):



Note that in Python 2, this will build a list of 1000 int objects.
You may want to use xrange instead...


Huh? I was under the impression that some time after 2.0, range was made 
to work under the covers like xrange when used in a loop? Or is it 3.0 
that does that?
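
For what it's worth, CPython 2's range() always builds a full list (the 
loop optimization never materialized); it is Python 3 whose range() is 
lazy. A quick check in Python 3:

```python
import sys

r = range(10**6)   # Python 3: lazy range object, tiny constant footprint
l = list(r)        # materialises a million ints, as Python 2's range() does

assert sys.getsizeof(r) < sys.getsizeof(l)
assert r[999999] == 999999     # indexing works without materialising
```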



And this build yet another list of 1000 int objects.


Well this explains much of the overhead.


(overly simplified)

When an object is garbage-collected, the memory is not necessarily
returned to the system - and the system doesn't necessarily claim it
back either until it _really_ needs it.

This avoids a _lot_ of possibly useless work for both the python
interpreter (keeping already allocated memory costs less than immediately
returning it, just to try and allocate some more memory a couple
instructions later) and the system (ditto - FWIW, how linux handles
memory allocations is somewhat funny, if you ever programmed in C).


Ah! That explains a lot. Thanks to you, I have again expanded my 
knowledge of Python!


Hmm, I would definitely like to read something on how CPython handles 
memory on the Python wiki. Thanks for that doc on the wiki on functions & 
methods to you and John Posner, I'm reading it every day like a bible. ;-)



Regards,
mk

--
http://mail.python.org/mailman/listinfo/python-list


Re: Docstrings considered too complicated

2010-03-04 Thread Ben Finney
Gregory Ewing greg.ew...@canterbury.ac.nz writes:

 However, that's only a very small part of what goes to make good code.
 Much more important are questions like: Are the comments meaningful
 and helpful? Is the code reasonably self-explanatory outside of the
 comments? Is it well modularised, and common functionality factored
 out where appropriate? Are couplings between different parts
 minimised? Does it make good use of library code instead of
 re-inventing things? Is it free of obvious security flaws?

 You can't *measure* these things. You can't objectively boil them down
 to a number and say things like This code is 78.3% good; the customer
 requires it to be at least 75% good, so it meets the requirements in
 that area.

That doesn't reduce the value of automating and testing those measures
we *can* make.

 That's the way in which I believe that software engineering is
 fundamentally different from hardware engineering.

Not at all. There are many quality issues in hardware engineering that
defy simple measurement; that does not reduce the value of standardising
quality minima for those measures that *can* be achieved simply.

-- 
 \“Spam will be a thing of the past in two years' time.” —Bill |
  `\ Gates, 2004-01-24 |
_o__)  |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Interest check in some delicious syntactic sugar for except:pass

2010-03-04 Thread Bruno Desthuilliers

Oren Elrad a écrit :

Howdy all, longtime appreciative user, first time mailer-inner.

I'm wondering if there is any support (tepid better than none) for the
following syntactic sugar:

silence:
 block

-

try:
block
except:
pass



Hopefully not.


The logic here is that there are a ton of except: pass statements[1]
floating around in code that do not need to be there.


s/do not need to be/NEVER should have been at first/



--
http://mail.python.org/mailman/listinfo/python-list


How to login https sever with inputing account name and password?

2010-03-04 Thread Karen Wang
Hi all,

I want to use python to access to https server, like
https://212.218.229.10/chinatest/

If I open it from IE, I see a pop-up login window like this 



I tried several ways but always only get the page for HTTP Error 401.2 -
Unauthorized. (myusername and mypassword are both correct)

Below is my code:

import urllib2

values = {

'user' : myusername, 

'pass' : mypassword }

data = urllib2.urlencode(values)

t = urllib2.urlopen('https://212.218.229.10/chinatest/',data)

print t.read()

where I am wrong ?

Thank you very much.

Best Regards

Karen Wang
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: NoSQL Movement?

2010-03-04 Thread mk

Jonathan Gardner wrote:


When you are starting a new project and you don't have a definitive
picture of what the data is going to look like or how it is going to
be queried, SQL databases (like PostgreSQL) will help you quickly
formalize and understand what your data needs to do. In this role,
these databases are invaluable. I can see no comparable tool in the
wild, especially not OODBMS.


FWIW, I talked to my promoting professor about the subject, and he 
claimed that there's quite a number of papers on OODBMS that point to 
fundamental problems with constructing capable query languages for 
OODBMS. Sadly, I have not had time to get & read those sources.


Regards,
mk

--
http://mail.python.org/mailman/listinfo/python-list


Re: Generic singleton

2010-03-04 Thread Duncan Booth
Steven D'Aprano ste...@remove.this.cybersource.com.au wrote:

 On Wed, 03 Mar 2010 19:54:52 +0100, mk wrote:
 
 Hello,
 
 So I set out to write generic singleton, i.e. the one that would do a
 singleton with attributes of specified class. At first:
 
 Groan. What is it with the Singleton design pattern? It is one of the 
 least useful design patterns, and yet it's *everywhere* in Java and C++ 
 world.

It is also *everywhere* in the Python world. Unlike Java and C++, Python 
even has its own built-in type for singletons.

If you want a singleton in Python use a module.

So the OP's original examples become:

--- file singleton.py ---
foo = {}
bar = []

--- other.py ---
from singleton import foo as s1
from singleton import foo as s2
from singleton import bar as s3
from singleton import bar as s4

... and then use them as you wish.



-- 
Duncan Booth http://kupuguy.blogspot.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: memory usage, temporary and otherwise

2010-03-04 Thread Duncan Booth
mk mrk...@gmail.com wrote:

 Hm, apparently Python didn't spot that 'spam'*10 in a's values is really 
 the same string, right?

If you want it to spot that then give it a hint that it should be looking 
for identical strings:

  a={}
  for i in range(1000):
... a[i]=intern('spam'*10)

should reduce your memory use somewhat.
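
(For readers on Python 3: the intern() builtin moved to sys.intern(); the 
same hint, as a minimal runnable sketch:)

```python
import sys

a = {}
for i in range(1000):
    # Interning makes all 1000 values the *same* string object,
    # instead of 1000 equal but distinct copies.
    a[i] = sys.intern('spam' * 10)

assert a[0] is a[999]
```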

-- 
Duncan Booth http://kupuguy.blogspot.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: taking python enterprise level?...

2010-03-04 Thread mk

Philip Semanchuk wrote:
Well OK, but that's a very different argument. Yes, joins can be 
expensive. They're often still the best option, though. The first step 
people usually take to get away from joins is denormalization which can 
improve SELECT performance at the expense of slowing down INSERTs, 
UPDATEs, and DELETEs, not to mention complicating one's code and data 
model. Is that a worthwhile trade? 


I'd say that in more than 99% of situations: NO.

More than that: if I haven't normalized my data as it should have been 
normalized, I wouldn't be able to do complicated querying that I really, 
really have to be able to do due to business logic. A few of my queries 
have a few hundred lines each with many sub-queries and multiple 
many-to-many joins: I *dread the thought* what would happen if I had to 
reliably do it in a denormalized db and still ensure data integrity 
across all the business logic contexts. And performance is still more 
than good enough: so there's no point for me, in the contexts I 
normally work in, to denormalize data at all.


It's just interesting for me to see what happens in that 1% of situations.

Depends on the application. As I 
said, sometimes the cure is worse than the disease.


Don't worry about joins until you know they're a problem. As Knuth said, 
premature optimization is the root of all evil.


Sure -- the cost of joins is just interesting to me as a 'corner case'. 
I don't have datasets large enough for this to matter in the first place 
(and I probably won't have them that huge).



PS - Looks like you're using Postgres -- excellent choice. I miss using it.


If you can, I'd recommend using SQLAlchemy layer on top of 
Oracle/Mysql/Sqlite, if that's what you have to use: this *largely* 
insulates you from the problems below and it does the job of translating 
into a peculiar dialect very well. For my purposes, SQLAlchemy worked 
wonderfully: it's very flexible, it has middle-level sql expression 
language if normal querying is not flexible enough (and normal querying 
is VERY flexible), it has a ton of nifty features like autoloading and 
rarely fails because of some lower-level DB quirk AND its high-level object 
syntax is so similar to SQL that you quickly & intuitively grasp it.


(and if you have to/prefer writing some query in low-level SQL, as I 
have done a few times, it's still easy to make SQLAlchemy slurp the 
result into objects provided you ensure there are all of the necessary 
columns in the query result)


Regards,
mk

--
http://mail.python.org/mailman/listinfo/python-list


WANTED: A good name for the pair (args, kwargs)

2010-03-04 Thread Jonathan Fine

Hi

We can call a function fn using
val = fn(*args, **kwargs)

I'm looking for a good name for the pair (args, kwargs).  Any suggestions?

Here's my use case:
def doit(fn , wibble, expect):
args, kwargs = wibble
actual = fn(*args, **kwargs)
if actual != expect:
# Something has gone wrong.
pass

This is part of a test runner.

For now I'll use argpair, but if anyone has a better idea, I'll use it.
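
(A runnable sketch of the runner with that name - the example calls are 
illustrative:)

```python
def doit(fn, argpair, expect):
    # argpair is the (args, kwargs) pair discussed above.
    args, kwargs = argpair
    actual = fn(*args, **kwargs)
    return actual == expect

assert doit(max, (([3, 1, 2],), {}), 3)
assert doit(sorted, (([2, 1],), {'reverse': True}), [2, 1])
```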

--
Jonathan
--
http://mail.python.org/mailman/listinfo/python-list



Re: How to login https sever with inputing account name and password?

2010-03-04 Thread Michael Rudolf

Am 04.03.2010 11:38, schrieb Karen Wang:

Hi all,

I want to use python to access to https server, like
https://212.218.229.10/chinatest/;

If open it from IE, will see the pop-up login windows like this



I tried several ways but always only get page for HTTP Error 401.2 -
Unauthorized error. ( myusername and mypassword are all correct)

Below is my code:

import urllib2

values = {

 'user' : myusername,

pass' : mypassword }

data = urllib2.urlencode(values)

t = urllib2.urlopen('https://212.218.229.10/chinatest/',data)

print t.read()

where I am wrong ?


AFAIR urlopen() expects the password to be Base64-encoded, not 
urlencoded.


You might also need to add an AUTH line. But this does not matter much: 
as there is more than one AUTH method you'd have to implement them all, 
but fortunately urllib2.HTTPBasicAuthHandler and 
urllib2.HTTPDigestAuthHandler exist.


So use them:

import urllib2
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, theurl, username, password)
authhandler = urllib2.HTTPBasicAuthHandler(passman)
opener = urllib2.build_opener(authhandler)
urllib2.install_opener(opener)
pagehandle = urllib2.urlopen(theurl)

(taken from 
http://www.voidspace.org.uk/python/articles/authentication.shtml )
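
(For Python 3 readers: urllib2 was merged into urllib.request; the same 
sketch there, with placeholder URL and credentials:)

```python
import urllib.request

theurl = 'https://212.218.229.10/chinatest/'   # placeholder target
passman = urllib.request.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, theurl, 'myusername', 'mypassword')
authhandler = urllib.request.HTTPBasicAuthHandler(passman)
opener = urllib.request.build_opener(authhandler)
# urllib.request.install_opener(opener); urllib.request.urlopen(theurl)
# would then answer a 401 Basic-auth challenge automatically.
assert isinstance(opener, urllib.request.OpenerDirector)
```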


Further reference:
http://www.python.org/doc/2.5.2/lib/module-urllib2.html
http://www.python.org/doc/2.5.2/lib/urllib2-examples.html

HTH,
Michael
--
http://mail.python.org/mailman/listinfo/python-list


Re: WANTED: A good name for the pair (args, kwargs)

2010-03-04 Thread Tim Chase

Jonathan Fine wrote:

We can call a function fn using
 val = fn(*args, **kwargs)

I'm looking for a good name for the pair (args, kwargs).  Any suggestions?

For now I'll use argpair, but if anyone has a better idea, I'll use it.


In the legacy of C and Java (okay, that doesn't carry _much_ 
weight with me), I'd go with varargs to refer to the pair of 
(args, kwargs)


-tkc



--
http://mail.python.org/mailman/listinfo/python-list


Re: How to login https sever with inputing account name and password?

2010-03-04 Thread Shashwat Anand
You may also want to look into mechanize module.

On Thu, Mar 4, 2010 at 6:11 PM, Michael Rudolf spamfres...@ch3ka.de wrote:

 Am 04.03.2010 11:38, schrieb Karen Wang:

  Hi all,

 I want to use python to access to https server, like
 https://212.218.229.10/chinatest/;

 If open it from IE, will see the pop-up login windows like this



 I tried several ways but always only get page for HTTP Error 401.2 -
 Unauthorized error. ( myusername and mypassword are all correct)

 Below is my code:

 import urllib2

 values = {

 'user' : myusername,

 pass' : mypassword }

 data = urllib2.urlencode(values)

 t = urllib2.urlopen('https://212.218.229.10/chinatest/',data)

 print t.read()

 where I am wrong ?


 AFAIR does urlopen() expect the password to be Base64-encoded, not
 urlencoded.

 You might also need to add an AUTH-Line. But this all does not matter, as
 there is more than one AUTH-Method you'd have to implement them all, but
 fortunately urllib2.HTTPBasicAuthHandler and urllib2.HTTPBasicAuthHandler
 exist.

 So use them:

 import urllib2
 passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
 passman.add_password(None, theurl, username, password)
 authhandler = urllib2.HTTPBasicAuthHandler(passman)
 opener = urllib2.build_opener(authhandler)
 urllib2.install_opener(opener)
 pagehandle = urllib2.urlopen(theurl)

 (taken from
 http://www.voidspace.org.uk/python/articles/authentication.shtml )

 Further reference:
 http://www.python.org/doc/2.5.2/lib/module-urllib2.html
 http://www.python.org/doc/2.5.2/lib/urllib2-examples.html

 HTH,
 Michael
 --
 http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: NoSQL Movement?

2010-03-04 Thread Duncan Booth
Avid Fan m...@privacy.net wrote:

 Jonathan Gardner wrote:
 
 
 I see it as a sign of maturity with sufficiently scaled software that
 they no longer use an SQL database to manage their data. At some 
point
 in the project's lifetime, the data is understood well enough that 
the
 general nature of the SQL database is unnecessary.
 
 
 I am really struggling to understand this concept.
 
 Is it the normalised table structure that is in question or the query 
 language?
 
 Could you give some sort of example of where SQL would not be the way 
to 
 go.   The only things I can think of a simple flat file databases.

Probably one of the best known large non-sql databases is Google's 
bigtable. Xah Lee of course dismissed this, as he decided to write about 
how bad non-sql databases are without actually looking at the prime example.

If you look at some of the uses of bigtable you may begin to understand 
the tradeoffs that are made with sql. When you use bigtable you have 
records with fields, and you have indices, but there are limitations on 
the kinds of queries you can perform: in particular you cannot do joins, 
but more subtly there is no guarantee that the index is up to date (so 
you might miss recent updates or even get data back from a query when 
the data no longer matches the query).

By sacrificing some of SQL's power, Google get big benefits: namely 
updating data is a much more localised option. Instead of an update 
having to lock the indices while they are updated, updates to different 
records can happen simultaneously possibly on servers on the opposite 
sides of the world. You can have many, many servers all using the same 
data although they may not have identical or completely consistent views 
of that data.

Bigtable impacts on how you store the data: for example you need to 
avoid reducing data to normal form (no joins!), its much better and 
cheaper just to store all the data you need directly in each record. 
Also aggregate values need to be at least partly pre-computed and stored 
in the database.

Boiling this down to a concrete example, imagine you wanted to implement 
a system like twitter. Think carefully about how you'd handle a 
sufficiently high rate of new tweets reliably with a sql database. Now 
think how you'd do the same thing with bigtable: most tweets don't 
interact, so it becomes much easier to see how the load is spread across 
the servers: each user has the data relevant to them stored near the 
server they are using and index changes propagate gradually to the rest 
of the system.
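
(A toy illustration of that storage style, with hypothetical field names: 
each record carries embedded copies and pre-computed aggregates, so reads 
need no join:)

```python
# Denormalized, bigtable-style record: the author's data is copied into
# each tweet and aggregates are stored, not computed at query time.
tweet = {
    'tweet_id': 1,
    'text': 'hello world',
    'author': {'user_id': 42, 'name': 'alice'},   # embedded, not referenced
    'author_follower_count': 10,                  # pre-computed aggregate
}

# A read is a single record fetch; no second table is consulted.
assert tweet['author']['name'] == 'alice'
```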

-- 
Duncan Booth http://kupuguy.blogspot.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Instance factory - am I doing this right?

2010-03-04 Thread eb303
On Mar 3, 6:41 pm, Laszlo Nagy gand...@shopzeus.com wrote:
 This is just an interesting code pattern that I have recently used:

 class CacheStorage(object):
     """Generic cache storage class."""
     @classmethod
     def get_factory(cls, *args, **kwargs):
         """Create factory for a given set of cache storage creation
         parameters."""
         class CacheStorageFactory(cls):
             _construct_args = args
             _construct_kwargs = kwargs
             def __init__(self):
                 cls.__init__(self,
                     *self._construct_args, **self._construct_kwargs)
         return CacheStorageFactory

 Then, I can create subclasses like:

 class GdbmCacheStorage(CacheStorage):
     """Gdbm cache storage class.

     @param basedir: Base directory where gdbm files should be stored.
     @param basename: Base name for logging and creating gdbm files.
     """
     def __init__(self, basedir, basename):
           . blablabla place initialization code here

 class MemoryCacheStorage(CacheStorage):
     """In-Memory cache storage class.

     Please note that keys and values are always marshal-ed.
     E.g. when you cache an object, it makes a deep copy.
     """
     def __init__(self):
           . blablabla place initialization code here

 And the finally, I can create a factory that can create cache storage
 instances for storing data in gdbm in a given directory:

 cache_factory = GdbmCacheStorage.get_factory("~gandalf/db", "test")
 print cache_factory # <class '__main__.CacheStorageFactory'>
 print cache_factory()

 OR I can create a factory that can create instances for storing data in
 memory:

 cache_factory = MemoryCacheStorage.get_factory()
 print cache_factory # <class '__main__.CacheStorageFactory'>
 print cache_factory() # <__main__.CacheStorageFactory object at 0x8250c6c>

 Now, here is my question. Am I right in doing this? Or are there better
 language tools to be used in Python for the same thing? This whole thing
 about creating factories looks a bit odd for me. Is it Pythonic enough?

 Thanks,

    Laszlo

It seems you're trying to reinvent functools.partial:

class GdbmCacheStorage(object):
  def __init__(self,basedir,basename):
...
cache_factory = functools.partial(GdbmCacheStorage, "~gandalf/db",
"test")
print cache_factory()

Is it what you're after? I didn't see the point in creating a cached
factory for MemoryCacheStorage though, since it has no constructor
parameters to cache anyway. So doing 'cache_factory =
MemoryCacheStorage' in your example would do exactly the same thing as
what you did.
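
(A fully runnable version of that sketch - the class body here is a 
stand-in for the real initialization code:)

```python
import functools

class GdbmCacheStorage(object):
    # Stand-in for the real class; just records its constructor args.
    def __init__(self, basedir, basename):
        self.basedir = basedir
        self.basename = basename

# partial() caches the constructor arguments, exactly like the
# hand-rolled get_factory() classmethod.
cache_factory = functools.partial(GdbmCacheStorage, '~gandalf/db', 'test')
store = cache_factory()

assert isinstance(store, GdbmCacheStorage)
assert store.basedir == '~gandalf/db'
assert store.basename == 'test'
```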

HTH
 - Eric -
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Method / Functions - What are the differences?

2010-03-04 Thread John Posner

On 3/4/2010 5:59 AM, Bruno Desthuilliers wrote:


I have two small ideas for improvement: - Swap the first two
paragraphs. First say what it is, and then give the motivation.


Mmm... As far as I'm concerned, I like it the way its. John ?


I think it doesn't make very much difference. But in the end, I believe 
it's the student, not the teacher, who gets to decide what's comprehensible.


What *does* make a difference, IMHO, is getting more people to 
participate in the process of shining lights into Python's darker 
corners. That's why I encouraged (and still encourage) Eike to roll up 
the sleeves and wade into these waters.


Metaphor-mixingly yours,
John
--
http://mail.python.org/mailman/listinfo/python-list


SQLObject 0.12.2

2010-03-04 Thread Oleg Broytman
Hello!

I'm pleased to announce version 0.12.2, a bugfix release of branch 0.12
of SQLObject.


What is SQLObject
=

SQLObject is an object-relational mapper.  Your database tables are described
as classes, and rows are instances of those classes.  SQLObject is meant to be
easy to use and quick to get started with.

SQLObject supports a number of backends: MySQL, PostgreSQL, SQLite,
Firebird, Sybase, MSSQL and MaxDB (also known as SAPDB).


Where is SQLObject
==

Site:
http://sqlobject.org

Development:
http://sqlobject.org/devel/

Mailing list:
https://lists.sourceforge.net/mailman/listinfo/sqlobject-discuss

Archives:
http://news.gmane.org/gmane.comp.python.sqlobject

Download:
http://cheeseshop.python.org/pypi/SQLObject/0.12.2

News and changes:
http://sqlobject.org/News.html


What's New
==

News since 0.12.1
-

* Fixed a bug in inheritance - if creation of the row failed and if the
  connection is not a transaction and is in autocommit mode - remove
  parent row(s).

* Do not set _perConnection flag if get() or _init() is passed the same
  connection; this is often the case with select().

For a more complete list, please see the news:
http://sqlobject.org/News.html

Oleg.
-- 
 Oleg Broytmanhttp://phd.pp.ru/p...@phd.pp.ru
   Programmers don't die, they just GOSUB without RETURN.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Passing FILE * types using ctypes

2010-03-04 Thread Francesco Bochicchio
On Mar 4, 12:50 am, Zeeshan Quireshi zeeshan.quire...@gmail.com
wrote:
 Hello, I'm using ctypes to wrap a library i wrote. I am trying to pass
 it a FILE *pointer, how do i open a file in Python and convert it to a
 FILE *pointer. Or do i have to call the C library using ctypes first,
 get the pointer and then pass it to my function.

 Also, is there any automated way to convert c struct and enum
 definitions to ctypes data types.

 Zeeshan

Python file objects have a method fileno() which returns the 'C file
descriptor', i.e. the number used by low level IO in python as well as
in C.
I would use this as the interface between python and C, and then, in the
C function, use fdopen to get a FILE * for an already open file for
which you have a file descriptor.
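
(A pure-Python sketch of that round trip, with os.fdopen playing the role 
of C's fdopen():)

```python
import os
import tempfile

f = tempfile.TemporaryFile()
fd = f.fileno()                    # OS-level descriptor a C function would receive
g = os.fdopen(os.dup(fd), 'r+b')   # like C's fdopen(), on a dup'ed descriptor

g.write(b'hello')                  # writes through the second stream...
g.flush()
g.seek(0)
data = g.read()                    # ...are visible when read back
assert data == b'hello'

g.close()
f.close()
```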

If you don't want to change the C interface, you could try using fdopen
in python by loading the standard C library and using ctypes
to call the function. (I tried briefly but always got 0 from fdopen.)

But if you can change the C code, why not pass the file name instead? The
idea of opening the file in python and managing it in C feels a bit
icky ...

Ciao

FB

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: WANTED: A good name for the pair (args, kwargs)

2010-03-04 Thread Paul Rubin
Jonathan Fine j.f...@open.ac.uk writes:
 I'm looking for a good name for the pair (args, kwargs).  Any suggestions?

 Here's my use case:
 def doit(fn , wibble, expect):
 args, kwargs = wibble
 actual = fn(*args, **kwargs)

I think this may have been broken in 3.x, but in 2.6 the compiler will
unpack directly if you put a tuple structure in the arg list:

 def doit(fn, (args, kwargs), expect):
 actual = fn(*args, **kwargs)

Otherwise I'd just say all_args or some such.  Or just args which
you unpack into pos_args (positional args) and kw_args.
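
(Indeed, tuple parameters are gone in 3.x - PEP 3113 removed them; a quick 
check that the 2.x spelling no longer compiles:)

```python
# In Python 3, "def f(a, (b, c)):" is a SyntaxError, so the pair must
# be unpacked inside the function body instead.
src = "def f(a, (b, c)): return b"
try:
    compile(src, '<test>', 'exec')
    is_syntax_error = False
except SyntaxError:
    is_syntax_error = True

assert is_syntax_error
```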
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: NoSQL Movement?

2010-03-04 Thread ccc31807
On Mar 3, 4:55 pm, toby t...@telegraphics.com.au wrote:
   where you have to store data and

 relational data

Data is neither relational nor unrelational. Data is data.
Relationships are an artifact, something we impose on the data.
Relations are for human convenience, not something inherent in the
data itself.

  perform a large number of queries.

 Why does the number matter?

Have you ever had to make a large number of queries to an XML
database? In some ways, an XML database is the counterpart to a
relational database in that the data descriptions constitute the
relations. However, since the search is against the XML elements, and
you can't construct indices for XML databases the way you can
with relational databases, a large search can take much longer than
you might expect.

CC.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Working group for Python CPAN-equivalence?

2010-03-04 Thread John Gabriele
On Mar 3, 9:11 pm, John Bokma j...@castleamber.com wrote:
 Philip Semanchuk phi...@semanchuk.com writes:

  In other words, if I was a Perl user under Ubuntu would I use
  the pkg manager to add a Perl module, or CPAN, or would both work?

 Both would work, but I would make very sure to use a separate
 installation directory for modules installed via CPAN.

What's worked best for me is: use *only* the apt system to install
modules into your system Perl (`/usr/bin/perl`) and use *only* cpan/
cpanp/cpanm to install modules into *your own* Perl (for example, you
may have installed into `/opt`).

 AFAIK there are also programs that pack CPAN modules/bundles into
 something the package manager can use to install.

Right. If you really want to install a module for which there's no
Debian package, and you don't want to install your own Perl, this is a
good route to take.

Incidentally, this is the same way I recommend handling the situation
with Python: Use only aptitude to install packages for your system
Python, and use only pip to install packages into your own Python
(which you built and installed elsewhere, ex., `/opt/py-2.6.4`).

---John
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: NoSQL Movement?

2010-03-04 Thread mk

Duncan Booth wrote:

If you look at some of the uses of bigtable you may begin to understand 
the tradeoffs that are made with sql. When you use bigtable you have 
records with fields, and you have indices, but there are limitations on 
the kinds of queries you can perform: in particular you cannot do joins, 
but more subtly there is no guarantee that the index is up to date (so 
you might miss recent updates or even get data back from a query when 
the data no longer matches the query).


Hmm, I do understand that bigtable is used outside of traditional 
'enterprisey' contexts, but suppose you did want to do an equivalent of 
join; is it at all practical or even possible?


I guess when you're forced to use denormalized data, you have to 
simultaneously update equivalent columns across many tables yourself, 
right? Or is there some machinery to assist in that?


By sacrificing some of SQL's power, Google get big benefits: namely 
updating data is a much more localised option. Instead of an update 
having to lock the indices while they are updated, updates to different 
records can happen simultaneously possibly on servers on the opposite 
sides of the world. You can have many, many servers all using the same 
data although they may not have identical or completely consistent views 
of that data.


And you still have the global view of the table spread across, say, 2 
servers, one located in Australia, second in US?


Bigtable impacts on how you store the data: for example you need to 
avoid reducing data to normal form (no joins!), its much better and 
cheaper just to store all the data you need directly in each record. 
Also aggregate values need to be at least partly pre-computed and stored 
in the database.


So you basically end up with a few big tables or just one big table really?

Suppose on top of the 'tweets' table you have a 'dweebs' table, and tweets 
and dweebs sometimes do interact. How would you find such interacting pairs? 
Would you say "give me some tweets" to the tweets table, extract all the 
dweeb_id keys from those tweets, and then retrieve all dweebs from the 
dweebs table?


Boiling this down to a concrete example, imagine you wanted to implement 
a system like twitter. Think carefully about how you'd handle a 
sufficiently high rate of new tweets reliably with a sql database. Now 
think how you'd do the same thing with bigtable: most tweets don't 
interact, so it becomes much easier to see how the load is spread across 
the servers: each user has the data relevant to them stored near the 
server they are using and index changes propagate gradually to the rest 
of the system.


I guess that in a purely imaginary example, you could also combine two 
databases? Say, a tweet bigtable db contains tweets, but with a column 
holding a classical customer_id key that is also a key in a traditional 
RDBMS, referencing a particular customer?


Regards,
mk


--
http://mail.python.org/mailman/listinfo/python-list


Re: taking python enterprise level?...

2010-03-04 Thread Tim Wintle
On Wed, 2010-03-03 at 20:39 +0100, mk wrote:
 Hello Tim,
 
 Pardon the questions but I haven't had the need to use denormalization 
 yet, so:

 IOW you basically merged the tables like follows?
 
 CREATE TABLE projects (
  client_id BIGINT NOT NULL,
  project_id BIGINT NOT NULL,
  cost INT,
  date DATETIME,
  INDEX(client_id, project_id, date)
 );

Yup

 From what you write further in the mail I conclude that you have not 
 eliminated the first table, just made table projects look like I wrote 
 above, right? (and used stored procedures to make sure that both tables 
 contain the relevant data for client_id and project_id columns in both 
 tables)

Yup

 Have you had some other joins on denormalized keys? E.g., how would the 
 join of a hypothetical TableB with projects on projects.client_id 
 behave with such big tables? (Because I assume that you obviously can't 
 denormalize absolutely everything, so this implies the need to do 
 some joins on denormalized columns like client_id.)

For these joins (for SELECT statements) this _can_ end up running faster
- of course all of this depends on what kind of queries you normally end
up getting and the distribution of data in the indexes.

I've never written anything that started out with a schema like this,
but several have ended up getting denormalised as the projects have
matured and query behaviour has been tested
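As a hedged illustration of the point (using in-memory SQLite as a stand-in for MySQL; table and column names are taken from the schema above, the data is made up), this is the kind of aggregate query the composite (client_id, project_id, date) index serves directly, without joining back to a separate client-to-project mapping table:

```python
import sqlite3

# Toy version of the denormalised schema from the thread.
conn = sqlite3.connect(':memory:')
conn.execute("""CREATE TABLE projects (
                    client_id INTEGER NOT NULL,
                    project_id INTEGER NOT NULL,
                    cost INTEGER,
                    date TEXT)""")
conn.execute("CREATE INDEX proj_idx ON projects (client_id, project_id, date)")
conn.executemany("INSERT INTO projects VALUES (?, ?, ?, ?)",
                 [(1, 10, 500, '2010-01-15'),
                  (1, 10, 700, '2010-02-20'),
                  (1, 11, 300, '2010-01-10'),
                  (2, 20, 900, '2010-01-05')])

# Because client_id leads the composite index, this per-client aggregate
# can be answered from the one denormalised table alone.
rows = conn.execute("""SELECT project_id, SUM(cost)
                       FROM projects
                       WHERE client_id = ? AND date >= ?
                       GROUP BY project_id""", (1, '2010-01-01')).fetchall()
```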

  assuming you can access the first mapping anyway -
 
 ? I'm not clear on what you mean here.

I'm referring to not eliminating the first table, as you concluded

 
 Regards,
 mk
 

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: taking python enterprise level?...

2010-03-04 Thread Tim Wintle
On Wed, 2010-03-03 at 16:23 -0500, D'Arcy J.M. Cain wrote:
 On Wed, 03 Mar 2010 20:39:35 +0100
 mk mrk...@gmail.com wrote:
   If you denormalise the table, and update the first index to be on
   (client_id, project_id, date) it can end up running far more quickly -
 
 Maybe.  Don't start with denormalization.  Write it properly and only
 consider changing if profiling suggests that that is your bottleneck.

Quite - and I'd add: cache reads as much in front-end machines as is
permissible in your use case before considering denormalisation.

 With a decent database engine and proper design it will hardly ever be.

I completely agree - I'm simply responding to the request for an example
where denormalisation may be a good idea.

Tim

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: A scopeguard for Python

2010-03-04 Thread Robert Kern

On 2010-03-03 18:49 PM, Alf P. Steinbach wrote:

* Robert Kern:

On 2010-03-03 15:35 PM, Alf P. Steinbach wrote:

* Robert Kern:

On 2010-03-03 13:32 PM, Alf P. Steinbach wrote:

* Robert Kern:

On 2010-03-03 11:18 AM, Alf P. Steinbach wrote:

* Robert Kern:

On 2010-03-03 09:56 AM, Alf P. Steinbach wrote:

* Mike Kent:

What's the compelling use case for this vs. a simple try/finally?


if you thought about it you would mean a simple try/else.
finally is
always executed. which is incorrect for cleanup


Eh? Failed execution doesn't require cleanup? The example you
gave is
definitely equivalent to the try: finally: that Mike posted.


Sorry, that's incorrect: it's not.

With correct code (mine) cleanup for action A is only performed when
action A succeeds.

With incorrect code cleanup for action A is performed when A fails.


Oh?

$ cat cleanup.py

class Cleanup:
    def __init__( self ):
        self._actions = []

    def call( self, action ):
        assert( callable( action ) )
        self._actions.append( action )

    def __enter__( self ):
        return self

    def __exit__( self, x_type, x_value, x_traceback ):
        while( len( self._actions ) != 0 ):
            try:
                self._actions.pop()()
            except BaseException as x:
                raise AssertionError( "Cleanup: exception during cleanup" )

def print_(x):
    print x

with Cleanup() as at_cleanup:
    at_cleanup.call(lambda: print_("Cleanup executed without an exception."))

with Cleanup() as at_cleanup:


*Here* is where you should

1) Perform the action for which cleanup is needed.

2) Let it fail by raising an exception.



at_cleanup.call(lambda: print_("Cleanup execute with an exception."))
raise RuntimeError()


With an exception raised here cleanup should of course be performed.

And just in case you didn't notice: the above is not a test of the
example I gave.



$ python cleanup.py
Cleanup executed without an exception.
Cleanup execute with an exception.
Traceback (most recent call last):
File "cleanup.py", line 28, in <module>
raise RuntimeError()
RuntimeError


The actions are always executed in your example,


Sorry, that's incorrect.


Looks like it to me.


I'm sorry, but you're

1) not testing my example which you're claiming that you're
testing, and


Then I would appreciate your writing a complete, runnable example that
demonstrates the feature you are claiming. Because it's apparently not
ensur[ing] "some desired cleanup at the end of a scope, even when the
scope is exited via an exception" that you talked about in your
original post.

Your sketch of an example looks like mine:

with Cleanup() as at_cleanup:
# blah blah
chdir( somewhere )
at_cleanup.call( lambda: chdir( original_dir ) )
# blah blah

The cleanup function gets registered immediately after the first
chdir() and before the second blah blah. Even if an exception is
raised in the second blah blah, then the cleanup function will still
run. This would be equivalent to a try: finally:

# blah blah #1
chdir( somewhere )
try:
# blah blah #2
finally:
chdir( original_dir )


Yes, this is equivalent code.

The try-finally that you earlier claimed was equivalent, was not.


Okay, but just because of the position of the chdir(), right?


Yes, since it yields different results.


and not a try: else:

# blah blah #1
chdir( somewhere )
try:
# blah blah #2
else:
chdir( original_dir )


This example is however meaningless except as misdirection. There are
infinitely many constructs that include try-finally and try-else, that
the with-Cleanup code is not equivalent to. It's dumb to show one such.

Exactly what are you trying to prove here?


I'm just showing you what I thought you meant when you told Mike that
he should have used a try/else instead of try/finally.


Your earlier claims are still incorrect.


Now, I assumed that the behavior with respect to exceptions occurring
in the first blah blah weren't what you were talking about because
until the chdir(), there is nothing to clean up.

There is no way that the example you gave translates to a try: else:
as you claimed in your response to Mike Kent.


Of course there is.

Note that Mike wrapped the action A within the 'try':


<code author="Mike" correct="False">
original_dir = os.getcwd()
try:
    os.chdir(somewhere)
    # Do other stuff
finally:
    os.chdir(original_dir)
    # Do other cleanup
</code>


The 'finally' he used, shown above, yields incorrect behavior.

Namely cleanup always, while 'else', in that code, can yield correct
behavior /provided/ that it's coded correctly:


<code author="Alf" correct="ProbablyTrue" disclaimer="off the cuff">
original_dir = os.getcwd()
try:
    os.chdir(somewhere)
except Whatever:
    # whatever, e.g. logging
    raise
else:
    try:
        # Do other stuff
    finally:
        os.chdir(original_dir)
        # Do other cleanup
</code>


Ah, okay. Now we're getting somewhere. Now, please note that you did
not have any except: handling in your original example. So Mike made a
try: finally: example to attempt to match the semantics of your code.
When you tell him that he should 'mean a simple try/else. finally
is always executed. which is incorrect for 

Re: Working group for Python CPAN-equivalence?

2010-03-04 Thread John Gabriele
On Mar 3, 5:30 pm, Ben Finney ben+pyt...@benfinney.id.au wrote:

 Terry Reedy tjre...@udel.edu writes:

  On 3/3/2010 12:05 PM, John Nagle wrote:

   CPAN enforces standard organization on packages. PyPi does not.

 This is, I think, something we don't need as much in Python; there is a
 fundamental difference between Perl's deeply nested namespace hierarchy
 and Python's inherently flat hierarchy.

What do you think that difference is? Both use nested namespaces. My
understanding is that if 2 different dists contain like-named
packages, and if a user installs both of them, they all just go to the
same place in the user's site-packages dir.

One difference I see is that the CPAN has a *lot* of dist's which
constitute a very large number of modules. Keeping them organized by a
semi-agreed-upon hierarchy of packages seems like a great
organizational tool.

---John
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: A scopeguard for Python

2010-03-04 Thread Alf P. Steinbach

* Jean-Michel Pichavant:

Alf P. Steinbach wrote:
 From your post, the scope guard technique is used to ensure some 
desired cleanup at the end of a scope, even when the scope is exited 
via an exception. This is precisely what the try: finally: syntax is 
for. 


You'd have to nest it. That's ugly. And more importantly, now two 
people in this thread (namely you and Mike) have demonstrated that 
they do not grok the try functionality and manage to write incorrect 
code, even arguing that it's correct when informed that it's not, so 
it's a pretty fragile construct, like goto.


You want to execute some cleanup when things go wrong, use try except. 
You want to do it when things go right, use try else. You want to 
cleanup no matter what happen, use try finally.


There is no need of any Cleanup class, except for some technical 
alternative  concern.


Have you considered that your argument applies to the with construct?

You have probably not realized that.

But let me force it on you: when would you use with?

Check if that case is covered by your argument above.

Now that you've been told about the with angle, don't you think it's a kind of 
weakness in your argument that it calls for removing with from the language?


I recommend that you think about why your argument is invalid.

Or, as I like to say, why your argument is completely bogus.


Cheers  hth.,

- Alf
--
http://mail.python.org/mailman/listinfo/python-list


loop over list and process into groups

2010-03-04 Thread Sneaky Wombat
[ {'vlan_or_intf': 'VLAN2021'},
 {'vlan_or_intf': 'Interface'},
 {'vlan_or_intf': 'Po1'},
 {'vlan_or_intf': 'Po306'},
 {'vlan_or_intf': 'VLAN2022'},
 {'vlan_or_intf': 'Interface'},
 {'vlan_or_intf': 'Gi7/33'},
 {'vlan_or_intf': 'Po1'},
 {'vlan_or_intf': 'Po306'},
 {'vlan_or_intf': 'VLAN2051'},
 {'vlan_or_intf': 'Interface'},
 {'vlan_or_intf': 'Gi9/6'},
 {'vlan_or_intf': 'VLAN2052'},
 {'vlan_or_intf': 'Interface'},
 {'vlan_or_intf': 'Gi9/6'},]

I want it to be converted to:

[{'2021':['Po1','Po306']},{'2022':['Gi7/33','Po1','Po306']},etc etc]

I was going to write a def to loop through and look for certain pre-
compiled regexs, and then put them in a new dictionary and append to a
list, but I'm having trouble thinking of a good way to capture each
dictionary.  Each dictionary will have a key that is the vlan and the
value will be a list of interfaces that participate in that vlan.
Each list will be variable, many containing only one interface and
some containing many interfaces.

I thought about using itertools, but i only use that for fixed data.
I don't know of a good way to loop over variably sized data.  I was
wondering if anyone had any ideas about a good way to convert this
list or dictionary into the right format that I need.  The solution I
come up with will most likely be ugly and error-prone, so I thought
I'd ask this python list while I work.  Hopefully I'll learn a better way
to solve this problem.

Thanks!

I also have the data in a list,

[ 'VLAN4065',
 'Interface',
 'Gi9/6',
 'Po2',
 'Po3',
 'Po306',
 'VLAN4068',
 'Interface',
 'Gi9/6',
 'VLAN4069',
 'Interface',
 'Gi9/6',]
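For what it's worth, here is a minimal sketch (not from the thread; the helper name is invented) that folds the flat list into the requested shape without regexes, keyed on the 'VLAN' prefix and skipping the 'Interface' header tokens:

```python
def group_vlans(items):
    # Each group starts at a 'VLAN<n>' token; the following 'Interface'
    # header is skipped; everything else is an interface in that vlan.
    result = []
    current = None
    for item in items:
        if item.startswith('VLAN'):
            current = []
            result.append({item[len('VLAN'):]: current})
        elif item != 'Interface' and current is not None:
            current.append(item)
    return result

sample = ['VLAN4065', 'Interface', 'Gi9/6', 'Po2', 'Po3', 'Po306',
          'VLAN4068', 'Interface', 'Gi9/6',
          'VLAN4069', 'Interface', 'Gi9/6']
grouped = group_vlans(sample)
```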
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: A scopeguard for Python

2010-03-04 Thread Robert Kern

On 2010-03-04 09:48 AM, Alf P. Steinbach wrote:

* Jean-Michel Pichavant:

Alf P. Steinbach wrote:

From your post, the scope guard technique is used to ensure some
desired cleanup at the end of a scope, even when the scope is exited
via an exception. This is precisely what the try: finally: syntax
is for.


You'd have to nest it. That's ugly. And more importantly, now two
people in this thread (namely you and Mike) have demonstrated that
they do not grok the try functionality and manage to write incorrect
code, even arguing that it's correct when informed that it's not, so
it's a pretty fragile construct, like goto.


You want to execute some cleanup when things go wrong, use try except.
You want to do it when things go right, use try else. You want to
cleanup no matter what happen, use try finally.

There is no need of any Cleanup class, except for some technical
alternative concern.


Have you considered that your argument applies to the with construct?

You have probably not realized that.

But let me force it on you: when would you use with?


When there is a specific context manager that removes the need for boilerplate.


Check if that case is covered by your argument above.

Now that you've been told about the with angle, don't you think it's a
kind of weakness in your argument that it calls for removing with from
the language?


No, it only argues that with Cleanup(): is supernumerary.

--
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth.
  -- Umberto Eco

--
http://mail.python.org/mailman/listinfo/python-list


Re: My four-yorkshireprogrammers contribution

2010-03-04 Thread MRAB

Gregory Ewing wrote:

MRAB wrote:


Mk14 from Science of Cambridge, a kit with hex keypad and 7-segment
display, which I had to solder together, and also make my own power
supply. I had the extra RAM and the I/O chip, so that's 256B (including
the memory used by the monitor) + 256B additional RAM + 128B more in the
I/O chip.


Luxury! Mine was a Miniscamp, based on a design published in
Electronics Australia in the 70s. 256 bytes RAM, 8 switches
for input, 8 LEDs for output. No ROM -- program had to be
toggled in each time.

Looked something like this:

  http://oldcomputermuseum.com/mini_scamp.html

except that mine wasn't built from a kit and didn't look
quite as professional as that one.


[snip]
By the standards of just a few years later, that's not so much a
microcomputer as a nanocomputer!

I was actually interested in electronics at the time, and it was such
things as the Mk14 which led me into computing.
--
http://mail.python.org/mailman/listinfo/python-list


Re: A scopeguard for Python

2010-03-04 Thread Jean-Michel Pichavant

Alf P. Steinbach wrote:

* Jean-Michel Pichavant:

Alf P. Steinbach wrote:
 From your post, the scope guard technique is used to ensure some 
desired cleanup at the end of a scope, even when the scope is 
exited via an exception. This is precisely what the try: finally: 
syntax is for. 


You'd have to nest it. That's ugly. And more importantly, now two 
people in this thread (namely you and Mike) have demonstrated that 
they do not grok the try functionality and manage to write incorrect 
code, even arguing that it's correct when informed that it's not, so 
it's a pretty fragile construct, like goto.


You want to execute some cleanup when things go wrong, use try 
except. You want to do it when things go right, use try else. You 
want to cleanup no matter what happen, use try finally.


There is no need of any Cleanup class, except for some technical 
alternative  concern.


Have you considered that your argument applies to the with construct?

You have probably not realized that.

But let me force it on you: when would you use with?

Check if that case is covered by your argument above.

Now that you've been told about the with angle, don't you think it's 
a kind of weakness in your argument that it calls for removing with 
from the language?


I recommend that you think about why your argument is invalid.

Or, as I like to say, why your argument is completely bogus.


Cheers  hth.,

- Alf
I am using python 2.5, so I know nothing about the with statement, and 
it may be possible my arguments apply to it; you could remove it from the 
language, it wouldn't bother me at all.
I just don't see in what you've written (adding a class, with some 
__enter__, __exit__ protocol, using a with statement) what cannot be 
achieved with a try statement in its simplest form.


Try except may be lame and noobish, but it works, is easy to read and 
understood at first glance.
It looks like to me that 'with' statements are like decorators: 
overrated. Sometimes people could write simple readable code, but yet  
they're tempted by the geek side of programming: using complex 
constructs when there's no need to. I myself cannot resist sometimes ;-)


JM
--
http://mail.python.org/mailman/listinfo/python-list


Re: How to login https sever with inputing account name and password?

2010-03-04 Thread Steve Holden
Karen Wang wrote:
 Hi all,
 
 I want to use python to access an https server, like
 "https://212.218.229.10/chinatest/"
 
 If I open it from IE, I see a pop-up login window like this
 
 I tried several ways but always only get an "HTTP Error 401.2 -
 Unauthorized" error page. (myusername and mypassword are all correct)
 
 Below is my code:
 
 import urllib2
 
 values = {
 
 'user' : myusername,
 
 'pass' : mypassword }
 
 data = urllib2.urlencode(values)
 
 t = urllib2.urlopen('https://212.218.229.10/chinatest/',data)
 
 print t.read()
 
 where I am wrong ?
 
Read the HTTP standard. The authentication data has to be sent as HTTP
headers, not as the data to a POST request. The dialog box you see is
the browser attempting to collect the data it needs to put in the header.
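To make "put in the header" concrete: if the server uses HTTP Basic authentication (note a 401.2 from IIS may instead demand integrated/NTLM auth, which this will not satisfy), the credentials travel in an Authorization request header. urllib2's HTTPBasicAuthHandler builds this automatically; a hedged sketch of the header itself:

```python
import base64

def basic_auth_header(user, password):
    # Basic auth is a request header, not POST data:
    #   Authorization: Basic base64("user:password")
    token = base64.b64encode(('%s:%s' % (user, password)).encode('ascii'))
    return 'Basic ' + token.decode('ascii')

# The canonical user/password pair from RFC 2617:
header = basic_auth_header('Aladdin', 'open sesame')
```

In practice you would attach this via a urllib2 opener with an HTTPBasicAuthHandler rather than hand-rolling the header.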

regards
 Steve
-- 
Steve Holden   +1 571 484 6266   +1 800 494 3119
PyCon is coming! Atlanta, Feb 2010  http://us.pycon.org/
Holden Web LLC http://www.holdenweb.com/
UPCOMING EVENTS:http://holdenweb.eventbrite.com/

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: My four-yorkshireprogrammers contribution

2010-03-04 Thread python
Not to out do you guys, but over here in the states, I started out with
a Radio Shack 'computer' that consisted of 10 slideable switches and 10
flashlight bulbs. You ran wires between the slideable switches to
create 'programs'. Wish I could remember what this thing was called - my
google-fu fails me. This was approx 1976 when Popular Science's ELF-1
with 256 bytes was quite a sensation.

Cheers,
Malcolm
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: NoSQL Movement?

2010-03-04 Thread Juan Pedro Bolivar Puente
On 04/03/10 16:21, ccc31807 wrote:
 On Mar 3, 4:55 pm, toby t...@telegraphics.com.au wrote:
  where you have to store data and

 relational data
 
 Data is neither relational nor unrelational. Data is data.
 Relationships are an artifact, something we impose on the data.
 Relations are for human convenience, not something inherent in the
 data itself.
 

No, relations are data. "Data is data" says nothing. Data is
information. Actually, all data are relations: relating /values/ to
/properties/ of /entities/. Relations as understood by the relational
model is nothing else but assuming that properties and entities are
first-class values of the data system and that they can also be related.

JP
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: loop over list and process into groups

2010-03-04 Thread mk

Sneaky Wombat wrote:

I was going to write a def to loop through and look for certain pre-
compiled regexs, and then put them in a new dictionary and append to a
list, 


regexes are overkill in this case I think.


[ 'VLAN4065',
 'Interface',
 'Gi9/6',
 'Po2',
 'Po3',
 'Po306',
 'VLAN4068',
 'Interface',
 'Gi9/6',
 'VLAN4069',
 'Interface',
 'Gi9/6',]


Why not construct an intermediate dictionary?

elems = [ 'VLAN4065',
 'Interface',
 'Gi9/6',
 'Po2',
 'Po3',
 'Po306',
 'VLAN4068',
 'Interface',
 'Gi9/6',
 'VLAN4069',
 'Interface',
 'Gi9/6',]

def makeintermdict(elems):
    vd = {}
    vlan = None
    for el in elems:
        if el.startswith('VLAN'):
            vlan = el.replace('VLAN','')
        elif el == 'Interface':
            vd[vlan] = []
        else:
            vd[vlan].append(el)
    return vd

def makelist(interm):
    finlist = []
    for k in interm.keys():
        finlist.append({k: interm[k]})
    return finlist

if __name__ == '__main__':
    intermediate = makeintermdict(elems)
    print intermediate
    finlist = makelist(intermediate)
    print 'final', finlist


{'4068': ['Gi9/6'], '4069': ['Gi9/6'], '4065': ['Gi9/6', 'Po2', 'Po3', 
'Po306']}
final [{'4068': ['Gi9/6']}, {'4069': ['Gi9/6']}, {'4065': ['Gi9/6', 
'Po2', 'Po3', 'Po306']}]


I hope this is not your homework. :-)

Regards,
mk

--
http://mail.python.org/mailman/listinfo/python-list


Re: A scopeguard for Python

2010-03-04 Thread Alf P. Steinbach

* Robert Kern:

On 2010-03-03 18:49 PM, Alf P. Steinbach wrote:

* Robert Kern:

[snip]

 can you
understand why we might think that you were saying that try: finally:
was wrong and that you were proposing that your code was equivalent to
some try: except: else: suite?


No, not really. His code didn't match the semantics. Changing 'finally'
to 'else' could make it equivalent.


Okay, please show me what you mean by changing 'finally' to 'else'. I 
think you are being hinty again. It's not helpful.  

[snip middle of this paragraph]
Why do you think that we would interpret those words 
to mean that you wanted the example you give just above?


There's an apparent discrepancy between your call for an example and your 
subsequent (in the same paragraph) reference to the example given.


But as to why I assumed that that example, or a similar correct one, would be 
implied, it's the only meaningful interpretation.


Adopting a meaningless interpretation when a meaningful one exists is generally just 
adversarial, but in this case I was, as you pointed out, extremely unclear, and 
I'm sorry: I should have given such an example up front. Will try to do so.



[snip]



There are a couple of ways to do this kind of cleanup depending on the
situation. Basically, you have several different code blocks:

# 1. Record original state.
# 2. Modify state.
# 3. Do stuff requiring the modified state.
# 4. Revert to the original state.

Depending on where errors are expected to occur, and how the state
needs to get modified and restored, there are different ways of
arranging these blocks. The one Mike showed:

# 1. Record original state.
try:
# 2. Modify state.
# 3. Do stuff requiring the modified state.
finally:
# 4. Revert to the original state.

And the one you prefer:

# 1. Record original state.
# 2. Modify state.
try:
# 3. Do stuff requiring the modified state.
finally:
# 4. Revert to the original state.

These differ in what happens when an error occurs in block #2, the
modification of the state. In Mike's, the cleanup code runs; in yours,
it doesn't. For chdir(), it really doesn't matter. Reverting to the
original state is harmless whether the original chdir() succeeds or
fails, and chdir() is essentially atomic so if it raises an exception,
the state did not change and nothing needs to be cleaned up.

However, not all block #2s are atomic. Some are going to fail partway
through and need to be cleaned up even though they raised an
exception. Fortunately, cleanup can frequently be written to not care
whether the whole thing finished or not.
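The difference between the two orderings can be made concrete with a state change that fails partway through (a sketch with invented names; nothing here is from the thread):

```python
state = []

def modify_fail():
    state.append('partial')           # partial side effect...
    raise RuntimeError('modify failed')

def cleanup():
    if state:
        state.pop()

# Mike's ordering: the modification sits inside the try, so cleanup runs
# even when the modification itself fails, undoing the partial effect.
try:
    try:
        modify_fail()
        # ... do stuff requiring the modified state ...
    finally:
        cleanup()
except RuntimeError:
    pass
assert state == []

# Alf's ordering: the modification precedes the try, so cleanup runs only
# if the modification succeeded; the partial effect is left behind here.
try:
    modify_fail()
    try:
        pass  # ... do stuff requiring the modified state ...
    finally:
        cleanup()
except RuntimeError:
    pass
assert state == ['partial']
```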


Yeah, and there are some systematic ways to handle these things. You
might look up Dave Abrahams' levels of exception safety. Mostly his
approach boils down to making operations effectively atomic so as to
reduce the complexity: ideally, if an operation raises an exception,
then it has undone any side effects.

Of course it can't undo the launching of an ICBM, for example...

But ideally, if it could, then it should.


I agree. Atomic operations like chdir() help a lot. But this is Python, 
and exceptions can happen in many different places. If you're not just 
calling an extension module function that makes a known-atomic system 
call, you run the risk of not having an atomic operation.
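Abrahams' "strong guarantee" (commit-or-rollback) can be approximated in pure Python by staging changes on a copy and swapping in only on success; a sketch with invented names, not code from the thread:

```python
def non_negative(value):
    # Stand-in validation step that may raise partway through an update.
    if value < 0:
        raise ValueError('negative value')
    return value

def update_all(d, updates, validate):
    # Strong guarantee: stage every change on a copy, so a failure at any
    # point leaves the original mapping exactly as it was.
    staged = dict(d)
    for key, value in updates:
        staged[key] = validate(value)   # may raise
    d.clear()
    d.update(staged)                    # commit only after total success

settings = {'retries': 3}
try:
    update_all(settings, [('retries', 5), ('timeout', -1)], non_negative)
except ValueError:
    pass
# settings is unchanged: the failed update had no partial effect
```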



If you call the possibly failing operation A, then that systematic
approach goes like this: if A fails, then it has cleaned up its own
mess, but if A succeeds, then it's the responsibility of the calling
code to clean up if the higher level (multiple statements) operation
that A is embedded in, fails.

And that's what Marginean's original C++ ScopeGuard was designed for,
and what the corresponding Python Cleanup class is designed for.


And try: finally:, for that matter.


Not to mention with.

Some other poster made the same error recently in this thread; it is a common 
fallacy in discussions about programming, to assume that since the same can be 
expressed using lower level constructs, those are all that are required.


If adopted as true it ultimately means the removal of all control structures 
above the level of if and goto (except Python doesn't have goto).




Both formulations can be correct (and both work perfectly fine with
the chdir() example being used). Sometimes one is better than the
other, and sometimes not. You can achieve both ways with either your
Cleanup class or with try: finally:.

I am still of the opinion that Cleanup is not an improvement over try:
finally: and has the significant ugliness of forcing cleanup code into
callables. This significantly limits what you can do in your cleanup
code.


Uhm, not really. :-) As I see it.


Well, not being able to affect the namespace is a significant 
limitation. Sometimes you need to delete objects from the namespace in 
order to ensure that their refcounts go to zero and their cleanup code 
gets executed.


Just a nit (I agree that a lambda can't do this, but as to what's required): 
assigning None is sufficient for that[1].



Re: A scopeguard for Python

2010-03-04 Thread Robert Kern

On 2010-03-04 10:32 AM, Jean-Michel Pichavant wrote:

Alf P. Steinbach wrote:

* Jean-Michel Pichavant:

Alf P. Steinbach wrote:

From your post, the scope guard technique is used to ensure some
desired cleanup at the end of a scope, even when the scope is
exited via an exception. This is precisely what the try: finally:
syntax is for.


You'd have to nest it. That's ugly. And more importantly, now two
people in this thread (namely you and Mike) have demonstrated that
they do not grok the try functionality and manage to write incorrect
code, even arguing that it's correct when informed that it's not, so
it's a pretty fragile construct, like goto.


You want to execute some cleanup when things go wrong, use try
except. You want to do it when things go right, use try else. You
want to cleanup no matter what happen, use try finally.

There is no need of any Cleanup class, except for some technical
alternative concern.


Have you considered that your argument applies to the with construct?

You have probably not realized that.

But let me force it on you: when would you use with?

Check if that case is covered by your argument above.

Now that you've been told about the with angle, don't you think it's
a kind of weakness in your argument that it calls for removing with
from the language?

I recommend that you think about why your argument is invalid.

Or, as I like to say, why your argument is completely bogus.


Cheers  hth.,

- Alf

I am using python 2.5, so I know nothing about the with statement,


You can try it out using from __future__ import with_statement.


and
it may be possible my arguments apply to it; you could remove it from the
language, it wouldn't bother me at all.
I just don't see in what you've written (adding a class, with some
__enter__, __exit__ protocol, using a with statement) what cannot be
achieved with a try statement in its simplest form.

Try except may be lame and noobish, but it works, is easy to read and
understood at first glance.
It looks like to me that 'with' statements are like decorators:
overrated. Sometimes people could write simple readable code, but yet
they're tempted by the geek side of programming: using complex
constructs when there's no need to. I myself cannot resist sometimes ;-)


PEP 343 is a good introduction to the real uses of the with: statement.

  http://www.python.org/dev/peps/pep-0343/

Basically, it allows you to package up your initialization and cleanup code into 
objects, stick them in your library, unit test them thoroughly, etc. so you 
don't have to repeat them everywhere and possibly get them wrong. It's DRY in 
action.


Where Alf's Cleanup class goes wrong, in my opinion, is that it does not package 
up any code to avoid repetition. You still repeat the same cleanup code 
everywhere you use it, so it is no better than try: finally:. It is not a real 
use case of the with: statement.
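As a sketch of that point, the thread's running chdir example packaged once as a PEP 343 context manager (so every use site gets the try/finally for free; `pushd` is an invented name):

```python
import os
from contextlib import contextmanager

@contextmanager
def pushd(path):
    # Record, modify, then restore on the way out, whether the body
    # finishes normally or raises: the try/finally lives here, once.
    original = os.getcwd()
    os.chdir(path)
    try:
        yield original
    finally:
        os.chdir(original)

# Usage:
# with pushd('/tmp'):
#     ...work in /tmp; the old cwd is restored even on an exception...
```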


--
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth.
  -- Umberto Eco

--
http://mail.python.org/mailman/listinfo/python-list


Re: A scopeguard for Python

2010-03-04 Thread Alf P. Steinbach

* Robert Kern:

On 2010-03-04 09:48 AM, Alf P. Steinbach wrote:

* Jean-Michel Pichavant:

Alf P. Steinbach wrote:

From your post, the scope guard technique is used to ensure some
desired cleanup at the end of a scope, even when the scope is exited
via an exception. This is precisely what the try: finally: syntax
is for.


You'd have to nest it. That's ugly. And more importantly, now two
people in this thread (namely you and Mike) have demonstrated that
they do not grok the try functionality and manage to write incorrect
code, even arguing that it's correct when informed that it's not, so
it's a pretty fragile construct, like goto.


You want to execute some cleanup when things go wrong, use try except.
You want to do it when things go right, use try else. You want to
cleanup no matter what happen, use try finally.

There is no need of any Cleanup class, except for some technical
alternative concern.


Have you considered that your argument applies to the with construct?

You have probably not realized that.

But let me force it on you: when would you use with?


When there is a specific context manager that removes the need for 
boilerplate.


That's cleanup no matter what happen.



Check if that case is covered by your argument above.

Now that you've been told about the with angle, don't you think it's a
kind of weakness in your argument that it calls for removing with from
the language?


No, it only argues that with Cleanup(): is supernumerary.


I don't know what supernumerary means, but to the degree that the argument 
says anything about a construct that is not 'finally', it says the same about 
general with.


So whatever you mean by supernumerary, you're saying that the argument implies 
that with is supernumerary.


This is starting to look like some earlier discussions in this group, where even 
basic logic is denied.



Cheers,

- Alf
--
http://mail.python.org/mailman/listinfo/python-list


Re: A scopeguard for Python

2010-03-04 Thread Robert Kern

On 2010-03-04 10:56 AM, Alf P. Steinbach wrote:

* Robert Kern:

On 2010-03-03 18:49 PM, Alf P. Steinbach wrote:

* Robert Kern:

[snip]

can you
understand why we might think that you were saying that try: finally:
was wrong and that you were proposing that your code was equivalent to
some try: except: else: suite?


No, not really. His code didn't match the semantics. Changing 'finally'
to 'else' could make it equivalent.


Okay, please show me what you mean by changing 'finally' to 'else'.
I think you are being hinty again. It's not helpful.

[snip middle of this paragraph]

Why do you think that we would interpret those words to mean that you
wanted the example you give just above?


There's an apparent discrepancy between your call for an example and
your subsequent (in the same paragraph) reference to the example given.

But as to why I assumed that that example, or a similar correct one,
would be implied, it's the only meaningful interpretation.

Adopting a meaningless interpretation when a meaningful exists is
generally just adversarial, but in this case I was, as you pointed out,
extremely unclear, and I'm sorry: I should have given such example up
front. Will try to do so.


Thank you. I appreciate it.


[snip]



There are a couple of ways to do this kind of cleanup depending on the
situation. Basically, you have several different code blocks:

# 1. Record original state.
# 2. Modify state.
# 3. Do stuff requiring the modified state.
# 4. Revert to the original state.

Depending on where errors are expected to occur, and how the state
needs to get modified and restored, there are different ways of
arranging these blocks. The one Mike showed:

# 1. Record original state.
try:
    # 2. Modify state.
    # 3. Do stuff requiring the modified state.
finally:
    # 4. Revert to the original state.

And the one you prefer:

# 1. Record original state.
# 2. Modify state.
try:
    # 3. Do stuff requiring the modified state.
finally:
    # 4. Revert to the original state.

These differ in what happens when an error occurs in block #2, the
modification of the state. In Mike's, the cleanup code runs; in yours,
it doesn't. For chdir(), it really doesn't matter. Reverting to the
original state is harmless whether the original chdir() succeeds or
fails, and chdir() is essentially atomic so if it raises an exception,
the state did not change and nothing needs to be cleaned up.

However, not all block #2s are atomic. Some are going to fail partway
through and need to be cleaned up even though they raised an
exception. Fortunately, cleanup can frequently be written to not care
whether the whole thing finished or not.
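The two arrangements can be made concrete with the chdir() example (a hypothetical sketch; do_stuff() is a placeholder for the real work):

```python
import os

def do_stuff():
    pass  # placeholder for the real work

def change_dir_mikes_way(newdir):
    olddir = os.getcwd()    # 1. Record original state.
    try:
        os.chdir(newdir)    # 2. Modify state inside the try block.
        do_stuff()          # 3. Do stuff requiring the modified state.
    finally:
        os.chdir(olddir)    # 4. Runs even if the chdir() itself failed.

def change_dir_alfs_way(newdir):
    olddir = os.getcwd()    # 1. Record original state.
    os.chdir(newdir)        # 2. Modify state before the try block.
    try:
        do_stuff()          # 3. Do stuff requiring the modified state.
    finally:
        os.chdir(olddir)    # 4. Runs only if the chdir() succeeded.
```

For an atomic state change like chdir() the two behave identically; they differ only when block #2 can fail partway through.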


Yeah, and there are some systematic ways to handle these things. You
might look up Dave Abrahams' levels of exception safety. Mostly his
approach boils down to making operations effectively atomic so as to
reduce the complexity: ideally, if an operation raises an exception,
then it has undone any side effects.

Of course it can't undo the launching of an ICBM, for example...

But ideally, if it could, then it should.


I agree. Atomic operations like chdir() help a lot. But this is
Python, and exceptions can happen in many different places. If you're
not just calling an extension module function that makes a
known-atomic system call, you run the risk of not having an atomic
operation.


If you call the possibly failing operation A, then that systematic
approach goes like this: if A fails, then it has cleaned up its own
mess, but if A succeeds, then it's the responsibility of the calling
code to clean up if the higher level (multiple statements) operation
that A is embedded in, fails.

And that's what Marginean's original C++ ScopeGuard was designed for,
and what the corresponding Python Cleanup class is designed for.
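For readers without the earlier context, the Cleanup class being discussed is roughly this shape (a sketch reconstructing the interface assumed elsewhere in the thread, not Alf's actual code):

```python
class Cleanup(object):
    """ScopeGuard-style manager: registered actions run on scope exit."""
    def __init__(self):
        self._actions = []
    def call(self, action):
        # Register a no-argument callable to run at cleanup time.
        self._actions.append(action)
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        # Run registered actions in reverse order, like C++ destructors.
        while self._actions:
            self._actions.pop()()
        return False  # never suppress exceptions
```

Usage is then `with Cleanup() as at_cleanup: at_cleanup.call(...)`, with the cleanup actions running whether or not the body raises.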


And try: finally:, for that matter.


Not to mention with.

Some other poster made the same error recently in this thread; it is a
common fallacy in discussions about programming, to assume that since
the same can be expressed using lower level constructs, those are all
that are required.

If adopted as true it ultimately means the removal of all control
structures above the level of if and goto (except Python doesn't
have goto).


What I'm trying to explain is that the with: statement has a use even if Cleanup 
doesn't. Arguing that Cleanup doesn't improve on try: finally: does not mean 
that the with: statement doesn't improve on try: finally:.



Both formulations can be correct (and both work perfectly fine with
the chdir() example being used). Sometimes one is better than the
other, and sometimes not. You can achieve both ways with either your
Cleanup class or with try: finally:.

I am still of the opinion that Cleanup is not an improvement over try:
finally: and has the significant ugliness of forcing cleanup code into
callables. This significantly limits what you can do in your cleanup
code.


Uhm, not really. :-) As I see it.


Well, not being able to affect the namespace is a significant

Re: memory usage, temporary and otherwise

2010-03-04 Thread lbolla
On Mar 4, 12:24 pm, Duncan Booth duncan.bo...@invalid.invalid wrote:

>>> a={}
>>> for i in range(1000):
...     a[i]=intern('spam'*10)


intern: another name borrowed from Lisp?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: A scopeguard for Python

2010-03-04 Thread Robert Kern

On 2010-03-04 11:02 AM, Alf P. Steinbach wrote:

* Robert Kern:

On 2010-03-04 09:48 AM, Alf P. Steinbach wrote:

* Jean-Michel Pichavant:

Alf P. Steinbach wrote:

From your post, the scope guard technique is used to ensure some
desired cleanup at the end of a scope, even when the scope is exited
via an exception. This is precisely what the try: finally: syntax
is for.


You'd have to nest it. That's ugly. And more importantly, now two
people in this thread (namely you and Mike) have demonstrated that
they do not grok the try functionality and manage to write incorrect
code, even arguing that it's correct when informed that it's not, so
it's a pretty fragile construct, like goto.


You want to execute some cleanup when things go wrong, use try except.
You want to do it when things go right, use try else. You want to
clean up no matter what happens, use try finally.

There is no need of any Cleanup class, except for some technical
alternative concern.


Have you considered that your argument applies to the with construct?

You have probably not realized that.

But let me force it on you: when would you use with?


When there is a specific context manager that removes the need for
boilerplate.


That's cleanup no matter what happens.


For the # Do stuff block, yes. For the initialization block, you can write a 
context manager to do it either way, as necessary.
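For instance, a (hypothetical) chdir manager can put the state change in __enter__(), giving Alf's preferred arrangement: if the chdir() fails, __exit__() is never called, so the cleanup correctly does not run:

```python
import os

class pushd(object):
    """Hypothetical context manager: chdir on entry, restore on exit."""
    def __init__(self, path):
        self.path = path
    def __enter__(self):
        self.olddir = os.getcwd()
        os.chdir(self.path)    # if this raises, __exit__ never runs
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        os.chdir(self.olddir)  # cleanup no matter what the body did
        return False           # never suppress exceptions
```

Used as `with pushd('/some/dir'): do_stuff()`, the original directory is restored whether or not the body raises.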



Check if that case is covered by your argument above.

Now that you've been told about the with angle, don't you think it's a
kind of weakness in your argument that it calls for removing with from
the language?


No, it only argues that with Cleanup(): is supernumerary.


I don't know what supernumerary means,


http://www.merriam-webster.com/dictionary/supernumerary


but to the degree that the
argument says anything about a construct that is not 'finally', it says
the same about general with.


He's ignorant of the use cases of the with: statement, true. Given only your 
example of the with: statement, it is hard to fault him for thinking that try: 
finally: wouldn't suffice.


--
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth.
  -- Umberto Eco

--
http://mail.python.org/mailman/listinfo/python-list


Re: loop over list and process into groups

2010-03-04 Thread Sneaky Wombat
On Mar 4, 10:55 am, mk mrk...@gmail.com wrote:
 Sneaky Wombat wrote:
  I was going to write a def to loop through and look for certain pre-
  compiled regexs, and then put them in a new dictionary and append to a
  list,

 regexes are overkill in this case I think.

  [ 'VLAN4065',
   'Interface',
   'Gi9/6',
   'Po2',
   'Po3',
   'Po306',
   'VLAN4068',
   'Interface',
   'Gi9/6',
   'VLAN4069',
   'Interface',
   'Gi9/6',]

 Why not construct an intermediate dictionary?

 elems = [ 'VLAN4065',
   'Interface',
   'Gi9/6',
   'Po2',
   'Po3',
   'Po306',
   'VLAN4068',
   'Interface',
   'Gi9/6',
   'VLAN4069',
   'Interface',
   'Gi9/6',]

 def makeintermdict(elems):
      vd = {}
      vlan = None
      for el in elems:
          if el.startswith('VLAN'):
              vlan = el.replace('VLAN','')
          elif el == 'Interface':
              vd[vlan] = []
          else:
              vd[vlan].append(el)
      return vd

 def makelist(interm):
      finlist = []
      for k in interm.keys():
          finlist.append({k:interm[k]})
      return finlist

 if __name__ == '__main__':
      intermediate = makeintermdict(elems)
      print intermediate
      finlist = makelist(intermediate)
      print 'final', finlist

 {'4068': ['Gi9/6'], '4069': ['Gi9/6'], '4065': ['Gi9/6', 'Po2', 'Po3',
 'Po306']}
 final [{'4068': ['Gi9/6']}, {'4069': ['Gi9/6']}, {'4065': ['Gi9/6',
 'Po2', 'Po3', 'Po306']}]

 I hope this is not your homework. :-)

 Regards,
 mk

Thanks mk,

My approach was a lot more complex than yours, but yours is better.
I like itertools and was using islice to create tuples for (start,end)
string slicing.  Too much work though.  Thanks!

-j
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: NoSQL Movement?

2010-03-04 Thread George Neuner
On Thu, 04 Mar 2010 18:51:21 +0200, Juan Pedro Bolivar Puente
magnic...@gmail.com wrote:

On 04/03/10 16:21, ccc31807 wrote:
 On Mar 3, 4:55 pm, toby t...@telegraphics.com.au wrote:
  where you have to store data and

 relational data
 
 Data is neither relational nor unrelational. Data is data.
 Relationships are an artifact, something we impose on the data.
 Relations are for human convenience, not something inherent in the
 data itself.
 

No, relations are data. "Data is data" says nothing. Data is
information. Actually, all data are relations: relating /values/ to
/properties/ of /entities/. Relations as understood by the relational
model is nothing else but assuming that properties and entities are
first class values of the data system and they can also be related.

Well ... sort of.  Information is not data but rather the
understanding of something represented by the data.  The term
information overload is counter-intuitive ... it really means an
excess of data for which there is little understanding.

Similarly, at the level to which you are referring, a relation is not
data but simply a theoretical construct.  At this level testable
properties or instances of the relation are data, but the relation
itself is not.  The relation may be data at a higher level.

George
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: NoSQL Movement?

2010-03-04 Thread ccc31807
On Mar 4, 11:51 am, Juan Pedro Bolivar Puente magnic...@gmail.com
wrote:
 No, relations are data.

This depends on your definition of 'data.' I would say that
relationships are information gleaned from the data.

 "Data is data" says nothing. Data is
 information.

To me, data and information are not the same thing, and in particular,
data is NOT information. To me, information consists of the sifting,
sorting, filtering, and rearrangement of data that can be useful in
completing some task. As an illustration, consider some very large
collection of ones and zeros -- the information it contains depends on
whether it's viewed as a JPEG, an EXE, XML, WAV, or some other sort of
information processing format. Whichever way it's processed, the
'data' (the ones and zeros) stay the same, and do not constitute
'information' in their raw state.

 Actually, all data are relations: relating /values/ to
 /properties/ of /entities/. Relations as understood by the relational
 model is nothing else but assuming that properties and entities are
 first class values of the data system and they can also be related.

Well, this sort of illustrates my point. The 'values' of 'properties'
relating to specific 'entities' depends on how one processes the data,
which can be processed various ways. For example, 1000001 can either
be viewed as the decimal number 65 or the alpha character 'A', but the
decision as to how to view this value isn't inherent in the data
itself, but only as an artifact of our use of the data to turn it into
information.

CC.
-- 
http://mail.python.org/mailman/listinfo/python-list


ANN: psutil 0.1.3 released

2010-03-04 Thread Giampaolo Rodola'
Hi,
I'm pleased to announce the 0.1.3 release of psutil:
http://code.google.com/p/psutil

=== About ===

psutil is a module providing an interface for retrieving information
on running processes and system utilization (CPU, memory) in a
portable way by using Python, implementing many functionalities
offered by tools like ps, top and Windows task manager.

It currently supports Linux, OS X, FreeBSD and Windows with Python
versions from 2.4 to 3.1 by using a unique code base.

=== Major enhancements ===

 * Python 3 support
 * per-process username
 * suspend / resume process
 * per-process current working directory (Windows and Linux only)
 * added support for Windows 7 and FreeBSD 64 bit

=== Links ===

* Home page: http://code.google.com/p/psutil
* Mailing list: http://groups.google.com/group/psutil/topics
* Source tarball: http://psutil.googlecode.com/files/psutil-0.1.3.tar.gz
* OS X installer: 
http://psutil.googlecode.com/files/psutil-0.1.3-py2.6-macosx10.4.dmg
* Windows Installer (Python 2.6): 
http://psutil.googlecode.com/files/psutil-0.1.3.win32-py2.6.exe
* Windows Installer (Python 3.1): 
http://psutil.googlecode.com/files/psutil-0.1.3.win32-py3.1.exe
* Api Reference: http://code.google.com/p/psutil/wiki/Documentation


Thanks

--- Giampaolo Rodola'
http://code.google.com/p/pyftpdlib
http://code.google.com/p/psutil/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: loop over list and process into groups

2010-03-04 Thread lbolla
On Mar 4, 3:57 pm, Sneaky Wombat joe.hr...@gmail.com wrote:
 [ {'vlan_or_intf': 'VLAN2021'},
  {'vlan_or_intf': 'Interface'},
  {'vlan_or_intf': 'Po1'},
  {'vlan_or_intf': 'Po306'},
  {'vlan_or_intf': 'VLAN2022'},
  {'vlan_or_intf': 'Interface'},
  {'vlan_or_intf': 'Gi7/33'},
  {'vlan_or_intf': 'Po1'},
  {'vlan_or_intf': 'Po306'},
  {'vlan_or_intf': 'VLAN2051'},
  {'vlan_or_intf': 'Interface'},
  {'vlan_or_intf': 'Gi9/6'},
  {'vlan_or_intf': 'VLAN2052'},
  {'vlan_or_intf': 'Interface'},
  {'vlan_or_intf': 'Gi9/6'},]

 I want it to be converted to:

 [{'2021':['Po1','Po306']},{'2022':['Gi7/33','Po1','Po306']},etc etc]

 I was going to write a def to loop through and look for certain pre-
 compiled regexs, and then put them in a new dictionary and append to a
 list, but I'm having trouble thinking of a good way to capture each
 dictionary.  Each dictionary will have a key that is the vlan and the
 value will be a list of interfaces that participate in that vlan.
 Each list will be variable, many containing only one interface and
 some containing many interfaces.

 I thought about using itertools, but i only use that for fixed data.
 I don't know of a good way to loop over variably sized data.  I was
 wondering if anyone had any ideas about a good way to convert this
 list or dictionary into the right format that I need.  The solution I
 come up with will most likely be ugly and error prone, so I thought
 i'd ask this python list while I work.  Hopefully I learn a better way
 to solve this problem.

 Thanks!

 I also have the data in a list,

 [ 'VLAN4065',
  'Interface',
  'Gi9/6',
  'Po2',
  'Po3',
  'Po306',
  'VLAN4068',
  'Interface',
  'Gi9/6',
  'VLAN4069',
  'Interface',
  'Gi9/6',]



===

from itertools import groupby

data = \
[ {'vlan_or_intf': 'VLAN2021'},
 {'vlan_or_intf': 'Interface'},
 {'vlan_or_intf': 'Po1'},
 {'vlan_or_intf': 'Po306'},
 {'vlan_or_intf': 'VLAN2022'},
 {'vlan_or_intf': 'Interface'},
 {'vlan_or_intf': 'Gi7/33'},
 {'vlan_or_intf': 'Po1'},
 {'vlan_or_intf': 'Po306'},
 {'vlan_or_intf': 'VLAN2051'},
 {'vlan_or_intf': 'Interface'},
 {'vlan_or_intf': 'Gi9/6'},
 {'vlan_or_intf': 'VLAN2052'},
 {'vlan_or_intf': 'Interface'},
 {'vlan_or_intf': 'Gi9/6'},]

def clean_up(lst):
    return [d.values()[0] for d in lst if d.values()[0] != 'Interface']

out = {}
for k, g in groupby(clean_up(data), key=lambda s: s.startswith('VLAN')):
    if k:
        key = list(g)[0].replace('VLAN', '')
    else:
        out[key] = list(g)

print out
===

hth,
L.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: memory usage, temporary and otherwise

2010-03-04 Thread Terry Reedy

On 3/4/2010 6:56 AM, mk wrote:

Bruno Desthuilliers wrote:



Huh? I was under impression that some time after 2.0 range was made to
work under the covers like xrange when used in a loop? Or is it 3.0
that does that?


3.0.
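(In 3.0 the old range() name gained xrange()'s lazy behaviour; a quick Python 3 sketch:)

```python
# Python 3: range() no longer builds a list up front.
r = range(10**9)       # constant-size object, not a billion-element list
print(r[123456789])    # indexing is computed on demand: 123456789
print(len(r))          # 1000000000, also computed, not stored
```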


--
http://mail.python.org/mailman/listinfo/python-list


Re: A scopeguard for Python

2010-03-04 Thread Michael Rudolf

Am 04.03.2010 17:32, schrieb Jean-Michel Pichavant:

It looks like to me that 'with' statements are like decorators: overrated.


Oh no, you just insulted my favourite two python features, followed 
immediately by generators, iterators and list comprehensions / generator 
expressions :p


No, really: they *are* great ;D

Regards,
Michael
--
http://mail.python.org/mailman/listinfo/python-list


Re: A scopeguard for Python

2010-03-04 Thread Michael Rudolf

Am 04.03.2010 18:20, schrieb Robert Kern:


What I'm trying to explain is that the with: statement has a use even if
Cleanup doesn't. Arguing that Cleanup doesn't improve on try: finally:
does not mean that the with: statement doesn't improve on try: finally:.


Yes, the with-statement rocks :)

I suggested such a thing a few days ago in another thread, where OP 
wanted a silent keyword like:


silent:
do_stuff()

would be equivalent to:
try:
    do_stuff()
except:
    pass

Of course catching *all* exceptions was a bad idea, so I came up with 
the code below and now I actually like it and use it myself :)


-snip-
To your first question about a silenced keyword: you could emulate 
this with context managers I guess.


Something like (untested, just a quick mockup how it could look):



class silenced:
    def __init__(self, *silenced):
        self.exceptions = tuple(silenced)  # just to be explicit
    def __enter__(self):
        return self  # ditto
    def __exit__(self, type, value, traceback):
        for ex in self.exceptions:
            if isinstance(value, ex):
                return True  # suppresses exception


So:

with silenced(os.error):
    os.remove(somefile)

Would translate to:

try:
    os.remove(somefile)
except os.error:
    pass

One nice thing about this approach would be that you can alias a set of 
exceptions with this:


idontcareabouttheseerrors=silenced(TypeError, ValueError, PEBCAKError, 
SyntaxError, EndOfWorldError, 1D10T_Error)


with idontcareabouttheseerrors:
    do_stuff()

Regards,
Michael
--
http://mail.python.org/mailman/listinfo/python-list


Re: memory usage, temporary and otherwise

2010-03-04 Thread Steve Holden
Duncan Booth wrote:
 mk mrk...@gmail.com wrote:
 
 Hm, apparently Python didn't spot that 'spam'*10 in a's values is really 
 the same string, right?
 
 If you want it to spot that then give it a hint that it should be looking 
 for identical strings:
 
  >>> a={}
  >>> for i in range(1000):
  ...     a[i]=intern('spam'*10)
 
 should reduce your memory use somewhat.
 
Better still, hoist the constant value out of the loop:

>>> a={}
>>> const = 'spam'*10
>>> for i in range(1000):
...     a[i] = const

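A quick way to see the effect of hoisting (sketch): every value in the dict is then the very same string object, so only one copy of the 40-character string exists no matter how many keys there are.

```python
a = {}
const = 'spam' * 10
for i in range(1000):
    a[i] = const

# All 1000 values share one object: exactly one distinct id().
print(len(set(map(id, a.values()))))  # → 1
```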

regards
 Steve
-- 
Steve Holden   +1 571 484 6266   +1 800 494 3119
PyCon is coming! Atlanta, Feb 2010  http://us.pycon.org/
Holden Web LLC http://www.holdenweb.com/
UPCOMING EVENTS:http://holdenweb.eventbrite.com/

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Working group for Python CPAN-equivalence?

2010-03-04 Thread John Bokma
Ben Finney ben+pyt...@benfinney.id.au writes:

 Terry Reedy tjre...@udel.edu writes:

 On 3/3/2010 12:05 PM, John Nagle wrote:
  CPAN is a repository. PyPi is an collection of links.

 As Ben said, PyPI currently is also a respository and not just links
 to other repositories.

[..]

  CPAN enforces standard organization on packages. PyPi does not.

 This is, I think, something we don't need as much in Python; there is a
 fundamental difference between Perl's deeply nested namespace hierarchy
 and Python's inherently flat hierarchy.

Perl's hierarchy is 2 (e.g. Text::Balanced) or 3 levels deep
(e.g. Text::Ngram::LanguageDetermine), rarely deeper
(Text::Editor::Vip::Buffer::Plugins::Display). Which is in general most
likely just one level deeper than Python. Perl is not Java.

And I think that one additional level (or more) is a /must/ when you
have 17,530 (at the time of writing) modules.

If Python wants its own CPAN it might be a good first move to not have
modules named xmlrpclib, wave, keyword, zlib, etc. But I don't
see that happening soon :-).

-- 
John Bokma   j3b

Hacking  Hiking in Mexico -  http://johnbokma.com/
http://castleamber.com/ - Perl  Python Development
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: WANTED: A good name for the pair (args, kwargs)

2010-03-04 Thread Steve Holden
Jonathan Fine wrote:
 Hi
 
 We can call a function fn using
 val = fn(*args, **kwargs)
 
 I'm looking for a good name for the pair (args, kwargs).  Any suggestions?
 
 Here's my use case:
 def doit(fn , wibble, expect):
 args, kwargs = wibble
 actual = fn(*args, **kwargs)
 if actual != expect:
 # Something has gone wrong.
 pass
 
 This is part of a test runner.
 
 For now I'll use argpair, but if anyone has a better idea, I'll use it.
 
Not being able to find any existing names I called *args the
sequence-parameter and **kwarg the dict-parameter.

For your use, though, you might choose something like the generic
parameter pair).
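Whatever name wins, the pair threads through a runner naturally (a hypothetical sketch of Jonathan's use case, with the failure branch filled in):

```python
def doit(fn, argpair, expect):
    """Call fn with an (args, kwargs) pair and check the result."""
    args, kwargs = argpair
    actual = fn(*args, **kwargs)
    if actual != expect:
        raise AssertionError('%r != %r' % (actual, expect))
    return actual

doit(pow, ((2, 10), {}), 1024)                               # passes
doit(sorted, (([3, 1, 2],), {'reverse': True}), [3, 2, 1])   # passes
```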

regards
 Steve
-- 
Steve Holden   +1 571 484 6266   +1 800 494 3119
PyCon is coming! Atlanta, Feb 2010  http://us.pycon.org/
Holden Web LLC http://www.holdenweb.com/
UPCOMING EVENTS:http://holdenweb.eventbrite.com/

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: A scopeguard for Python

2010-03-04 Thread Jean-Michel Pichavant

Michael Rudolf wrote:

Am 04.03.2010 17:32, schrieb Jean-Michel Pichavant:
It looks like to me that 'with' statements are like decorators: 
overrated.


Oh no, you just insulted my favourite two python features, followed 
immediately by generators, iterators and list comprehensions / 
generator expressions :p


No, really: they *are* great ;D

Regards,
Michael
They are great, and because of that greatness some of us, including me, 
tend to use them where there's no point doing so.
I would never state that decorators are useless anyway, not in this 
list, I value my life too much :-) (I'll give it a try in the perl list)


JM
--
http://mail.python.org/mailman/listinfo/python-list


Re: A scopeguard for Python

2010-03-04 Thread Alf P. Steinbach

* Robert Kern:

On 2010-03-04 10:56 AM, Alf P. Steinbach wrote:

* Robert Kern:

On 2010-03-03 18:49 PM, Alf P. Steinbach wrote:

[snippety]



If you call the possibly failing operation A, then that systematic
approach goes like this: if A fails, then it has cleaned up its own
mess, but if A succeeds, then it's the responsibility of the calling
code to clean up if the higher level (multiple statements) operation
that A is embedded in, fails.

And that's what Marginean's original C++ ScopeGuard was designed for,
and what the corresponding Python Cleanup class is designed for.


And try: finally:, for that matter.


Not to mention with.

Some other poster made the same error recently in this thread; it is a
common fallacy in discussions about programming, to assume that since
the same can be expressed using lower level constructs, those are all
that are required.

If adopted as true it ultimately means the removal of all control
structures above the level of if and goto (except Python doesn't
have goto).


What I'm trying to explain is that the with: statement has a use even if 
Cleanup doesn't. Arguing that Cleanup doesn't improve on try: finally: 
does not mean that the with: statement doesn't improve on try: finally:.


That's a different argument, essentially that you see no advantage for your 
current coding patterns.


It's unconnected to the argument I responded to.

The argument that I responded to, that the possibility of expressing things at 
the level of try:finally: means that a higher level construct is superfluous, is 
still meaningless.





Both formulations can be correct (and both work perfectly fine with
the chdir() example being used). Sometimes one is better than the
other, and sometimes not. You can achieve both ways with either your
Cleanup class or with try: finally:.

I am still of the opinion that Cleanup is not an improvement over try:
finally: and has the significant ugliness of forcing cleanup code into
callables. This significantly limits what you can do in your cleanup
code.


Uhm, not really. :-) As I see it.


Well, not being able to affect the namespace is a significant
limitation. Sometimes you need to delete objects from the namespace in
order to ensure that their refcounts go to zero and their cleanup code
gets executed.


Just a nit (I agree that a lambda can't do this, but as to what's
required): assigning None is sufficient for that[1].


Yes, but no callable is going to allow you to assign None to names in 
that namespace, either. Not without sys._getframe() hackery, in any case.



However, note that the current language doesn't guarantee such cleanup,
at least as far as I know.

So while it's good practice to support it, to do everything to let it
happen, it's presumably bad practice to rely on it happening.



Tracebacks will keep the namespace alive and all objects in it.


Thanks!, I hadn't thought of connecting that to general cleanup actions.

It limits the use of general with in the same way.


Not really.


Sorry, it limits general 'with' in /exactly/ the same way.



It's easy to write context managers that do that [delete objects from the 
namespace].


Sorry, no can do, as far as I know; your following example quoted below is an 
example of /something else/.


And adding on top of irrelevancy, for the pure technical aspect it can be 
accomplished in the same way using Cleanup (I provide an example below).


However, doing that would generally be worse than pointless since with good 
coding practices the objects would become unreferenced anyway.



You put 
the initialization code in the __enter__() method, assign whatever 
objects you want to keep around through the with: clause as attributes 
on the manager, then delete those attributes in the __exit__().


Analogously, if one were to do this thing, then it could be accomplished using a 
Cleanup context manager as follows:


  foo = lambda: None
  foo.x = create_some_object()
  at_cleanup.call( lambda o = foo: delattr( o, 'x' ) )

... except that

  1) for a once-only case this is less code :-)

  2) it is a usage that I wouldn't recommend; instead I recommend adopting good
 coding practices where object references aren't kept around.


Or, you 
use the @contextmanager decorator to turn a generator into a context 
manager, and you just assign to local variables and del them in the 
finally: clause.


Uhm, you don't need a 'finally' clause when you define a context manager.

Additionally, you don't need to 'del' the local variables in a
@contextmanager-decorated generator.


The local variables cease to exist automatically.
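For the @contextmanager form Robert mentioned, though, it is precisely a try: finally: inside the generator that guarantees the cleanup runs when the with: body raises (a sketch; working_dir is a made-up example):

```python
import os
from contextlib import contextmanager

@contextmanager
def working_dir(path):
    olddir = os.getcwd()
    os.chdir(path)
    try:
        yield             # the with: body runs here
    finally:
        os.chdir(olddir)  # runs whether the body raised or not
```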


What you can't do is write a generic context manager where the 
initialization happens inside the with: clause and the cleanup actions 
are registered callables. That does not allow you to affect the namespace.


If you mean that you can't introduce direct local variables and have them 
deleted by registered callables in a portable way, then right.


But I can't think of any 

Evaluate my first python script, please

2010-03-04 Thread Pete Emerson
I've written my first python program, and would love suggestions for
improvement.

I'm a perl programmer and used a perl version of this program to guide
me. So in that sense, the python is perlesque

This script parses /etc/hosts for hostnames, and based on terms given
on the command line (argv), either prints the list of hostnames that
match all the criteria, or uses ssh to connect to the host if the
number of matches is unique.

I am looking for advice along the lines of an easier way to do this
or a more python way (I'm sure that's asking for trouble!) or
people commonly do this instead or here's a slick trick or oh,
interesting, here's my version to do the same thing.

I am aware that there are performance improvements and error checking
that could be made, such as making sure the file exists and is
readable and precompiling the regular expressions and not calculating
how many sys.argv arguments there are more than once. I'm not hyper
concerned with performance or idiot proofing for this particular
script.

Thanks in advance.


#!/usr/bin/python

import sys, fileinput, re, os

filename = '/etc/hosts'

hosts = []

for line in open(filename, 'r'):
match = re.search('\d+\.\d+\.\d+\.\d+\s+(\S+)', line)
if match is None or re.search('^(?:float|localhost)\.', line):
continue
hostname = match.group(1)
count = 0
for arg in sys.argv[1:]:
for section in hostname.split('.'):
if section == arg:
count = count + 1
break
if count == len(sys.argv) - 1:
hosts.append(hostname)

if len(hosts) == 1:
    os.system('ssh -A ' + hosts[0])
else:
print '\n'.join(hosts)
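One possible more Pythonic arrangement (a sketch, not the one true way: the regexes are hoisted and compiled once, the all() builtin replaces the manual counter, and the matching logic moves into a testable function; matching_hosts and the sample lines are invented for illustration):

```python
import re

HOST_RE = re.compile(r'\d+\.\d+\.\d+\.\d+\s+(\S+)')
SKIP_RE = re.compile(r'^(?:float|localhost)\.')

def matching_hosts(lines, terms):
    """Return hostnames whose dotted sections contain every term."""
    hosts = []
    for line in lines:
        match = HOST_RE.search(line)
        if match is None or SKIP_RE.search(line):
            continue
        hostname = match.group(1)
        sections = hostname.split('.')
        if all(term in sections for term in terms):
            hosts.append(hostname)
    return hosts

sample = ['10.0.0.1 web1.prod.example',
          '10.0.0.2 db1.dev.example']
print(matching_hosts(sample, ['prod']))  # → ['web1.prod.example']
```

The driver then stays as in the original: pass open('/etc/hosts') and sys.argv[1:] to matching_hosts(), ssh if exactly one host matches, otherwise print the list.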
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pyao makes the right sound but why?

2010-03-04 Thread Anssi Saari
'2+ electriclighthe...@gmail.com writes:

 dev = ao.AudioDevice('alsa')
 dev.play(x)

 could launch me a semi realtime dj kinda sys
 luckily .. it does seem to be making the right sound
 but why?
 the default of the samplerate and that 16bit happened to match with my thing 
 x?

Yes, that seems to be the case from help(ao). But I couldn't run 
tofu = soy.Bean() since it seems to want tit01.wav. Do you make that
available somewhere?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Partly erratic wrong behaviour, Python 3, lxml

2010-03-04 Thread Stefan Behnel

Jussi Piitulainen, 04.03.2010 11:46:

I am observing weird semi-erratic behaviour that involves Python 3 and
lxml, is extremely sensitive to changes in the input data, and only
occurs when I name a partial result. I would like some help with this,
please. (Python 3.1.1; GNU/Linux; how do I find lxml version?)


Here's how to find the version:

http://codespeak.net/lxml/FAQ.html#i-think-i-have-found-a-bug-in-lxml-what-should-i-do

I'll give your test code a try when I get to it. However, note that the 
best place to discuss this is the lxml mailing list:


http://codespeak.net/mailman/listinfo/lxml-dev

Stefan

--
http://mail.python.org/mailman/listinfo/python-list


Re: NoSQL Movement?

2010-03-04 Thread Duncan Booth
mk mrk...@gmail.com wrote:

 Duncan Booth wrote:
 
 If you look at some of the uses of bigtable you may begin to
 understand the tradeoffs that are made with sql. When you use
 bigtable you have records with fields, and you have indices, but
 there are limitations on the kinds of queries you can perform: in
 particular you cannot do joins, but more subtly there is no guarantee
 that the index is up to date (so you might miss recent updates or
 even get data back from a query when the data no longer matches the
 query). 
 
 Hmm, I do understand that bigtable is used outside of traditional 
 'enterprisey' contexts, but suppose you did want to do an equivalent
 of join; is it at all practical or even possible?
 
 I guess when you're forced to use denormalized data, you have to 
 simultaneously update equivalent columns across many tables yourself, 
 right? Or is there some machinery to assist in that?

Or you avoid having to do that sort of update at all.

There are many applications which simply wouldn't be applicable to 
bigtable. My point was that to make best use of bigtable you may have to 
make different decisions when designing the software.

 
 By sacrificing some of SQL's power, Google get big benefits: namely 
 updating data is a much more localised option. Instead of an update 
 having to lock the indices while they are updated, updates to
 different records can happen simultaneously possibly on servers on
 the opposite sides of the world. You can have many, many servers all
 using the same data although they may not have identical or
 completely consistent views of that data.
 
 And you still have the global view of the table spread across, say, 2 
 servers, one located in Australia, second in US?
 
More likely spread across a few thousand servers. The data migrates round 
the servers as required. As I understand it records are organised in 
groups: when you create a record you can either make it a root record or 
you can give it a parent record. So for example you might make all the data 
associated with a specific user live with the user record as a parent (or 
ancestor). When you access any of that data then all of it is copied onto a 
server near the application as all records under a common root are always 
stored together.

 Bigtable impacts on how you store the data: for example you need to 
 avoid reducing data to normal form (no joins!), its much better and 
 cheaper just to store all the data you need directly in each record. 
 Also aggregate values need to be at least partly pre-computed and
 stored in the database.
 
 So you basically end up with a few big tables or just one big table
 really? 

One. Did I mention that bigtable doesn't require you to have the same 
columns in every record? The main use of bigtable (outside of Google's 
internal use) is Google App Engine and that apparently uses one table.

Not one table per application, one table total. It's a big table.

 
 Suppose on top of the 'tweets' table you have a 'dweebs' table, and tweets
 and dweebs sometimes do interact. How would you find such interacting
 pairs? Would you say "give me some tweets" to the tweets table, extract
 all the dweeb_id keys from the tweets and then retrieve all dweebs from
 the dweebs table? 

If it is one tweet to many dweebs?

  Dweeb.all().filter('tweet =', tweet.key())
or:
  GqlQuery("SELECT * FROM Dweeb WHERE tweet = :tweet", tweet=tweet)

or just make the tweet the ancestor of all its dweebs.

Columns may be scalars or lists, so if it is some tweets to many dweebs you 
can do basically the same thing.

  Dweeb.all().filter('tweets =', tweet.key())

but if there are too many tweets in the list that could be a problem.

If you want dweebs for several tweets you could select with tweet IN the 
list of tweets, or do a separate query for each (not much difference; as I 
understand it the IN operator just expands into several queries internally 
anyway).
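The IN expansion Duncan describes can be sketched in plain Python. This is a hypothetical stand-in, not App Engine datastore code; the record layout and names are made up for illustration:

```python
# Hypothetical stand-in for a datastore: a flat list of records.
records = [
    {'dweeb': 'd1', 'tweet': 't1'},
    {'dweeb': 'd2', 'tweet': 't2'},
    {'dweeb': 'd3', 'tweet': 't1'},
    {'dweeb': 'd4', 'tweet': 't3'},
]

def query_equal(field, value):
    # Stands in for a single equality query against an index.
    return [r for r in records if r[field] == value]

def query_in(field, values):
    # "field IN values" expands into one equality query per value,
    # with the results concatenated -- much like doing them by hand.
    results = []
    for v in values:
        results.extend(query_equal(field, v))
    return results

matching = query_in('tweet', ['t1', 't3'])
```

Whether you issue the equality queries yourself or let IN do it, the amount of work is roughly the same.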
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: loop over list and process into groups

2010-03-04 Thread nn


lbolla wrote:
 On Mar 4, 3:57 pm, Sneaky Wombat joe.hr...@gmail.com wrote:
  [ {'vlan_or_intf': 'VLAN2021'},
   {'vlan_or_intf': 'Interface'},
   {'vlan_or_intf': 'Po1'},
   {'vlan_or_intf': 'Po306'},
   {'vlan_or_intf': 'VLAN2022'},
   {'vlan_or_intf': 'Interface'},
   {'vlan_or_intf': 'Gi7/33'},
   {'vlan_or_intf': 'Po1'},
   {'vlan_or_intf': 'Po306'},
   {'vlan_or_intf': 'VLAN2051'},
   {'vlan_or_intf': 'Interface'},
   {'vlan_or_intf': 'Gi9/6'},
   {'vlan_or_intf': 'VLAN2052'},
   {'vlan_or_intf': 'Interface'},
   {'vlan_or_intf': 'Gi9/6'},]
 
  I want it to be converted to:
 
  [{'2021':['Po1','Po306']},{'2022':['Gi7/33','Po1','Po306']},etc etc]
 
  I was going to write a def to loop through and look for certain pre-
  compiled regexs, and then put them in a new dictionary and append to a
  list, but I'm having trouble thinking of a good way to capture each
  dictionary.  Each dictionary will have a key that is the vlan and the
  value will be a list of interfaces that participate in that vlan.
  Each list will be variable, many containing only one interface and
  some containing many interfaces.
 
  I thought about using itertools, but i only use that for fixed data.
  I don't know of a good way to loop over variably sized data.  I was
  wondering if anyone had any ideas about a good way to convert this
  list or dictionary into the right format that I need.  The solution I
  come up with will most likely be ugly and error prone, so I thought
  i'd ask this python list while I work.  Hopefully I learn a better way
  to solve this problem.
 
  Thanks!
 
  I also have the data in a list,
 
  [ 'VLAN4065',
   'Interface',
   'Gi9/6',
   'Po2',
   'Po3',
   'Po306',
   'VLAN4068',
   'Interface',
   'Gi9/6',
   'VLAN4069',
   'Interface',
   'Gi9/6',]



 ===

 from itertools import groupby

 data = \
 [ {'vlan_or_intf': 'VLAN2021'},
  {'vlan_or_intf': 'Interface'},
  {'vlan_or_intf': 'Po1'},
  {'vlan_or_intf': 'Po306'},
  {'vlan_or_intf': 'VLAN2022'},
  {'vlan_or_intf': 'Interface'},
  {'vlan_or_intf': 'Gi7/33'},
  {'vlan_or_intf': 'Po1'},
  {'vlan_or_intf': 'Po306'},
  {'vlan_or_intf': 'VLAN2051'},
  {'vlan_or_intf': 'Interface'},
  {'vlan_or_intf': 'Gi9/6'},
  {'vlan_or_intf': 'VLAN2052'},
  {'vlan_or_intf': 'Interface'},
  {'vlan_or_intf': 'Gi9/6'},]

 def clean_up(lst):
   return [d.values()[0] for d in lst if d.values()[0] != 'Interface']

 out = {}
 for k, g in groupby(clean_up(data), key=lambda s:
 s.startswith('VLAN')):
   if k:
       key = list(g)[0].replace('VLAN','')
   else:
       out[key] = list(g)

 print out
 ===

 hth,
 L.

Good use of groupby. Here is what I ended up coming up:

from itertools import groupby
laninfo=[ 'VLAN4065',
 'Interface',
 'Gi9/6',
 'Po2',
 'Po3',
 'Po306',
 'VLAN4068',
 'Interface',
 'Gi9/6',
 'VLAN4069',
 'Interface',
 'Gi9/6',]

def splitgrp(s, f=[False]):
   f[0]^=s.startswith('VLAN')
   return f[0]
lanlst=(list(g) for k,g in groupby(laninfo,key=splitgrp))
out={item[0][4:]:item[2:] for item in lanlst}
print(out)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Generic singleton

2010-03-04 Thread David Bolen
Duncan Booth duncan.bo...@invalid.invalid writes:

 It is also *everywhere* in the Python world. Unlike Java and C++, Python 
 even has its own built-in type for singletons.

 If you want a singleton in Python use a module.

 So the OP's original examples become:

 --- file singleton.py ---
 foo = {}
 bar = []

 --- other.py ---
 from singleton import foo as s1
 from singleton import foo as s2
 from singleton import bar as s3
 from singleton import bar as s4

 ... and then use them as you wish.

In the event you do use a module as a singleton container, I would
advocate sticking with fully qualified names, avoiding the use of
from imports or any other local namespace caching of references.

Other code sharing the module may not update things as expected, e.g.:

import singleton

singleton.foo = {}

at which point you've got two objects around - one in the singleton.py
module namespace, and the s1/s2 referenced object in other.py.

If you're confident of the usage pattern of all the using code, it may
not be critical.  But consistently using singleton.foo (or an import
alias like s.foo) is a bit more robust, sticking with only one
namespace to reach the singleton.
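The failure mode described above can be demonstrated without creating a real singleton.py; building the module at runtime with types.ModuleType is an assumption made only to keep the sketch self-contained, and the effect is the same with a real module:

```python
import types

# Stand-in for "singleton.py" with a module-level dict.
singleton = types.ModuleType('singleton')
singleton.foo = {}

s1 = singleton.foo     # like "from singleton import foo as s1"

singleton.foo = {}     # other code rebinds the module attribute...

# ...and the local alias still points at the *original* object:
assert s1 is not singleton.foo
singleton.foo['x'] = 1
assert 'x' not in s1   # updates no longer reach the alias
```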

-- David


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Evaluate my first python script, please

2010-03-04 Thread MRAB

Pete Emerson wrote:

I've written my first python program, and would love suggestions for
improvement.

I'm a perl programmer and used a perl version of this program to guide
me. So in that sense, the Python is perlesque.

This script parses /etc/hosts for hostnames, and based on terms given
on the command line (argv), either prints the list of hostnames that
match all the criteria, or uses ssh to connect to the host if the
number of matches is unique.

I am looking for advice along the lines of "an easier way to do this"
or "a more Python way" (I'm sure that's asking for trouble!) or "people
commonly do this instead" or "here's a slick trick" or "oh,
interesting, here's my version to do the same thing".

I am aware that there are performance improvements and error checking
that could be made, such as making sure the file exists and is
readable and precompiling the regular expressions and not calculating
how many sys.argv arguments there are more than once. I'm not hyper
concerned with performance or idiot proofing for this particular
script.

Thanks in advance.


#!/usr/bin/python

import sys, fileinput, re, os

filename = '/etc/hosts'

hosts = []

for line in open(filename, 'r'):
match = re.search('\d+\.\d+\.\d+\.\d+\s+(\S+)', line)


Use 'raw' strings for regular expressions.

'Normal' Python string literals use backslashes in escape sequences (eg,
'\n'), but so do regular expressions, and regular expressions are passed
to the 're' module in strings, so that can lead to a profusion of
backslashes!

You could either escape the backslashes:

match = re.search('\\d+\\.\\d+\\.\\d+\\.\\d+\\s+(\\S+)', line)

or use a raw string:

match = re.search(r'\d+\.\d+\.\d+\.\d+\s+(\S+)', line)

A raw string literal is just like a normal string literal, except that
backslashes aren't special.


if match is None or re.search('^(?:float|localhost)\.', line):
continue


It would be more 'Pythonic' to say "not match".

if not match or re.search(r'^(?:float|localhost)\.', line):


hostname = match.group(1)
count = 0
for arg in sys.argv[1:]:
for section in hostname.split('.'):
if section == arg:
count = count + 1


Shorter is:

count += 1


break
if count == len(sys.argv) - 1:
hosts.append(hostname)

if len(hosts) == 1:
os.system("ssh -A " + hosts[0])
else:
print '\n'.join(hosts)


You're splitting the hostname repeatedly. You could split it just once,
outside the outer loop, and maybe make it into a set. You could then
have:

host_parts = set(hostname.split('.'))
count = 0
for arg in sys.argv[1:]:
if arg in host_parts:
count += 1

Incidentally, the re module contains a cache, so the regular expressions
won't be recompiled every time (unless you have a lot of them and the re
module flushes the cache).
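Even with the re cache, explicitly precompiling makes the intent clearer; a minimal sketch, with a made-up /etc/hosts line standing in for the real file:

```python
import re

# Precompiled patterns from the script above.
HOST_RE = re.compile(r'\d+\.\d+\.\d+\.\d+\s+(\S+)')
SKIP_RE = re.compile(r'^(?:float|localhost)\.')

line = '10.0.0.1    web1.example.com'   # made-up hosts entry
m = HOST_RE.search(line)
assert m is not None and m.group(1) == 'web1.example.com'
assert SKIP_RE.search(line) is None     # not a float/localhost line
```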
--
http://mail.python.org/mailman/listinfo/python-list


Re: Evaluate my first python script, please

2010-03-04 Thread nn
On Mar 4, 2:30 pm, MRAB pyt...@mrabarnett.plus.com wrote:
 Pete Emerson wrote:
  I've written my first python program, and would love suggestions for
  improvement.

  I'm a perl programmer and used a perl version of this program to guide
  me. So in that sense, the python is perlesque

  This script parses /etc/hosts for hostnames, and based on terms given
  on the command line (argv), either prints the list of hostnames that
  match all the criteria, or uses ssh to connect to the host if the
  number of matches is unique.

  I am looking for advice along the lines of an easier way to do this
  or a more python way (I'm sure that's asking for trouble!) or
  people commonly do this instead or here's a slick trick or oh,
  interesting, here's my version to do the same thing.

  I am aware that there are performance improvements and error checking
  that could be made, such as making sure the file exists and is
  readable and precompiling the regular expressions and not calculating
  how many sys.argv arguments there are more than once. I'm not hyper
  concerned with performance or idiot proofing for this particular
  script.

  Thanks in advance.

  
  #!/usr/bin/python

  import sys, fileinput, re, os

  filename = '/etc/hosts'

  hosts = []

  for line in open(filename, 'r'):
     match = re.search('\d+\.\d+\.\d+\.\d+\s+(\S+)', line)

 Use 'raw' strings for regular expressions.

 'Normal' Python string literals use backslashes in escape sequences (eg,
 '\n'), but so do regular expressions, and regular expressions are passed
 to the 're' module in strings, so that can lead to a profusion of
 backslashes!

 You could either escape the backslashes:

          match = re.search('\\d+\\.\\d+\\.\\d+\\.\\d+\\s+(\\S+)', line)

 or use a raw string:

         match = re.search(r'\d+\.\d+\.\d+\.\d+\s+(\S+)', line)

 A raw string literal is just like a normal string literal, except that
 backslashes aren't special.

     if match is None or re.search('^(?:float|localhost)\.', line):
  continue

 It would be more 'Pythonic' to say not match.

         if not match or re.search(r'^(?:float|localhost)\.', line):

     hostname = match.group(1)
     count = 0
     for arg in sys.argv[1:]:
             for section in hostname.split('.'):
                     if section == arg:
                             count = count + 1

 Shorter is:

                                 count += 1

                             break
     if count == len(sys.argv) - 1:
             hosts.append(hostname)

  if len(hosts) == 1:
     os.system("ssh -A " + hosts[0])
  else:
     print '\n'.join(hosts)

 You're splitting the hostname repeatedly. You could split it just once,
 outside the outer loop, and maybe make it into a set. You could then
 have:

          host_parts = set(hostname.split('.'))
         count = 0
         for arg in sys.argv[1:]:
                 if arg in host_parts:
                         count += 1

 Incidentally, the re module contains a cache, so the regular expressions
 won't be recompiled every time (unless you have a lot of them and the re
 module flushes the cache).

I think that you really just need a set intersection for the count
part and this should work:

host_parts = set(hostname.split('.'))
count = len(host_parts.intersection(sys.argv[1:]))
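nn's suggestion in runnable form, with a made-up hostname and argument list standing in for the /etc/hosts data and sys.argv:

```python
hostname = 'web1.prod.example.com'      # made-up hostname
args = ['prod', 'web1']                 # stands in for sys.argv[1:]

# One set intersection replaces the nested counting loops.
host_parts = set(hostname.split('.'))
count = len(host_parts.intersection(args))

# The host matches when every command-line term names one of its sections.
matches_all = (count == len(args))
```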

-- 
http://mail.python.org/mailman/listinfo/python-list


ANNOUNCE: Exscript 2.0

2010-03-04 Thread knipknap
Introduction
------------
Exscript is a Python module and template processor for automating
Telnet or SSH sessions. Exscript supports a wide range of features,
such as parallelization, AAA authentication methods, TACACS, and a
very simple template language. Please refer to the project page for
updated documentation (see the links at the bottom of this
announcement).

New since 0.9.16
----------------
Exscript 2.0 is without a doubt the most awesome release of Exscript
ever. It is well tested and has already proven to be robust, so use in
a production environment is encouraged.

Changes in this release:

* Exscript's Python API has been completely overhauled and made a lot
simpler. The new API is well documented [1][2].
* Countless utilities for network administrators using Python were
added. For example, it is now trivial to generate reports and emails,
logging is more configurable, new functions for reading and writing
input and output files were added, and user interaction was
simplified.
* The number of dependencies was reduced to make the installation
easier.
* Exscript is now almost 100% unit tested. That means greatly enhanced
stability and reliability.
* Support for threading was improved, leading to better performance
when using a large number of parallel connections.
* The SSH support was greatly improved.
* Exscript now includes some useful Tkinter widgets for monitoring the
Exscript Queue.
* Support for Pseudo devices: It is now possible to emulate a fake
device that behaves like a router. This can come in handy for testing.
* Of course, a huge number of bugs was also fixed. I expect this
release to be the most stable one to date. It was also well tested in
production and has handled hundreds of thousands of Telnet connections
in the past few months.

Dependencies
------------
* Python 2.2 or greater
* Python-crypto
* paramiko (optional, for SSH2 support)
* Python-pexpect (optional, for SSH1 support)
* OpenSSH (optional, for SSH1 support)

Download Exscript
-----------------
Release: http://github.com/knipknap/exscript/downloads
Git: http://code.google.com/p/exscript/source

Links
-----
[1] Exscript Documentation: 
http://wiki.github.com/knipknap/exscript/documentation
[2] Exscript Python API: http://knipknap.github.com/exscript/api/
[3] Installation Guide: 
http://wiki.github.com/knipknap/exscript/installation-guide
[4] Exscript home page: http://wiki.github.com/knipknap/exscript/
[5] Mailing list: http://groups.google.com/group/exscript
[6] Bug tracker: http://github.com/knipknap/exscript/issues
[7] Browse the source: http://github.com/knipknap/exscript
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: A scopeguard for Python

2010-03-04 Thread Mike Kent
On Mar 3, 10:56 am, Alf P. Steinbach al...@start.no wrote:
 * Mike Kent:

  What's the compelling use case for this vs. a simple try/finally?

 If you thought about it you would mean a simple try/else. finally is
 always executed, which is incorrect for cleanup.

 by the way, that's one advantage:

 a "with Cleanup" is difficult to get wrong, while a "try" is easy to get
 wrong, as you did here

    ---

 another general advantage is as for the 'with' statement generally

     original_dir = os.getcwd()
     try:
         os.chdir(somewhere)
         # Do other stuff

 also, the do other stuff can be a lot of code

 and also, with more than one action the try-else introduces a lot of nesting

     finally:
         os.chdir(original_dir)
         # Do other cleanup

 cheers  hth.,

 - alf

Wrong?  In what way is my example wrong?  It cleanly makes sure that
the current working directory is the same after the try/finally as it
was before it.  Suboptimal, perhaps, in that the chdir in the finally
part is always executed, even if the chdir in the try part failed to
change the working directory.

That is a clear advantage to the code you presented, in that you have
the ability to register an 'undo' function only if the 'do' code
succeeded.  Your code also avoids a problem with many nested try/
finally blocks.  But for the simple chdir example you gave, I think
'wrong' isn't the word you were looking for regarding the try/finally
example I gave.

Anyway, I'll keep your code in mind the next time I want to avoid a
bunch of nested try/finally blocks.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Pylint Argument number differs from overridden method

2010-03-04 Thread Wanderer
On Mar 4, 5:45 am, Jean-Michel Pichavant jeanmic...@sequans.com
wrote:
 Wanderer wrote:
  On Mar 3, 2:33 pm, Robert Kern robert.k...@gmail.com wrote:

  On 2010-03-03 11:39 AM, Wanderer wrote:

  Pylint W0221 gives the warning
  Argument number differs from overridden method.

  Why is this a problem? I'm overriding the method to add additional
  functionality.

  There are exceptions to every guideline. Doing this could easily be a 
  mistake,
  so it's one of the many things that Pylint checks for. Silence the warning 
  if
  you like.

  --
  Robert Kern

  I have come to believe that the whole world is an enigma, a harmless 
  enigma
    that is made terrible by our own mad attempt to interpret it as though 
  it had
    an underlying truth.
     -- Umberto Eco

  Thanks, I was just wondering if I was overlooking something about
  inheritance.

 This is only my opinion but you get this warning because of 2 distinct
 issues:
 1/ you just made a basic mistake in your signature and need to correct it
 2/ you did not make any mistake in the signature, but this warning may
 reveal a (small) flaw in your class design.

 I don't know the exact context for your code, but it's better to have a
 consistent interface over your methods and mask the implementation
 details from the user.
 In your case, the getRays method may always ask for the lambda
 parameter and just ignore it for one of its implementations.

 And don't write empty docstrings to trick pylint. Either write them, or
 remove this rule; you are losing all the tool's benefits.

 JM

Okay, different example. I'm not really using inheritance here but it's
the same idea.
The wx.SpinCtrl is annoyingly integer. I want decimal values, so I
created dSpinCtrl.
It adds the argument 'places'. It's a drop-in replacement for SpinCtrl.
(Okay, it's not, because I didn't create all the members.) If you
replace SpinCtrl with dSpinCtrl the program should work the same,
because the new argument 'places' has a default value. I don't see a
problem with more arguments. Fewer arguments would be a problem.
Expanding functionality from a base class is what I thought
inheritance is about.

class dSpinCtrl():
    """Adds decimal values to SpinCtrl. Almost a drop-in replacement
    for SpinCtrl.
    Adds 'places' variable for number of places after decimal point.
    """

    def __init__(self,
                 parent,
                 iD = ID_SPIN,
                 value = wx.EmptyString,
                 pos = wx.DefaultPosition,
                 size = wx.DefaultSize,
                 style = wx.SP_ARROW_KEYS,
                 dmin = -100.0,
                 dmax = 100.0,
                 dinitial = 0.0,
                 places = 0):


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: loop over list and process into groups

2010-03-04 Thread Chris Colbert
Man, deja-vu, I could have sworn I read this thread months ago...

On Thu, Mar 4, 2010 at 2:18 PM, nn prueba...@latinmail.com wrote:



 lbolla wrote:
  On Mar 4, 3:57 pm, Sneaky Wombat joe.hr...@gmail.com wrote:
   [ {'vlan_or_intf': 'VLAN2021'},
{'vlan_or_intf': 'Interface'},
{'vlan_or_intf': 'Po1'},
{'vlan_or_intf': 'Po306'},
{'vlan_or_intf': 'VLAN2022'},
{'vlan_or_intf': 'Interface'},
{'vlan_or_intf': 'Gi7/33'},
{'vlan_or_intf': 'Po1'},
{'vlan_or_intf': 'Po306'},
{'vlan_or_intf': 'VLAN2051'},
{'vlan_or_intf': 'Interface'},
{'vlan_or_intf': 'Gi9/6'},
{'vlan_or_intf': 'VLAN2052'},
{'vlan_or_intf': 'Interface'},
{'vlan_or_intf': 'Gi9/6'},]
  
   I want it to be converted to:
  
   [{'2021':['Po1','Po306']},{'2022':['Gi7/33','Po1','Po306']},etc etc]
  
   I was going to write a def to loop through and look for certain pre-
   compiled regexs, and then put them in a new dictionary and append to a
   list, but I'm having trouble thinking of a good way to capture each
   dictionary.  Each dictionary will have a key that is the vlan and the
   value will be a list of interfaces that participate in that vlan.
   Each list will be variable, many containing only one interface and
   some containing many interfaces.
  
   I thought about using itertools, but i only use that for fixed data.
   I don't know of a good way to loop over variably sized data.  I was
   wondering if anyone had any ideas about a good way to convert this
   list or dictionary into the right format that I need.  The solution I
   come up with will most likely be ugly and error prone, so I thought
   i'd ask this python list while I work.  Hopefully I learn a better way
   to solve this problem.
  
   Thanks!
  
   I also have the data in a list,
  
   [ 'VLAN4065',
'Interface',
'Gi9/6',
'Po2',
'Po3',
'Po306',
'VLAN4068',
'Interface',
'Gi9/6',
'VLAN4069',
'Interface',
'Gi9/6',]
 
 
 
  ===
 
  from itertools import groupby
 
  data = \
  [ {'vlan_or_intf': 'VLAN2021'},
   {'vlan_or_intf': 'Interface'},
   {'vlan_or_intf': 'Po1'},
   {'vlan_or_intf': 'Po306'},
   {'vlan_or_intf': 'VLAN2022'},
   {'vlan_or_intf': 'Interface'},
   {'vlan_or_intf': 'Gi7/33'},
   {'vlan_or_intf': 'Po1'},
   {'vlan_or_intf': 'Po306'},
   {'vlan_or_intf': 'VLAN2051'},
   {'vlan_or_intf': 'Interface'},
   {'vlan_or_intf': 'Gi9/6'},
   {'vlan_or_intf': 'VLAN2052'},
   {'vlan_or_intf': 'Interface'},
   {'vlan_or_intf': 'Gi9/6'},]
 
  def clean_up(lst):
return [d.values()[0] for d in lst if d.values()[0] !=
 'Interface']
 
  out = {}
  for k, g in groupby(clean_up(data) , key=lambda s:
  s.startswith('VLAN')):
if k:
    key = list(g)[0].replace('VLAN','')
else:
    out[key] = list(g)
 
  print out
  ===
 
  hth,
  L.

 Good use of groupby. Here is what I ended up coming up:

 from itertools import groupby
 laninfo=[ 'VLAN4065',
  'Interface',
  'Gi9/6',
  'Po2',
  'Po3',
  'Po306',
  'VLAN4068',
  'Interface',
  'Gi9/6',
  'VLAN4069',
  'Interface',
  'Gi9/6',]

 def splitgrp(s, f=[False]):
   f[0]^=s.startswith('VLAN')
   return f[0]
 lanlst=(list(g) for k,g in groupby(laninfo,key=splitgrp))
 out={item[0][4:]:item[2:] for item in lanlst}
 print(out)
 --
 http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: A scopeguard for Python

2010-03-04 Thread Mike Kent
On Mar 4, 12:30 pm, Robert Kern robert.k...@gmail.com wrote:

 He's ignorant of the use cases of the with: statement, true.

<humor> Ouch!  Ignorant of the use cases of the with statement, am I?
Odd, I use it all the time. </humor>

 Given only your
 example of the with: statement, it is hard to fault him for thinking that try:
 finally: wouldn't suffice.

<humor> Damn me with faint praise, will you? </humor>

I'm kinda amazed at the drama my innocent request for the use case
elicited.  From what I've gotten so far from this thread, for the
actual example Mr. Steinbach used, the only disadvantage to my counter-
example using try/finally is that the chdir in the finally part will
always be executed, even if the chdir in the try part did not
succeed.  I concede that, and was aware of it when I wrote it.  For
the simple example given, I did not consider it compelling.  A more
complex example, that would have required multiple, nested try/finally
blocks, would show the advantages of Mr Steinbach's recipe more
clearly.

However, I fail to understand his response that I must have meant try/
else instead, as this, as Mr. Kern pointed out, is invalid syntax.
Perhaps Mr. Steinbach would like to give an example?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Method / Functions - What are the differences?

2010-03-04 Thread John Posner

On 3/3/2010 6:56 PM, John Posner wrote:


... I was thinking
today about doing a Bruno, and producing similar pieces on:

* properties created with the @property decorator

* the descriptor protocol

I'll try to produce something over the next couple of days.



Starting to think about a writeup on Python properties, I've discovered 
that the official Glossary [1] lacks an entry for property -- it's 
missing in both Py2 and Py3!


Here's a somewhat long-winded definition -- comments, please:

-
An attribute, *a*, of an object, *obj*, is said to be implemented as a 
property if the standard ways of accessing the attribute:


 * evaluation:  print obj.a
 * assignment:  obj.a = 42
 * deletion:del obj.a

... cause methods of a user-defined *property object* to be invoked. The 
attribute is created as a class attribute, not an instance attribute. 
Example:


  class Widget:
  # create color as class attribute, not within __init__()
  color = property-object

  def __init__(self, ...):
  # do not define self.color instance attribute

The property object can be created with the built-in function 
property(), which in some cases can be coded as a decorator: @property. 
The property object can also be an instance of a class that implements 
the descriptor protocol.

-
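For concreteness, a minimal runnable sketch of the definition above, using the @property decorator. The Widget and color names follow the example; the underscore-prefixed backing attribute is just one common implementation choice, not part of the definition:

```python
class Widget(object):
    def __init__(self):
        self._color = 'red'       # backing storage, not the property itself

    # color is a class attribute: a property object whose methods run on
    # evaluation, assignment and deletion of obj.color.
    @property
    def color(self):              # print obj.color
        return self._color

    @color.setter
    def color(self, value):       # obj.color = 42
        self._color = value

    @color.deleter
    def color(self):              # del obj.color
        del self._color

w = Widget()
assert w.color == 'red'
w.color = 'blue'
assert w.color == 'blue'
```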

Tx,
John

[1]  http://docs.python.org/glossary.html
--
http://mail.python.org/mailman/listinfo/python-list


Re: A scopeguard for Python

2010-03-04 Thread Mike Kent
On Mar 3, 12:00 pm, Robert Kern robert.k...@gmail.com wrote:
 On 2010-03-03 09:39 AM, Mike Kent wrote:

  What's the compelling use case for this vs. a simple try/finally?

      original_dir = os.getcwd()
      try:
          os.chdir(somewhere)
          # Do other stuff
      finally:
          os.chdir(original_dir)
          # Do other cleanup

 A custom-written context manager looks nicer and can be more readable.

 from contextlib import contextmanager
 import os

 @contextmanager
 def pushd(path):
      original_dir = os.getcwd()
      os.chdir(path)
      try:
          yield
      finally:
          os.chdir(original_dir)

 with pushd(somewhere):
      ...

Robert, I like the way you think.  That's a perfect name for that
context manager!  However, you can clear one thing up for me... isn't
the inner try/finally superfluous?  My understanding was that there
was an implicit try/finally already done which will ensure that
everything after the yield statement is always executed.
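A quick experiment suggests the inner try/finally is not superfluous: contextlib throws the with-body's exception into the generator at the yield point, so a cleanup step after a bare yield is skipped when the body raises. A sketch with a deliberately unguarded manager:

```python
from contextlib import contextmanager

log = []

@contextmanager
def no_guard():
    log.append('enter')
    yield                   # a with-body exception is raised right here...
    log.append('cleanup')   # ...so this line is skipped on failure

try:
    with no_guard():
        raise ValueError('boom')
except ValueError:
    pass

assert log == ['enter']     # cleanup never ran without try/finally
```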
-- 
http://mail.python.org/mailman/listinfo/python-list


_winreg and access registry settings of another user

2010-03-04 Thread News123
Hi,

I have administrator privileges on a Windows host and would like to write
a script setting some registry entries for other users.




There are potentially at least two ways of doing this:

1.) start a subprocess as the other user and change the registry for
CURRENT_USER

However I don't know how to start a process (or ideally just a thread)
as another user with python.


2.) Load the 'hive' of the other user and change the registry.

It seems that one has to load the 'hive' of a different user in order
to have access to somebody else's registry entries.

I did not find any documentation on how to load a 'hive' with the
_winreg library or another Python library.


Did anybody else already try something similar?


thanks in advance for pointers



bye


N



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Partly erratic wrong behaviour, Python 3, lxml

2010-03-04 Thread Jussi Piitulainen
Stefan Behnel writes:
 Jussi Piitulainen, 04.03.2010 11:46:
  I am observing weird semi-erratic behaviour that involves Python 3
  and lxml, is extremely sensitive to changes in the input data, and
  only occurs when I name a partial result. I would like some help
  with this, please. (Python 3.1.1; GNU/Linux; how do I find lxml
  version?)
 
 Here's how to find the version:
 
 http://codespeak.net/lxml/FAQ.html#i-think-i-have-found-a-bug-in-lxml-what-should-i-do

Ok, thank you. Here are the results:

>>> print(et.LXML_VERSION, et.LIBXML_VERSION,
...       et.LIBXML_COMPILED_VERSION, et.LIBXSLT_VERSION,
...       et.LIBXSLT_COMPILED_VERSION)
(2, 2, 4, 0) (2, 6, 26) (2, 6, 26) (1, 1, 17) (1, 1, 17)

 I'll give your test code a try when I get to it. However, note that
 the best place to discuss this is the lxml mailing list:
 
 http://codespeak.net/mailman/listinfo/lxml-dev

Thank you. Two things, however. First, I snipped out most of the XML
document in that post, so it won't be runnable as is. As I think I
said, my attempts to edit it down to size made the bug hide
itself. Second, it's very sensitive to any changes in that XML.

Oh, and a third thing. I'm not at all sure yet that the bug is in
lxml. It seems to me that Python itself does impossible things - I
hope I'm just being blind to something obvious, really.

But if you like to try it out, I'll post the full test data as a
followup to this. It's just bogus test data.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python SUDS library

2010-03-04 Thread Diez B. Roggisch

On 04.03.10 06:23, yamamoto wrote:

Hi,
I tried to make a simple script with the SUDS library for caching torrent
files. It doesn't work!

[versions]
suds: 0.4, python: 2.6.4

[code]
from suds.client import Client
import base64

path = 'sample.torrent'
doc = open(path, 'rb').read()
encoded_doc = base64.b64encode(doc)
url = 'http://torrage.com/api/torrage.wsdl'
client = Client(url, cache=None)
print client
hash = client.service.cacheTorrent(encoded_doc)
print hash

[result]

Suds ( https://fedorahosted.org/suds/ )  version: 0.4 (beta)  build:
R663-20100303

Service ( CacheService ) tns=urn:Torrage
Prefixes (0)
Ports (1):
   (CachePort)
  Methods (1):
 cacheTorrent(xs:string torrent, )
  Types (0):

Traceback (most recent call last):
   File "C:\Documents and Settings\yamamoto\workspace\python\console
\extract.py", line 13, in <module>
 result = client.service.cacheTorrent(encoded_doc)
   File "C:\Python26\lib\site-packages\suds\client.py", line 539, in
__call__
 return client.invoke(args, kwargs)
   File "C:\Python26\lib\site-packages\suds\client.py", line 598, in
invoke
 result = self.send(msg)
   File "C:\Python26\lib\site-packages\suds\client.py", line 627, in
send
 result = self.succeeded(binding, reply.message)
   File "C:\Python26\lib\site-packages\suds\client.py", line 659, in
succeeded
 r, p = binding.get_reply(self.method, reply)
   File "C:\Python26\lib\site-packages\suds\bindings\binding.py", line
143, in get_reply
 replyroot = sax.parse(string=reply)
   File "C:\Python26\lib\site-packages\suds\sax\parser.py", line 136,
in parse
 sax.parse(source)
   File "C:\Python26\lib\xml\sax\expatreader.py", line 107, in parse
 xmlreader.IncrementalParser.parse(self, source)
   File "C:\Python26\lib\xml\sax\xmlreader.py", line 123, in parse
 self.feed(buffer)
   File "C:\Python26\lib\xml\sax\expatreader.py", line 211, in feed
 self._err_handler.fatalError(exc)
   File "C:\Python26\lib\xml\sax\handler.py", line 38, in fatalError
 raise exception
xml.sax._exceptions.SAXParseException: <unknown>:2:0: junk after
document element


this is a php code provided by torrage
<?php
 $client = new SoapClient("http://torrage.com/api/torrage.wsdl");
 $infoHash = $client->cacheTorrent(base64_encode(file_get_contents("my.torrent")));
?>

any suggestions?


Looks like the XML is malformed. Can you post it? You can insert 
print-statements into the library to do so, eg in line 143 of 
suds.bindings.binding.


Diez

--
http://mail.python.org/mailman/listinfo/python-list


Re: Partly erratic wrong behaviour, Python 3, lxml

2010-03-04 Thread Jussi Piitulainen
This is the full data file on which my regress/tribug exhibits the
behaviour that I find incomprehensible, described in the first post in
this thread. The comment at the beginning of the file below was
written before I commented out some of the records in the data, so the
actual numbers are no longer ten expected and thirty sometimes
observed, but the wrong number is always the correct number tripled
(5 and 15, I think).
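As a sanity check outside lxml, the same kind of count can be taken with the stdlib's ElementTree, where repeated parses of the same bytes always yield the same number of elements. A hedged sketch on a toy two-title document (not the data below; the names are illustrative):

```python
import xml.etree.ElementTree as ET

# Toy stand-in for the OAI-PMH data: two dc:title elements.
DC = 'http://purl.org/dc/elements/1.1'
toy = (b"<root xmlns:dc='http://purl.org/dc/elements/1.1'>"
       b"<dc:title>a</dc:title><dc:title>b</dc:title></root>")

# Parsing the same bytes twice must give the same count both times.
first = ET.fromstring(toy).findall('.//{%s}title' % DC)
second = ET.fromstring(toy).findall('.//{%s}title' % DC)
print(len(first), len(second))  # 2 2
```

If lxml sometimes returns each element in triplicate from identical input, that points at shared parser state or a reused document object rather than at the data itself.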

---regress/tridata.py follows---
# Exercise lxml.etree.parse(body).xpath('title')
# which I think should always return a list of
# ten elements but sometimes returns thirty,
# with each of the ten in triplicate. And this
# seems impossible to me. Yet I see it happening.

body = b'''<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ 
http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd">
<responseDate>2010-03-02T09:38:47Z</responseDate>
<request verb="ListRecords" from="2004-01-01T00:00:00Z" 
until="2004-12-31T23:59:59Z" 
metadataPrefix="oai_dc">http://localhost/pmh/que/request</request>
<ListRecords>
<record>
<header><!-- x --><!-- -->
   <identifier>jrc32003R0055-pl.xml/2/0</identifier>
   <datestamp>2004-08-15T19:45:00Z</datestamp>
   <setSpec>pl</setSpec>
</header>
<metadata>
   <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" 
xmlns:dc="http://purl.org/dc/elements/1.1" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc
http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:title>Rozporz&#261;dzenie</dc:title>
   </oai_dc:dc>
</metadata>
</record>
<!-- <record>
<header>
   <identifier>jrc32003R0055-pl.xml/2/1</identifier>
   <datestamp>2004-08-15T19:45:00Z</datestamp>
   <setSpec>pl</setSpec>
</header>
<metadata>
   <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" 
xmlns:dc="http://purl.org/dc/elements/1.1" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc
http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:title>Komisji</dc:title>
   </oai_dc:dc>
</metadata>
</record>
<record>
<header>
   <identifier>jrc32003R0055-pl.xml/2/2</identifier>
   <datestamp>2004-08-15T19:45:00Z</datestamp>
   <setSpec>pl</setSpec>
</header>
<metadata>
   <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" 
xmlns:dc="http://purl.org/dc/elements/1.1" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc
http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:title>(WE)</dc:title>
   </oai_dc:dc>
</metadata>
</record>
<record>
<header>
   <identifier>jrc32003R0055-pl.xml/2/3</identifier>
   <datestamp>2004-08-15T19:45:00Z</datestamp>
   <setSpec>pl</setSpec>
</header>
<metadata>
   <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" 
xmlns:dc="http://purl.org/dc/elements/1.1" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc
http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:title>nr</dc:title>
   </oai_dc:dc>
</metadata>
</record>
<record>
<header>
   <identifier>jrc32003R0055-pl.xml/2/4</identifier>
   <datestamp>2004-08-15T19:45:00Z</datestamp>
   <setSpec>pl</setSpec>
</header>
<metadata>
   <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" 
xmlns:dc="http://purl.org/dc/elements/1.1" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc
http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:title>55/2003</dc:title>
   </oai_dc:dc>
</metadata>
</record>
<record>
<header>
   <identifier>jrc32003R0055-pl.xml/3/0</identifier>
   <datestamp>2004-08-15T19:45:00Z</datestamp>
   <setSpec>pl</setSpec>
</header>
<metadata>
   <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" 
xmlns:dc="http://purl.org/dc/elements/1.1" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc
http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:title>z</dc:title>
   </oai_dc:dc>
</metadata>
</record> -->
<record>
<header>
   <identifier>jrc32003R0055-pl.xml/3/1</identifier>
   <datestamp>2004-08-15T19:45:00Z</datestamp>
   <setSpec>pl</setSpec>
</header>
<metadata>
   <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" 
xmlns:dc="http://purl.org/dc/elements/1.1" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc
http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:title>dnia</dc:title>
   </oai_dc:dc>
</metadata>
</record>
<record>
<header>
   <identifier>jrc32003R0055-pl.xml/3/2</identifier>
   <datestamp>2004-08-15T19:45:00Z</datestamp>
   <setSpec>pl</setSpec>
</header>
<metadata>
   <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" 
xmlns:dc="http://purl.org/dc/elements/1.1" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc

Re: Evaluate my first python script, please

2010-03-04 Thread Jonathan Gardner
On Thu, Mar 4, 2010 at 10:39 AM, Pete Emerson pemer...@gmail.com wrote:

 #!/usr/bin/python


More common:

#!/usr/bin/env python

 import sys, fileinput, re, os

 filename = '/etc/hosts'

 hosts = []

 for line in open(filename, 'r'):
        match = re.search('\d+\.\d+\.\d+\.\d+\s+(\S+)', line)
        if match is None or re.search('^(?:float|localhost)\.', line):
 continue
        hostname = match.group(1)

In Python, we tend to use REs only as a last resort; the built-in
string methods (see dir(str) or help(str)) cover most simple parsing.

The above is better represented with:

try:
    ipaddr, hostnames = line.split(None, 1)
except ValueError:  # blank line, or only one field
    continue

if line.find('float') >= 0 or line.find('localhost') >= 0:
    continue
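Put together, a split-based reading of the hosts file might look like this (a hedged sketch; parse_hosts and the sample lines are illustrative, not the OP's script):

```python
def parse_hosts(lines):
    """Return (ip, first-hostname) pairs, skipping malformed or filtered lines."""
    pairs = []
    for line in lines:
        if 'float' in line or 'localhost' in line:
            continue                    # same filter as in the script above
        try:
            ipaddr, rest = line.split(None, 1)
        except ValueError:              # blank line, or only one field
            continue
        pairs.append((ipaddr, rest.split()[0]))
    return pairs

sample = ['10.0.0.5 web1.example.com web1', '', '127.0.0.1 localhost']
print(parse_hosts(sample))  # [('10.0.0.5', 'web1.example.com')]
```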


        count = 0
        for arg in sys.argv[1:]:
                for section in hostname.split('.'):
                        if section == arg:
                                count = count + 1
                                break
        if count == len(sys.argv) - 1:
                hosts.append(hostname)


You can use the "else" clause of a "for" loop as well; it runs only
when the loop finishes without hitting "break":

for arg in sys.argv[1:]:
    for section in hostname.split('.'):
        if section == arg:
            break
    else:  # no section matched this argument
        break
else:  # every argument matched some section
    hosts.append(hostname)


It may be clearer to do set arithmetic as well.
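Spelled out, the whole counting loop collapses to a subset test (a hedged sketch; `matches` is an illustrative name, not part of the OP's script):

```python
def matches(hostname, args):
    # True when every argument equals some dot-separated section of the hostname.
    return set(args) <= set(hostname.split('.'))

print(matches('db1.sfo.example.com', ['sfo', 'example']))  # True
print(matches('db1.sfo.example.com', ['nyc']))             # False
```

The subset operator `<=` on sets expresses "all arguments appear among the sections" directly, with no counter to maintain.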

 if len(hosts) == 1:
        os.system("ssh -A " + hosts[0])

You probably want one of the os.exec* methods instead, since you
aren't going to do anything after this.
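The os.exec* functions replace the current process image, so on success control never returns. In the script this would be os.execvp('ssh', ['ssh', '-A', hosts[0]]). A safe way to see the behaviour, using echo as a stand-in for ssh in a throwaway child process (POSIX assumed; execvp in this interpreter would replace it):

```python
import subprocess
import sys

# The child process execs echo in place of Python; the statement after
# execvp never runs because the process image has been replaced.
child = "import os; os.execvp('echo', ['echo', 'replaced']); print('unreachable')"
out = subprocess.check_output([sys.executable, '-c', child])
print(out.decode().strip())  # replaced
```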

 else:
        print '\n'.join(hosts)

Rather, the idiom is usually:

for host in hosts:
  print host

-- 
Jonathan Gardner
jgard...@jonathangardner.net

