ANN: pydf 9

2010-04-07 Thread garabik-news-2005-05
pydf displays the amount of used and available space on your
filesystems, just like df, but in colours. The output format is
completely customizable.

Pydf was written on and works on Linux, but it should also work on other
modern UNIX systems (including Mac OS X).

URL:
http://kassiopeia.juls.savba.sk/~garabik/software/pydf/

License:
public domain

Changes since the last version:

 * remove stray ANSI escape sequence when using --bw mode
 * convert to run with Python 3 (thanks to Dror Levin), while Python 2
   remains supported.

-- 
 ---
| Radovan Garabík http://kassiopeia.juls.savba.sk/~garabik/ |
| __..--^^^--..__garabik @ kassiopeia.juls.savba.sk |
 ---
Antivirus alert: file .signature infected by signature virus.
Hi! I'm a signature virus! Copy me into your signature file to help me spread!
-- 
http://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations/


[ANN] Greenlet 0.3.1 released

2010-04-07 Thread Kyle Ambroff
Announcing the release of greenlet 0.3.1:

  http://pypi.python.org/pypi/greenlet/0.3.1

0.3.1 is a bugfix release that fixes a critical reference leak bug. The 0.3
release introduced support for passing keyword arguments to the switch method.
There was an edge case where an empty keyword argument dictionary would not have
its reference count decremented, which would cause a memory leak.

Thanks to Marcin Bachry for reporting the bug and providing a patch.

What is Greenlet?
-----------------
The greenlet package is a spin-off of Stackless, a version of CPython
that supports micro-threads called tasklets. Tasklets run
pseudo-concurrently (typically in a single or a few OS-level threads)
and are synchronized with data exchanges on channels. A greenlet,
on the other hand, is a still more primitive notion of micro-thread
with no implicit scheduling; coroutines, in other words.

greenlet is used by several non-blocking IO packages as a more flexible
alternative to Python's built-in coroutines.

 * concurrence
 * eventlet
 * gevent
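greenlet itself is a C extension, but the idea it generalizes — micro-threads that switch explicitly, with no implicit scheduling — can be sketched with Python's built-in generators. This is an illustration of cooperative switching, not greenlet's actual API:

```python
def worker(name, n):
    # each next() runs this "micro-thread" until the yield,
    # then control returns to whoever resumed it -- no implicit scheduling
    for i in range(n):
        yield "%s:%d" % (name, i)

def round_robin(*workers):
    # an explicit scheduler: resume each worker in turn until all finish
    pending = list(workers)
    results = []
    while pending:
        for w in list(pending):
            try:
                results.append(next(w))
            except StopIteration:
                pending.remove(w)
    return results

print(round_robin(worker("a", 2), worker("b", 2)))
# -> ['a:0', 'b:0', 'a:1', 'b:1']
```

A greenlet plays the same role as a generator here, except that it can switch from anywhere in its call stack rather than only at a top-level yield.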

Links
-----
Mercurial repository:
http://bitbucket.org/ambroff/greenlet

Documentation:
http://packages.python.org/greenlet/

-Kyle Ambroff
-- 
http://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations/


Grease 0.2 Released

2010-04-07 Thread Casey Duncan
Grease is a pluggable and highly extensible 2D game engine and framework for
Python.

The intent of this project is to provide a fresh approach to Python game
development. The component-based architecture allows games to be constructed
bit by bit with built-in separation of concerns. The engine acknowledges that
all game projects are unique and have different requirements. Thus Grease does
not attempt to provide one-size-fits-all solutions. Instead it provides
pluggable components and systems that can be configured, adapted and extended
to fit the particular needs at hand.
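The component/system idea behind this architecture can be illustrated in a few lines of plain Python. This is a made-up sketch of the general pattern, not Grease's actual API:

```python
# Minimal component/system sketch (illustrative only -- not Grease's API).
class World:
    def __init__(self):
        self.entities = []   # each entity is just a bag of components
        self.systems = []    # pluggable behaviors that operate on components

    def step(self, dt):
        # advance the simulation by running every registered system
        for system in self.systems:
            system(self, dt)

def movement_system(world, dt):
    # acts only on entities that carry both 'position' and 'velocity'
    for e in world.entities:
        if "position" in e and "velocity" in e:
            x, y = e["position"]
            vx, vy = e["velocity"]
            e["position"] = (x + vx * dt, y + vy * dt)

world = World()
world.systems.append(movement_system)
world.entities.append({"position": (0.0, 0.0), "velocity": (1.0, 2.0)})
world.step(0.5)
# entity position is now (0.5, 1.0)
```

The separation of concerns falls out naturally: adding a rendering or collision system is another entry in `world.systems`, with no change to existing entities.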

This early release has only basic functionality, but it already demonstrates
the power of the underlying architecture for rapid development.

The goals of the project include:

* Making Python game development faster and more fun by allowing the developer
  to focus on creating their game rather than getting bogged down in
  architecture, low-level APIs and adapting ill-fitting tools to work together.

* To provide pluggable and extensible parts that implement first-class
  techniques and algorithms that can be leveraged for many projects.

* To fully document the engine and provide examples that demonstrate best
  practices for others to base their projects on.

* To facilitate outside contribution of parts and ideas into the framework
  that have proven useful in the wild.

* To provide optional native-code optimized parts for maximum performance,
  but also provide equivalent parts coded in pure Python for ease
  of installation and distribution.

Not all of these goals have been realized yet, but I feel the project is well
on its way.

License
-------

Grease is distributed under a permissive MIT-style open source license. This
license permits you to use Grease for commercial or non-commercial purposes
free of charge. It makes no demands on how, or whether, you license or
release code derived from or built upon Grease, other than preservation of
the copyright notice.

For the complete text of the license see the ``LICENSE.txt`` file in the source
distribution.

Requirements
------------

Grease is platform-independent and should run on any operating system
supporting Python and Pyglet.

The following are required to build and install Grease:

* Python 2.6 (http://www.python.org/)
* Pyglet 1.1 (http://www.pyglet.org/)

Downloading Grease
------------------

You can download Grease from the Python package index (pypi):

* http://pypi.python.org/pypi/grease/

Documentation
-------------

You can browse the documentation online at:

* http://pygamesf.org/~casey/grease/doc/

The documentation is also available for offline viewing in the
``doc/build/html`` subdirectory of the source distribution.

Development Status
------------------

Grease is alpha software under active development. The APIs may change in
future releases; however, efforts will be made to minimize breakage between
releases.
-- 
http://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations/


virtualenvwrapper 2.0

2010-04-07 Thread Doug Hellmann

What is virtualenvwrapper
=========================

virtualenvwrapper_ is a set of extensions to Ian Bicking's virtualenv_
tool.  The extensions include wrappers for creating and deleting
virtual environments and otherwise managing your development workflow,
making it easier to work on more than one project at a time without
introducing conflicts in their dependencies.

What's New in 2.0
=================

This new version uses a significantly rewritten version of the
hook/callback subsystem to make it easier to share extensions.  For
example, released at the same time is virtualenvwrapper-emacs-desktop_,
a plugin to switch emacs project files when you switch virtualenvs.

Existing user scripts should continue to work as-written. Any failures
are probably a bug, so please report them on the bitbucket
tracker. Documentation for the new plugin system is available in the
virtualenvwrapper docs_.

I also took this opportunity to change the name of the shell script
containing most of the virtualenvwrapper functionality from
virtualenvwrapper_bashrc to virtualenvwrapper.sh. This reflects the
fact that several shells other than bash are supported (bash, sh, ksh,
and zsh are all reported to work). You'll want to update your shell
startup file after upgrading to 2.0.
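For most people the startup-file update is a one-line edit. The install path below is an assumption — use wherever your installer actually placed the script:

```shell
# in ~/.bashrc, ~/.zshrc, ~/.profile, or equivalent
# old (1.x):  source /usr/local/bin/virtualenvwrapper_bashrc
# new (2.0):
source /usr/local/bin/virtualenvwrapper.sh
```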

The work to create the plugin system was triggered by a couple of
recent feature requests for environment templates and for a new
command to create a sub-shell instead of simply changing the settings
of the current shell. The new, more powerful, plugin capabilities will
make it easier to develop these and similar features.

I'm looking forward to seeing what the community comes up with. I
especially want someone to write a plugin to start a copy of a
development server for a Django project if one is found in a
virtualenv. You'll get bonus points if it opens the home page of the
server in a web browser.



.. _virtualenv: http://pypi.python.org/pypi/virtualenv

.. _virtualenvwrapper: http://www.doughellmann.com/projects/virtualenvwrapper/

.. _virtualenvwrapper-emacs-desktop: 
http://www.doughellmann.com/projects/virtualenvwrapper-emacs-desktop/

.. _docs: http://www.doughellmann.com/docs/virtualenvwrapper/

--
http://mail.python.org/mailman/listinfo/python-announce-list

   Support the Python Software Foundation:
   http://www.python.org/psf/donations/


WxPython's Py Suite (PyCrust, etc.) updated with new magic features and new notebook interface shell, PySlices

2010-04-07 Thread David Mashburn

WxPython has long included PyCrust, one of the most popular
Python shells.  PyCrust has found uses in a number of projects, including
Stani's Python Editor and some projects at Enthought.  PyCrust, part of the
larger Py suite of tools, had been dormant for some time, but it is
now under a new maintainer and has recently been updated!  Py Suite 0.9.8.3
can be found in the wxPython 2.9 svn branch (import path is wx.py),
on a Google Code page (http://code.google.com/p/wxpysuite/),
and on PyPI (package name is wx_py).
Py Suite 0.9.8.3 requires wxPython 2.8 or later.
A summary of the major changes follows:

The biggest change is certainly the inclusion of a new notebook-interface
version of PyCrust, called PySlices, into the Py suite!  It features
multi-line execution in re-runnable code blocks called slices and the
ability to save to a simple .pyslices format that, when converted to .py,
is still valid Python code!  PySlices is a great lightweight alternative
to the excellent SAGE and Reinteract projects.

Both PyCrust and PySlices now include some ipython style magic features:

Unix-style path functions:
   cd, ls, and pwd all work as expected
Space based function calls:
   f 1 will automatically convert to f(1)
? character to call help:
   ?dir is equivalent to help(dir)
! character to call operating system shell commands:
   !foobar is automatically converted to commands.getoutput('foobar')

There is also a highly experimental shell (not included in the wxPython svn
version) called SymPySlices that uses sympy for automatic symbol
creation and allows the use of Unicode characters directly in Python.
Mathematical operators (including some infix operators via ast parsing) are
supported.  SymPySlices additionally requires Python 2.6 or later and sympy.
This is still very experimental, so please let me know if you have
questions, problems, or ideas (david.n.mashb...@gmail.com)!

You can read more about these projects on the Google Code page:
   http://code.google.com/p/wxpysuite/
at the PyPI page:
   http://pypi.python.org/pypi/wx_py/0.9.8.3
and in the original wxTrac ticket:
   http://trac.wxwidgets.org/ticket/10959

Feel free to email me with questions, bugs, and feature requests!


--
http://mail.python.org/mailman/listinfo/python-announce-list

   Support the Python Software Foundation:
   http://www.python.org/psf/donations/


ANN: oejskit 0.8.8 JavaScript in-browser testing with py.test plugin and unittest.py glue

2010-04-07 Thread Samuele Pedroni

I'm happy to announce a new release of OE jskit 0.8.8 available on PyPI.

Main points of interest:

* the code to check for the presence of browsers locally has been
improved; browser specifications can now list absent browsers with much
more confidence, and the respective runs/tests will be skipped

* added a workaround for a bug in Firefox 3.5 that interferes with the
global variable leak detection code; the detection is simply turned off for
FF3.5. The bug itself is fixed in FF3.6


About OE jskit:

jskit contains infrastructure, and in particular a py.test plugin, to
enable running unit tests for JavaScript code inside browsers.
It also contains glue code to run JavaScript tests from unittest.py
based test suites.

The approach also makes it possible to write integration tests such that
the JavaScript code is tested against server-side Python code mocked as
necessary. Any server-side framework that can already be exposed through
WSGI can play.

The plugin requires py.test 1.1.1 and should also work with current trunk.

More information and downloading at:

http://pypi.python.org/pypi/oejskit

including a changelog, documentation and the talk I gave at Europython 2009.

jskit was initially developed by Open End AB and is released under the
MIT license.

In various incarnations it has been in use and useful at Open End for
more than two years; we are quite happy to share it.

Samuele Pedroni for Open End


___
py-dev mailing list
py-...@codespeak.net
http://codespeak.net/mailman/listinfo/py-dev

--
http://mail.python.org/mailman/listinfo/python-announce-list

   Support the Python Software Foundation:
   http://www.python.org/psf/donations/


PyCon Australia Call For Proposals

2010-04-07 Thread Richard Jones
Hi everyone,

I'm happy to announce that on the 26th and 27th of June we are running PyCon
Australia in Sydney!

 http://pycon-au.org/

We are looking for proposals for Talks on all aspects of Python programming
from novice to advanced levels; applications and frameworks, or how you
have been involved in introducing Python into your organisation.

We welcome first-time speakers; we are a community conference and we are
eager to hear about your experience. If you have friends or colleagues
who have something valuable to contribute, twist their arms to tell us
about it! Please also forward this Call for Proposals to anyone that you
feel may be interested.

To find out more go to the official Call for Proposals page here:

  http://pycon-au.org/2010/conference/proposals/

The deadline for proposal submission is the 29th of April. Proposal
acceptance will be announced on the 12th of May.


See you in Sydney in June!

Richard Jones
PyCon AU Program Chair
-- 
http://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations/


python Barcamp in Cologne - regional unconference

2010-04-07 Thread Reimar Bauer
The German Python user group pyCologne announces a barcamp on 17 April in
Cologne.
For further details see http://python-barcamp.de (sorry, this page is
in German only).

An unconference is a facilitated, participant-driven conference
centered on a theme or purpose (http://en.wikipedia.org/wiki/
Unconference).

Reimar
-- 
http://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations/


Re: Getting Local MAC Address

2010-04-07 Thread Rebelo

Lawrence D'Oliveiro wrote:

In message
ec6d247c-a6b0-4f33-a36b-1d33eace6...@k19g2000yqn.googlegroups.com, Booter 
wrote:



I am new to Python and was wondering if there was a way to get the MAC
address from the local NIC?


What if you have more than one?



You can try netifaces:
http://pypi.python.org/pypi/netifaces/0.3
I use it on both Windows and Linux
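netifaces answers the multiple-NIC objection by enumerating per interface. If all you need is one MAC address, the standard library's `uuid.getnode()` is enough — a sketch, with the caveat that `getnode()` picks a single interface and may fall back to a random 48-bit number when no hardware address can be found:

```python
import uuid

def local_mac():
    # uuid.getnode() returns one interface's MAC as a 48-bit integer
    node = uuid.getnode()
    # format as the usual colon-separated hex bytes, high byte first
    return ":".join("%02x" % ((node >> shift) & 0xff)
                    for shift in range(40, -8, -8))

print(local_mac())  # e.g. "00:1a:2b:3c:4d:5e"
```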
--
http://mail.python.org/mailman/listinfo/python-list


Re: (a==b) ? 'Yes' : 'No'

2010-04-07 Thread Duncan Booth
Steven D'Aprano ste...@remove.this.cybersource.com.au wrote:

 On Tue, 06 Apr 2010 16:54:18 +, Duncan Booth wrote:
 
 Albert van der Horst alb...@spenarnc.xs4all.nl wrote:
 
 Old hands would have ...
 stamp = ( weight >= 1000 and 120 or
           weight >= 500  and 100 or
           weight >= 250  and 80  or
           weight >= 100  and 60  or
           44 )
 
 (Kind of a brain twister, I think, inferior to C, once the c-construct
 is accepted as idiomatic.)
 
 I doubt many old hands would try to join multiple and/or operators that
 way. Most old hands would (IMHO) write the if statements out in full,
 though some might remember that Python comes 'batteries included':
 
  from bisect import bisect
  WEIGHTS = [100, 250, 500, 1000]
  STAMPS = [44, 60, 80, 100, 120]
 
  ...
  stamp = STAMPS[bisect(WEIGHTS,weight)]
 
 
 Isn't that an awfully heavyweight and obfuscated solution for choosing 
 between five options? Fifty-five options, absolutely, but five?
 
I did say most people would simply write out an if statement.

However, since you ask, using bisect here allows you to separate the data 
from the code and even with only 5 values that may be worthwhile. 
Especially if there's any risk it could become 6 next week.
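Spelled out, the batteries-included version runs like this (threshold and stamp values taken from Albert's example):

```python
from bisect import bisect

WEIGHTS = [100, 250, 500, 1000]   # thresholds: the stamp changes at weight >= w
STAMPS = [44, 60, 80, 100, 120]   # one more entry than WEIGHTS

def stamp_for(weight):
    # bisect counts how many thresholds the weight has reached,
    # which is exactly the index of the right stamp
    return STAMPS[bisect(WEIGHTS, weight)]

print([stamp_for(w) for w in (50, 100, 250, 500, 1000)])
# -> [44, 60, 80, 100, 120]
```

Growing to a sixth band is then a one-line data change, with no new branch in the code.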


-- 
Duncan Booth http://kupuguy.blogspot.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pass object or use self.object?

2010-04-07 Thread Bruno Desthuilliers

Lie Ryan wrote:
(snip)


Since a function in Python is a first-class object, you can instead do
something like:

def process(document):
# note: document should encapsulate its own logic
document.do_one_thing()


Obvious case of encapsulation abuse here. Should a file object
encapsulate all the csv parsing logic? (and the html parsing, xml
parsing, image manipulation etc.?) Should a model object
encapsulate the presentation logic? I could go on for hours here...




and I think for your purpose, the mixin pattern could cleanly separate
manipulation and document while still obeying object-oriented pattern
that document is self-sufficient:

# a language with only single inheritance can only dream of doing this



class Appendable(object):
    def append(self, text):
        self.text += text

class Savable(object):
    def save(self, fileobj):
        fileobj.write(self.text)

class Openable(object):
    def open(self, fileobj):
        self.text = fileobj.read()

class Document(Appendable, Savable, Openable):
    def __init__(self):
        self.text = ''


Anyone having enough experience with Zope2 knows why this sucks big time.
--
http://mail.python.org/mailman/listinfo/python-list


Python and Regular Expressions

2010-04-07 Thread Richard Lamboj

Hello,

i want to parse this String:

version 3.5.1 {

$pid_dir = /opt/samba-3.5.1/var/locks/
$bin_dir = /opt/samba-3.5.1/bin/

service smbd {
bin = ${bin_dir}smbd -D
pid = ${pid_dir}smbd.pid
}
service nmbd {
bin = ${bin_dir}nmbd -D
pid = ${pid_dir}nmbd.pid
}
service winbindd {
bin = ${bin_dir}winbindd -D
pid = ${pid_dir}winbindd.pid
}
}

version 3.2.14 {

$pid_dir = /opt/samba-3.5.1/var/locks/
$bin_dir = /opt/samba-3.5.1/bin/

service smbd {
bin = ${bin_dir}smbd -D
pid = ${pid_dir}smbd.pid
}
service nmbd {
bin = ${bin_dir}nmbd -D
pid = ${pid_dir}nmbd.pid
}
service winbindd {
bin = ${bin_dir}winbindd -D
pid = ${pid_dir}winbindd.pid
}
} 

Step 1:

version 3.2.14 {

$pid_dir = /opt/samba-3.5.1/var/locks/
$bin_dir = /opt/samba-3.5.1/bin/

service smbd {
bin = ${bin_dir}smbd -D
pid = ${pid_dir}smbd.pid
}
service nmbd {
bin = ${bin_dir}nmbd -D
pid = ${pid_dir}nmbd.pid
}
service winbindd {
bin = ${bin_dir}winbindd -D
pid = ${pid_dir}winbindd.pid
}
} 

Step 2:
service smbd {
bin = ${bin_dir}smbd -D
pid = ${pid_dir}smbd.pid
}
Step 3:
$pid_dir = /opt/samba-3.5.1/var/locks/
$bin_dir = /opt/samba-3.5.1/bin/

Step 4:
bin = ${bin_dir}smbd -D
pid = ${pid_dir}smbd.pid

My Regular Expressions:
version[\s]*[\w\.]*[\s]*\{[\w\s\n\t\{\}=\$\.\-_\/]*\}
service[\s]*[\w]*[\s]*\{([\n\s\w\=]*(\$\{[\w_]*\})*[\w\s\-=\.]*)*\}

I don't think this is a good solution. I'm trying with groups:
(service[\s\w]*)\{([\n\w\s=\$\-_\.]*)
but this part causes problems: ${bin_dir}

Kind Regards

Richi
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: plotting in python 3

2010-04-07 Thread egl...@gmail.com
On Apr 6, 11:52 pm, Rolf Camps rolf_ca...@fsfe.org wrote:
 On Tuesday 06-04-2010 at 14:55 [timezone -0500], Christopher
 Choi wrote:

 It was after the homework that I asked my question. All plot solutions I
 found were for Python 2.x. gnuplot_py states on its homepage that you need a
 'working copy of numpy'. I don't think numpy has been ported to Python 3.x.
 Or has it?

Google Charts could be a quick and dirty solution:
http://pygooglechart.slowchop.com/

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python and Regular Expressions

2010-04-07 Thread Chris Rebert
On Wed, Apr 7, 2010 at 1:37 AM, Richard Lamboj richard.lam...@bilcom.at wrote:
 i want to parse this String:

 (snip)

 My Regular Expressions:
 version[\s]*[\w\.]*[\s]*\{[\w\s\n\t\{\}=\$\.\-_\/]*\}
 service[\s]*[\w]*[\s]*\{([\n\s\w\=]*(\$\{[\w_]*\})*[\w\s\-=\.]*)*\}

 I think it was no good Solution. I'am trying with Groups:
 (service[\s\w]*)\{([\n\w\s=\$\-_\.]*)
 but this part makes Problems: ${bin_dir}

Regular expressions != Parsers

Every time someone tries to parse nested structures using regular
expressions, Jamie Zawinski kills a puppy.

Try using an *actual* parser, such as Pyparsing:
http://pyparsing.wikispaces.com/
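If pulling in a dependency is overkill, the nesting here is shallow and regular enough for a small hand-rolled parser. A rough sketch (pure stdlib, names made up; not a general config parser):

```python
import re

SAMPLE = """\
version 3.5.1 {
    $pid_dir = /opt/samba-3.5.1/var/locks/
    service smbd {
        bin = ${bin_dir}smbd -D
        pid = ${pid_dir}smbd.pid
    }
}
"""

def parse_config(text):
    # Nested dicts: "name {" opens a section, "}" closes it,
    # and anything else is treated as a "key = value" entry.
    root = {}
    stack = [root]
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            continue
        opener = re.match(r"(.+?)\s*\{$", line)
        if opener:
            section = {}
            stack[-1][opener.group(1)] = section
            stack.append(section)
        elif line == "}":
            stack.pop()
        else:
            key, _, value = line.partition("=")
            stack[-1][key.strip()] = value.strip()
    return root

cfg = parse_config(SAMPLE)
# cfg["version 3.5.1"]["service smbd"]["bin"] == "${bin_dir}smbd -D"
```

Splitting on the first "=" keeps later "${...}" substitutions in the value intact, which is exactly the part that trips up a character-class regex.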

Cheers,
Chris
--
Some people, when confronted with a problem, think:
I know, I'll use regular expressions. Now they have two problems.
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: converting a timezone-less datetime to seconds since the epoch

2010-04-07 Thread Chris Withers

Hi Chris,

Chris Rebert wrote:

from calendar import timegm

def timestamp(dttm):
    return timegm(dttm.utctimetuple())
    # the *utc*timetuple change is just for extra consistency;
    # it shouldn't actually make a difference here

And problem solved. As for what the problem was:

Paraphrasing the table I got added to the time module docs:
(http://docs.python.org/library/time.html)


That table is not obvious :-/
Could likely do with its own section...


To convert from struct_time in ***UTC***
to seconds since the epoch
use calendar.timegm()


...and really, wtf is timegm doing in calendar rather than in time? ;-)
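In code, the table's advice looks like this (the date is just an example):

```python
import calendar
import time
from datetime import datetime

# a naive datetime known to represent UTC
dt = datetime(2010, 4, 7, 12, 0, 0)

# calendar.timegm treats the tuple as UTC -- the right tool here
epoch = calendar.timegm(dt.utctimetuple())
print(epoch)  # -> 1270641600

# round trip: gmtime returns the same UTC wall-clock fields
assert time.gmtime(epoch)[:6] == (2010, 4, 7, 12, 0, 0)

# time.mktime, by contrast, would interpret the same tuple as *local*
# time, silently shifting the result by the local UTC offset
```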


I'd be *more* interested in knowing either why the timestamp function or the
tests are wrong and how to correct them...


You used a function intended for local times on UTC time data, and
therefore got incorrect results.


Thanks for the info, I don't think I'd ever have gotten to the bottom of 
this on my own! :-)


Chris

--
Simplistix - Content Management, Batch Processing & Python Consulting
- http://www.simplistix.co.uk
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python and Regular Expressions

2010-04-07 Thread Bruno Desthuilliers

Richard Lamboj wrote:

Hello,

i want to parse this String:

(snip)

I think you'd be better off writing a specific parser here. Paul McGuire's
PyParsing package might help:


http://pyparsing.wikispaces.com/

My 2 cents.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python and Regular Expressions

2010-04-07 Thread Richard Lamboj
On Wednesday 07 April 2010 10:52:14, Chris Rebert wrote:
 On Wed, Apr 7, 2010 at 1:37 AM, Richard Lamboj richard.lam...@bilcom.at 
wrote:
  i want to parse this String:
 
  (snip)

 Regular expressions != Parsers

 Every time someone tries to parse nested structures using regular
 expressions, Jamie Zawinski kills a puppy.

 Try using an *actual* parser, such as Pyparsing:
 http://pyparsing.wikispaces.com/

 Cheers,
 Chris
 --
 Some people, when confronted with a problem, think:
 I know, I'll use regular expressions. Now they have two problems.
 http://blog.rebertia.com

Well, after some trying with regex, you're both right. I will use pyparsing;
it seems to be the better solution.

Kind Regards
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Q about assignment and references

2010-04-07 Thread jdbosmaus
Thanks to all for the informative answers.
You made me realize this is a wxPython issue. I have to say, wxPython
seems useful, and I'm glad it is available - but it doesn't have the
gentlest of learning curves.
-- 
http://mail.python.org/mailman/listinfo/python-list



Re: staticmethod and namespaces

2010-04-07 Thread Дамјан Георгиевски


 Having an odd problem that I solved, but wondering if its the best
 solution (seems like a bit of a hack).
 
 First off, I'm using an external DLL that requires static callbacks,
 but because of this, I'm losing instance info. It could be import
 related? It will make more sense after I diagram it:

 -
 So basically I added a list of instances to the base class so I can
 get at them from the staticmethod.

Have you tried using a closure, something like this:

class A:
    def call(self, args):
        def callback(a, b): # normal function
            # but I can access self here too
            ...
        call_the_dll_function(callback, args1, args2...)
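A runnable version of the same idea, with a made-up stand-in for the DLL registration call (all names here are hypothetical):

```python
class Device:
    def __init__(self, name):
        self.name = name
        self.events = []

    def start(self, register):
        # the closure captures self, so the plain-function callback
        # still reaches instance state -- no class-level registry needed
        def on_event(code):
            self.events.append((self.name, code))
        register(on_event)

def fake_dll_register(cb):
    # a real C library would store cb and invoke it later;
    # here we invoke it immediately to keep the sketch self-contained
    cb(42)

dev = Device("dev0")
dev.start(fake_dll_register)
# dev.events == [("dev0", 42)]
```

With ctypes the closure would be wrapped in a CFUNCTYPE instance, but the instance-access pattern is the same.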


 What's bothering me the most is I can't use the global app instance in
 the A.py module.
 
 How can I get at the app instance (currently I'm storing that along
 with the class instance in the constructor)?
 Is there another way to do this that's not such a hack?
 
 Sorry for the double / partial post :(

-- 
дамјан ((( http://damjan.softver.org.mk/ )))

Q: What's tiny and yellow and very, very, dangerous?
A: A canary with the super-user password.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: converting a timezone-less datetime to seconds since the epoch

2010-04-07 Thread Floris Bruynooghe
On Apr 7, 9:57 am, Chris Withers ch...@simplistix.co.uk wrote:
 Chris Rebert wrote:
  To convert from struct_time in ***UTC***
  to seconds since the epoch
  use calendar.timegm()

 ...and really, wtf is timegm doing in calendar rather than in time? ;-)

You're not alone in finding this strange: http://bugs.python.org/issue6280

(the short apologetic reason is that timegm is written in Python
rather than C)

Regards
Floris
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Impersonating a Different Logon

2010-04-07 Thread Kevin Holleran
On Tue, Apr 6, 2010 at 4:11 PM, Tim Golden m...@timgolden.me.uk wrote:
 On 06/04/2010 20:26, Kevin Holleran wrote:

 Hello,

 I am sweeping some of our networks to find devices.  When I find a
 device I try to connect to the registry using _winreg and then query a
 specific key that I am interested in.  This works great for machines
 that are on our domain, but there are left over machines that are
 stand alone and the credentials fail.  I understand you cannot pass in
 credentials with _winreg but is there a way to simulate a logon of
 another user (the machine's local admin) to query the registry?

 The simplest may well be to use WMI (example from here):

 http://timgolden.me.uk/python/wmi/cookbook.html#list-registry-keys

 &lt;code - untested&gt;
 import wmi

 reg = wmi.WMI (
   "machine",
   user="machine\\admin",
   password="Secret",
   namespace="DEFAULT"
 ).StdRegProv

 result, names = reg.EnumKey (
   hDefKey=_winreg.HKEY_LOCAL_MACHINE,
   sSubKeyName="Software"
 )
 for name in names:
   print name

 &lt;/code&gt;

 I can't try it out at the moment but in principle it should work.

 TJG
 --
 http://mail.python.org/mailman/listinfo/python-list



Thanks, I was able to connect to the remote machine.  However, how do
I query for a very specific key value?  I have to scan hundreds of
machines and want to reduce what I am querying.  I would like to
be able to scan a very specific key and report on its value.

With _winreg I could just do:
keyPath = _winreg.ConnectRegistry(r"\\" + ip_a, _winreg.HKEY_LOCAL_MACHINE)
try:
  hKey = _winreg.OpenKey(keyPath,
    r"SYSTEM\CurrentControlSet\services\Tcpip\Parameters", 0,
    _winreg.KEY_READ)
  value, type = _winreg.QueryValueEx(hKey, "Domain")

Also, is there a performance hit with WMI where perhaps I want to try
to connect with the inherited credentials using _winreg first and then
use WMI if that fails?

Thanks for your help!
Kevin
-- 
http://mail.python.org/mailman/listinfo/python-list


Striving for PEP-8 compliance

2010-04-07 Thread Tom Evans
[ Please keep me cc'ed, I'm not subscribed ]

Hi all

I've written a bunch of internal libraries for my company, and they
all use two space indents, and I'd like to be more consistent and
conform to PEP-8 as much as I can.

My problem is I would like to be certain that any changes do not alter
the logic of the libraries. When doing this in C, I would simply
compile each module to an object file, calculate the MD5 of the object
file, then make the whitespace changes, recompile the object file and
compare the checksums. If the checksums match, then the files are
equivalent.

Is there any way to do something semantically the same as this with python?

Cheers

Tom
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Striving for PEP-8 compliance

2010-04-07 Thread geremy condra
On Wed, Apr 7, 2010 at 10:53 AM, Tom Evans tevans...@googlemail.com wrote:
 [ Please keep me cc'ed, I'm not subscribed ]

 Hi all

 I've written a bunch of internal libraries for my company, and they
 all use two space indents, and I'd like to be more consistent and
 conform to PEP-8 as much as I can.

 My problem is I would like to be certain that any changes do not alter
 the logic of the libraries. When doing this in C, I would simply
 compile each module to an object file, calculate the MD5 of the object
 file, then make the whitespace changes, recompile the object file and
 compare the checksums. If the checksums match, then the files are
 equivalent.

 Is there any way to do something semantically the same as this with python?

Probably the logical thing would be to run your test suite against
it, but assuming that's not an option, you could run the whole
thing through dis and check that the bytecode is identical. There's
probably an easier way to do this though.
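That dis-based check can be sketched like this (modern Python 3 syntax; note that dis output includes line numbers, so the disassemblies only match when the reindent leaves every statement on its original line):

```python
import dis
import io

# the same function, indented with 2 and 4 spaces
SRC_2SPACE = "def f(x):\n  return x + 1\n"
SRC_4SPACE = "def f(x):\n    return x + 1\n"

def disassembly(source):
    # compile under a fixed filename so only the code itself matters
    namespace = {}
    exec(compile(source, "<module>", "exec"), namespace)
    buffer = io.StringIO()
    dis.dis(namespace["f"], file=buffer)
    return buffer.getvalue()

print(disassembly(SRC_2SPACE) == disassembly(SRC_4SPACE))  # True
```

For whole libraries the same idea applies per module: compile each file, disassemble every code object, and diff the text.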

Geremy Condra
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: imports again

2010-04-07 Thread Gabriel Genellina

On Tue, 06 Apr 2010 14:25:38 -0300, Alex Hall mehg...@gmail.com wrote:


Sorry this is a forward (long story involving a braille notetaker's
bad copy/paste and GMail's annoying mobile site). Basically, I am
getting errors when I run the project at
http://www.gateway2somewhere.com/sw.zip


Error 404

--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: Impersonating a Different Logon

2010-04-07 Thread Tim Golden

On 07/04/2010 14:57, Kevin Holleran wrote:

Thanks, I was able to connect to the remote machine.  However, how do
I query for a very specific key value?  I have to scan hundreds of
machines and want to reduce what I am querying.  I would like to
be able to scan a very specific key and report on its value.


The docs for the WMI Registry provider are here:

  http://msdn.microsoft.com/en-us/library/aa393664%28VS.85%29.aspx

and you probably want this:

  http://msdn.microsoft.com/en-us/library/aa390788%28v=VS.85%29.aspx



With _winreg I could just do:
keyPath = _winreg.ConnectRegistry(r"\\" + ip_a, _winreg.HKEY_LOCAL_MACHINE)
try:
   hKey = _winreg.OpenKey(keyPath,
     r"SYSTEM\CurrentControlSet\services\Tcpip\Parameters", 0,
     _winreg.KEY_READ)
   value, type = _winreg.QueryValueEx(hKey, "Domain")

Also, is there a performance hit with WMI where perhaps I want to try
to connect with the inherited credentials using _winreg first and then
use WMI if that fails?


Certainly a consideration. Generally WMI isn't the fastest thing in the
world either to connect nor to query. I suspect a try/except with
_winreg is worth a go, falling through to WMI.

TJG
--
http://mail.python.org/mailman/listinfo/python-list


Re: Striving for PEP-8 compliance

2010-04-07 Thread Grant Edwards
On 2010-04-07, Tom Evans tevans...@googlemail.com wrote:
 [ Please keep me cc'ed, I'm not subscribed ]

Sorry.  I post via gmane.org, so cc'ing you would require some extra
work, and I'm too lazy.

 I've written a bunch of internal libraries for my company, and they
 all use two space indents, and I'd like to be more consistent and
 conform to PEP-8 as much as I can.

 My problem is I would like to be certain that any changes do not
 alter the logic of the libraries. When doing this in C, I would
 simply compile each module to an object file, calculate the MD5 of
 the object file, then make the whitespace changes, recompile the
 object file and compare the checksums. If the checksums match, then
 the files are equivalent.

In my experience, that doesn't work.  Whitespace changes can affect
line numbers, so object files containing debug info will differ.  Many
object formats also contain other meta-data about date, time, path of
source file, etc. that can differ between semantically equivalent
files.

 Is there any way to do something semantically the same as this with python?

Have you tried compiling the python files and compare the resulting
.pyc files?

-- 
Grant Edwards   grant.b.edwardsYow! I selected E5 ... but
  at   I didn't hear Sam the Sham
  gmail.comand the Pharoahs!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Simplify Python

2010-04-07 Thread AlienBaby
On 6 Apr, 20:04, ja1lbr3ak superheroco...@gmail.com wrote:
 I'm trying to teach myself Python, and so have been simplifying a
 calculator program that I wrote. The original was 77 lines for the
 same functionality. Problem is, I've hit a wall. Can anyone help?

 loop = input("Enter 1 for the calculator, 2 for the Fibonacci
 sequence, or something else to quit: ")
 while loop &lt; 3 and loop &gt; 0:
     if loop == 1:
         print input("\nPut in an equation: ")
     if loop == 2:
         a, b, n = 1, 1, (input("\nWhat Fibonacci number do you want to
 go to? "))
         while n &gt; 0:
             print a
             a, b, n = b, a+b, n-1
     loop = input("\nEnter 1 for the calculator, 2 for the Fibonacci
 sequence, or something else to quit: ")


To replicate what you have above, I would do something like:

quit = False
while not quit:
    choice = input('1 for calc, 2 for fib, any other to quit')
    if choice == 1:
        print input('Enter expression: ')
    elif choice == 2:
        a, b, n = 1, 1, input("\nWhat Fibonacci number do you want to go to? ")
        while n &gt; 0:
            print a
            a, b, n = b, a+b, n-1
    else:
        quit = True



?
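For what it's worth, the Fibonacci part of that loop can be pulled out as a plain function that is testable on its own (a sketch in Python 3 syntax, since print/input semantics differ in 2.x):

```python
def fib(count):
    """Return the first `count` Fibonacci numbers, as printed by the loop above."""
    a, b = 1, 1
    result = []
    for _ in range(count):
        result.append(a)
        a, b = b, a + b
    return result

print(fib(7))  # [1, 1, 2, 3, 5, 8, 13]
```

Separating the sequence generation from the input/print loop also makes the menu code shorter.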
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Striving for PEP-8 compliance

2010-04-07 Thread Gabriel Genellina
On Wed, 07 Apr 2010 11:53:58 -0300, Tom Evans tevans...@googlemail.com  
wrote:



[ Please keep me cc'ed, I'm not subscribed ]


Sorry; you may read this at  
http://groups.google.com/group/comp.lang.python/



I've written a bunch of internal libraries for my company, and they
all use two space indents, and I'd like to be more consistent and
conform to PEP-8 as much as I can.


reindent.py (in the Tools directory of your Python installation) does  
exactly that.



My problem is I would like to be certain that any changes do not alter
the logic of the libraries. When doing this in C, I would simply
compile each module to an object file, calculate the MD5 of the object
file, then make the whitespace changes, recompile the object file and
compare the checksums. If the checksums match, then the files are
equivalent.


If you only reindent the code (without adding/removing lines) then you can  
compare the compiled .pyc files (excluding the first 8 bytes that contain  
a magic number and the source file timestamp). Remember that code objects  
contain line number information.
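A related check that sidesteps the .pyc header entirely is to compare the code objects' raw bytecode, which carries no line-number table and so survives pure reindentation. A sketch in modern Python (it only inspects co_code at two levels; a thorough comparison would recurse through all of co_consts):

```python
SRC_2 = ("def foo(bar):\n"
         "  if bar:\n"
         "    return 1\n"
         "  return 0\n")
SRC_4 = ("def foo(bar):\n"
         "    if bar:\n"
         "        return 1\n"
         "    return 0\n")

# compile under the same filename so co_filename cannot differ
code_2 = compile(SRC_2, "foo.py", "exec")
code_4 = compile(SRC_4, "foo.py", "exec")

# module-level bytecode, and the nested function's bytecode, both match
same_module = code_2.co_code == code_4.co_code
same_function = code_2.co_consts[0].co_code == code_4.co_consts[0].co_code
print(same_module and same_function)  # True
```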


--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list



Re: Striving for PEP-8 compliance

2010-04-07 Thread Tom Evans
On Wed, Apr 7, 2010 at 4:10 PM, geremy condra debat...@gmail.com wrote:
 On Wed, Apr 7, 2010 at 10:53 AM, Tom Evans tevans...@googlemail.com wrote:
 [ Please keep me cc'ed, I'm not subscribed ]

 Hi all

 I've written a bunch of internal libraries for my company, and they
 all use two space indents, and I'd like to be more consistent and
 conform to PEP-8 as much as I can.

 My problem is I would like to be certain that any changes do not alter
 the logic of the libraries. When doing this in C, I would simply
 compile each module to an object file, calculate the MD5 of the object
 file, then make the whitespace changes, recompile the object file and
 compare the checksums. If the checksums match, then the files are
 equivalent.

 Is there any way to do something semantically the same as this with python?

 Probably the logical thing would be to run your test suite against
 it, but assuming that's not an option, you could run the whole
 thing through dis and check that the bytecode is identical. There's
 probably an easier way to do this though.

 Geremy Condra


dis looks like it may be interesting.

I had looked a little at the bytecode, but only enough to rule out md5
sums as a solution. Looking closer at the bytecode for a simple
module, it seems like only a few bytes change (see below for hexdumps
of the pyc).

So in this case, only bytes 5 and 6 changed, the rest of the file
remains exactly the same. Looks like I need to do some digging to find
out what those bytes mean.

Cheers

Tom

2 space indents:

  d1 f2 0d 0a 51 a7 bc 4b  63 00 00 00 00 00 00 00  |Q..Kc...|
0010  00 02 00 00 00 40 00 00  00 73 28 00 00 00 64 00  |.@...s(...d.|
0020  00 84 00 00 5a 00 00 65  01 00 64 01 00 6a 02 00  |Z..e..d..j..|
0030  6f 0e 00 01 65 00 00 65  02 00 83 01 00 01 6e 01  |o...e..e..n.|
0040  00 01 64 02 00 53 28 03  00 00 00 63 01 00 00 00  |..d..S(c|
0050  01 00 00 00 03 00 00 00  43 00 00 00 73 20 00 00  |C...s ..|
0060  00 64 01 00 47 48 7c 00  00 6f 10 00 01 68 01 00  |.d..GH|..o...h..|
0070  64 02 00 64 01 00 36 47  48 6e 01 00 01 64 00 00  |d..d..6GHn...d..|
0080  53 28 03 00 00 00 4e 74  05 00 00 00 68 65 6c 6c  |S(Nthell|
0090  6f 74 05 00 00 00 77 6f  72 6c 64 28 00 00 00 00  |otworld(|
00a0  28 01 00 00 00 74 03 00  00 00 62 61 72 28 00 00  |(tbar(..|
00b0  00 00 28 00 00 00 00 73  0e 00 00 00 74 65 73 74  |..(stest|
00c0  6c 69 62 2f 66 6f 6f 2e  70 79 74 03 00 00 00 66  |lib/foo.pytf|
00d0  6f 6f 01 00 00 00 73 08  00 00 00 00 01 05 01 07  |oos.|
00e0  01 03 01 74 08 00 00 00  5f 5f 6d 61 69 6e 5f 5f  |...t__main__|
00f0  4e 28 03 00 00 00 52 03  00 00 00 74 08 00 00 00  |N(Rt|
0100  5f 5f 6e 61 6d 65 5f 5f  74 04 00 00 00 54 72 75  |__name__tTru|
0110  65 28 00 00 00 00 28 00  00 00 00 28 00 00 00 00  |e(((|
0120  73 0e 00 00 00 74 65 73  74 6c 69 62 2f 66 6f 6f  |stestlib/foo|
0130  2e 70 79 74 08 00 00 00  3c 6d 6f 64 75 6c 65 3e  |.pytmodule|
0140  01 00 00 00 73 04 00 00  00 09 07 0d 01   |s|
014d


4 space indents:

  d1 f2 0d 0a 51 a7 bc 4b  63 00 00 00 00 00 00 00  |Q..Kc...|
0010  00 02 00 00 00 40 00 00  00 73 28 00 00 00 64 00  |.@...s(...d.|
0020  00 84 00 00 5a 00 00 65  01 00 64 01 00 6a 02 00  |Z..e..d..j..|
0030  6f 0e 00 01 65 00 00 65  02 00 83 01 00 01 6e 01  |o...e..e..n.|
0040  00 01 64 02 00 53 28 03  00 00 00 63 01 00 00 00  |..d..S(c|
0050  01 00 00 00 03 00 00 00  43 00 00 00 73 20 00 00  |C...s ..|
0060  00 64 01 00 47 48 7c 00  00 6f 10 00 01 68 01 00  |.d..GH|..o...h..|
0070  64 02 00 64 01 00 36 47  48 6e 01 00 01 64 00 00  |d..d..6GHn...d..|
0080  53 28 03 00 00 00 4e 74  05 00 00 00 68 65 6c 6c  |S(Nthell|
0090  6f 74 05 00 00 00 77 6f  72 6c 64 28 00 00 00 00  |otworld(|
00a0  28 01 00 00 00 74 03 00  00 00 62 61 72 28 00 00  |(tbar(..|
00b0  00 00 28 00 00 00 00 73  0e 00 00 00 74 65 73 74  |..(stest|
00c0  6c 69 62 2f 66 6f 6f 2e  70 79 74 03 00 00 00 66  |lib/foo.pytf|
00d0  6f 6f 01 00 00 00 73 08  00 00 00 00 01 05 01 07  |oos.|
00e0  01 03 01 74 08 00 00 00  5f 5f 6d 61 69 6e 5f 5f  |...t__main__|
00f0  4e 28 03 00 00 00 52 03  00 00 00 74 08 00 00 00  |N(Rt|
0100  5f 5f 6e 61 6d 65 5f 5f  74 04 00 00 00 54 72 75  |__name__tTru|
0110  65 28 00 00 00 00 28 00  00 00 00 28 00 00 00 00  |e(((|
0120  73 0e 00 00 00 74 65 73  74 6c 69 62 2f 66 6f 6f  |stestlib/foo|
0130  2e 70 79 74 08 00 00 00  3c 6d 6f 64 75 6c 65 3e  |.pytmodule|
0140  01 00 00 00 73 04 00 00  00 09 07 0d 01   |s|
014d

python code: testlib/foo.py

def foo(bar):
  print "hello"
  if bar:
    print {
      'hello': 'world'
    }

if 

Re: Striving for PEP-8 compliance

2010-04-07 Thread Robert Kern

On 2010-04-07 11:06 AM, Tom Evans wrote:

On Wed, Apr 7, 2010 at 4:10 PM, geremy condradebat...@gmail.com  wrote:

On Wed, Apr 7, 2010 at 10:53 AM, Tom Evanstevans...@googlemail.com  wrote:

[ Please keep me cc'ed, I'm not subscribed ]

Hi all

I've written a bunch of internal libraries for my company, and they
all use two space indents, and I'd like to be more consistent and
conform to PEP-8 as much as I can.

My problem is I would like to be certain that any changes do not alter
the logic of the libraries. When doing this in C, I would simply
compile each module to an object file, calculate the MD5 of the object
file, then make the whitespace changes, recompile the object file and
compare the checksums. If the checksums match, then the files are
equivalent.

Is there any way to do something semantically the same as this with python?


Probably the logical thing would be to run your test suite against
it, but assuming that's not an option, you could run the whole
thing through dis and check that the bytecode is identical. There's
probably an easier way to do this though.

Geremy Condra



dis looks like it may be interesting.

I had looked a little at the bytecode, but only enough to rule out md5
sums as a solution. Looking closer at the bytecode for a simple
module, it seems like only a few bytes change (see below for hexdumps
of the pyc).

So in this case, only bytes 5 and 6 changed, the rest of the file
remains exactly the same. Looks like I need to do some digging to find
out what those bytes mean.


You will also have to be careful about docstrings. If you are cleaning up for 
style reasons, you will also end up indenting the triple-quoted docstrings and 
thus change their contents. This will be reflected in the bytecode.


In [1]: def f():
   ...:     """This is 
   ...: a docstring.
   ...: """
   ...:
   ...:

In [2]: def g():
   ...:   """This is
   ...:   a docstring.
   ...:   """
   ...:
   ...:

In [3]: f.__doc__
Out[3]: 'This is \na docstring.\n'

In [4]: g.__doc__
Out[4]: 'This is\n  a docstring.\n  '
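If the concern is only how the docstring reads at run time (rather than byte-for-byte identical bytecode), the standard library already normalizes this kind of indentation:

```python
import inspect

def g():
    """This is
    a docstring.
    """

# cleandoc strips the common leading whitespace that the source
# layout forced onto the continuation lines
print(repr(inspect.cleandoc(g.__doc__)))  # 'This is\na docstring.'
```

This is the same normalization rule PEP 257 describes for docstring-processing tools.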


--
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth.
  -- Umberto Eco

--
http://mail.python.org/mailman/listinfo/python-list



Re: (a==b) ? 'Yes' : 'No'

2010-04-07 Thread Emile van Sebille

On 4/6/2010 9:20 PM Steven D'Aprano said...

On Tue, 06 Apr 2010 16:54:18 +, Duncan Booth wrote:

Most old hands would (IMHO) write the if statements out in full,
though some might remember that Python comes 'batteries included':

  from bisect import bisect
  WEIGHTS = [100, 250, 500, 1000]
  STAMPS = [44, 60, 80, 100, 120]

  ...
  stamp = STAMPS[bisect(WEIGHTS,weight)]



Isn't that an awfully heavyweight and obfuscated solution for choosing
between five options? Fifty-five options, absolutely, but five?



Would it be easier to digest as:

from bisect import bisect as selectindex #

WEIGHTLIMITS = [100, 250, 500, 1000]
POSTAGEAMOUNTS = [44, 60, 80, 100, 120]

postage = POSTAGEAMOUNTS[selectindex(WEIGHTLIMITS, weight)]

---

I've used bisect this way for some time -- I think Tim may have pointed 
it out -- and it's been handy ever since.
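Spelled out as a self-contained sketch (the limits and stamp values are just the example numbers from this thread; whether a weight exactly on a boundary belongs to the cheaper band is the bisect-vs-bisect_left choice):

```python
from bisect import bisect  # alias of bisect_right

WEIGHT_LIMITS = [100, 250, 500, 1000]
POSTAGE = [44, 60, 80, 100, 120]

def postage_for(weight):
    # bisect returns how many limits the weight meets or exceeds,
    # which is exactly the index of the matching postage band
    return POSTAGE[bisect(WEIGHT_LIMITS, weight)]

print(postage_for(50))    # 44: below the first limit
print(postage_for(100))   # 60: bisect_right moves a boundary weight up a band
print(postage_for(1500))  # 120: beyond every limit
```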


Emile


--
http://mail.python.org/mailman/listinfo/python-list


Re: pass object or use self.object?

2010-04-07 Thread Tim Arnold
On Apr 6, 11:19 am, Jean-Michel Pichavant jeanmic...@sequans.com
wrote:
 Tim Arnold wrote:
  Hi,
  I have a few classes that manipulate documents. One is really a
  process that I use a class for just to bundle a bunch of functions
  together (and to keep my call signatures the same for each of my
  manipulator classes).

  So my question is whether it's bad practice to set things up so each
  method operates on self.document or should I pass document around from
  one function to the next?
  pseudo code:

  class ManipulatorA(object):
      def process(self, document):
          document = self.do_one_thing(document)
          document = self.do_another_thing(document)
          # bunch of similar lines
          return document

  or

  class ManipulatorA(object):
      def process(self, document):
          self.document = document
          self.do_one_thing() # operates on self.document
          self.do_another_thing()
          # bunch of similar lines
          return self.document

  I ask because I've been told that the first case is easier to
  understand. I never thought of it before, so I'd appreciate any
  comments.
  thanks,
  --Tim

 Usually, when using classes as namespace, functions are declared as
 static (or as classmethod if required).
 e.g.

 class Foo:
     @classmethod
     def process(cls, document):
         print 'process of'
         cls.foo(document)

     @staticmethod
     def foo(document):
         print document

 In [5]: Foo.process('my document')
 process of
 my document

 There is no more question about self, 'cause there is no more self. You
 don't need to create any instance of Foo neither.

 JM

Thanks for the input. I had always wondered about static methods; I'd
ask myself why don't they just write a function in the first place?

Now I see why. My situation poses a problem that I guess static
methods were invented to solve. And it settles the question about
using self.document since there is no longer any self. And as Bruno
says, it's easier to understand and refactor.

thanks,
--Tim
-- 
http://mail.python.org/mailman/listinfo/python-list


[2.5.1/cookielib] How to display specific cookie?

2010-04-07 Thread Gilles Ganault
Hello

I'm using ActivePython 2.5.1 and the cookielib package to retrieve web
pages.

I'd like to display a given cookie from the cookiejar instead of the
whole thing:


#OK
for index, cookie in enumerate(cj):
    print index, '  :  ', cookie

#How to display just PHPSESSID?
#AttributeError: CookieJar instance has no attribute '__getitem__'
print "PHPSESSID: %s" % cj['PHPSESSID']
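A CookieJar has no __getitem__, but it is iterable, so the usual approach is a scan by name. A sketch written against Python 3's http.cookiejar (in 2.5 the module is cookielib and the same iteration works; the hand-made Cookie here exists purely so the example is self-contained):

```python
from http.cookiejar import Cookie, CookieJar

def cookie_value(jar, name):
    # return the value of the first cookie with the given name, or None
    return next((c.value for c in jar if c.name == name), None)

# build a jar with one cookie for demonstration; the positional
# arguments follow the stdlib Cookie constructor
cj = CookieJar()
cj.set_cookie(Cookie(
    0, "PHPSESSID", "abc123", None, False,
    "example.com", False, False, "/", False,
    False, None, False, None, None, {},
))

print(cookie_value(cj, "PHPSESSID"))  # abc123
```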


I'm sure it's very simple but googling for this didn't return samples.

Thank you.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: lambda with floats

2010-04-07 Thread Peter Pearson
On Tue, 06 Apr 2010 23:16:18 -0400, monkeys paw mon...@joemoney.net wrote:
 I have the following acre meter which works for integers,
 how do i convert this to float? I tried

 return float ((208.0 * 208.0) * n)

 &gt;&gt;&gt; def s(n):
 ...   return lambda x: (208 * 208) * n
 ...
 &gt;&gt;&gt; f = s(1)
 &gt;&gt;&gt; f(1)
 43264
 &gt;&gt;&gt; 208 * 208
 43264
 &gt;&gt;&gt; f(.25)
 43264

The expression lambda x: (208 * 208) * n is independent of x.
Is that what you intended?
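One plausible repair, assuming the argument was meant to scale the result (this guesses at the original intent; Python 3 syntax):

```python
def s(n):
    # 208 ft x 208 ft is roughly one acre in square feet;
    # scale by both the captured n and the call argument x
    return lambda x: (208.0 * 208.0) * n * x

f = s(1)
print(f(1))     # 43264.0
print(f(0.25))  # 10816.0
```

With x actually used, f(.25) returns a quarter of the area instead of always 43264.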


-- 
To email me, substitute nowhere-spamcop, invalid-net.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python as pen and paper substitute

2010-04-07 Thread Manuel Graune
Hello Johan,

thanks to you (and everyone else who answered) for your effort.

Johan Grönqvist johan.gronqv...@gmail.com writes:

 Manuel Graune skrev:
 Manuel Graune manuel.gra...@koeln.de writes:

 Just as an additional example, let's assume I'd want to add the area of
 to circles.
 [...]
 which can be explained to anyone who knows
 basic math and is not at all interested in
 python.


 Third attempt. The markup now includes tagging of different parts of
 the code, and printing parts of the source based on a tag.


after playing around for a while, this is what I finally ended up with:

88 source ---8
#! /usr/bin/python
## Show
# List of all imports:
from __future__ import with_statement, print_function
from math import pi as PI
import sys
##

class Source_Printer(object):
def __init__(self):
self.is_printing= False
with open(sys.argv[0]) as file:
self.lines=(line for line in file.readlines())
for line in self.lines:
if line.startswith(print_source):
break 
elif line == ##\n:
self.is_printing= False
elif line.startswith(## Show):
print(\n)
self.is_printing= True
elif self.is_printing:
print(line,end=)
def __call__(self):
for line in self.lines:
if line == ##\n or line.startswith(print_source):
if self.is_printing:
self.is_printing= False
break
else:
self.is_printing= False
elif line.startswith(## Show):
print(\n)
self.is_printing= True
elif self.is_printing:
print(line, end=)


print_source= Source_Printer()
## Show
#Calculation of first Area:
d1= 3.0
A1= d1**2 * PI / 4.0
##
print_source()

print (Area of Circle 1:\t, A1)

## Show
#Calculation of second area:
d2= 5.0
A2= d2**2 * PI / 4.0
##
# This is a comment that won't be printed

print_source()
print("Area of Circle 2:\t", A2)

# This is another one
Sum_Of_Areas= A1 + A2
print("Sum of areas:\t", Sum_Of_Areas)

---8<--- result --->8---

# List of all imports:
from __future__ import with_statement, print_function
from math import pi as PI
import sys


#Calculation of first Area:
d1= 3.0
A1= d1**2 * PI / 4.0
Area of Circle 1:7.06858347058


#Calculation of second area:
d2= 5.0
A2= d2**2 * PI / 4.0
Area of Circle 2:19.6349540849
Sum of areas:26.703537

---8<--- result --->8---

Regards,

Manuel



-- 
A hundred men did the rational thing. The sum of those rational choices was
called panic. Neal Stephenson -- System of the world
http://www.graune.org/GnuPG_pubkey.asc
Key fingerprint = 1E44 9CBD DEE4 9E07 5E0A  5828 5476 7E92 2DB4 3C99
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Performance of list vs. set equality operations

2010-04-07 Thread Raymond Hettinger
[Gustavo Nare]
 In other words: The more different elements two collections have, the
 faster it is to compare them as sets. And as a consequence, the more
 equivalent elements two collections have, the faster it is to compare
 them as lists.

 Is this correct?

If two collections are equal, then comparing them as a set is always
slower than comparing them as a list.  Both have to call __eq__ for
every element, but sets have to search for each element while lists
can just iterate over consecutive pointers.

If the two collections have unequal sizes, then both ways immediately
return unequal.

If the two collections are unequal but have the same size, then
the comparison time is data dependent (when the first mismatch
is found).
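
A quick way to see this effect for yourself (timings are machine-dependent, and the collection sizes here are arbitrary):

```python
import timeit

# Two equal collections, compared as lists and as sets.  Only the
# relative ordering of the two timings is interesting.
a_list = list(range(1000))
b_list = list(range(1000))
a_set = set(a_list)
b_set = set(b_list)

list_time = timeit.timeit(lambda: a_list == b_list, number=10000)
set_time = timeit.timeit(lambda: a_set == b_set, number=10000)
print("list ==: %.4fs   set ==: %.4fs" % (list_time, set_time))
```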


Raymond
-- 
http://mail.python.org/mailman/listinfo/python-list


+Hi+

2010-04-07 Thread Matt Burson
http://sites.google.com/site/fgu45ythjg/rfea8i

-- 
Matt
-- 
http://mail.python.org/mailman/listinfo/python-list


[Q] raise exception with fake filename and linenumber

2010-04-07 Thread kwatch
Hi all,

Is it possible to raise exception with custom traceback to specify
file and line?

Situation
=

I'm creating a certain parser.
I want to report syntax error with the same format as other exception.

Example
===

parser.py:
-
1: def parse(filename):
2:     if something_is_wrong():
3:         linenum = 123
4:         raise Exception("syntax error on %s, line %s" % (filename, linenum))
5:
6: parse('example.file')
-

current result:
-
Traceback (most recent call last):
  File "/tmp/parser.py", line 6, in <module>
    parse('example.file')
  File "/tmp/parser.py", line 4, in parse
    raise Exception("syntax error on %s, line %s" % (filename, linenum))
Exception: syntax error on example.file, line 123
-

my hope is:
-
Traceback (most recent call last):
  File "/tmp/parser.py", line 6, in <module>
    parse('example.file')
  File "/tmp/parser.py", line 4, in parse
    raise Exception("syntax error on %s, line %s" % (filename, linenum))
  File "/tmp/example.file", line 123
    foreach item in items   # wrong syntax line
Exception: syntax error
-

I guess I must create dummy traceback data, but I don't know how to do
it.
Could you give me an advice?

Thank you.
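
For what it's worth, one way to get output close to the hoped-for traceback above, without building a traceback object by hand, is to raise a SyntaxError with its filename, lineno, and text attributes filled in; the traceback machinery formats those attributes specially. A minimal sketch (the function name and paths are illustrative):

```python
import traceback

def parse_error(message, filename, lineno, text):
    """Raise a SyntaxError that reports the *parsed* file's name and line."""
    err = SyntaxError(message)
    err.filename = filename
    err.lineno = lineno
    err.text = text
    raise err

try:
    parse_error('syntax error', '/tmp/example.file', 123,
                'foreach item in items\n')
except SyntaxError as e:
    # The formatted exception names the fake file and line, along with
    # the offending source text, much like a real SyntaxError would.
    print(''.join(traceback.format_exception_only(SyntaxError, e)))
```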

--
regards,
makoto kuwata
-- 
http://mail.python.org/mailman/listinfo/python-list


fcntl, serial ports and serial signals on RS232.

2010-04-07 Thread Max Kotasek
Hello to all out there,

I'm trying to figure out how to parse the responses from fcntl.ioctl()
calls that modify the serial lines in a way that asserts that the line
is now changed.  For example I may want to drop RTS explicitly, and
assert that the line has been dropped before returning.

Here is a brief snippet of code that I've been using to do that, but
not sure what to do with the returned response:

def set_RTS(self, state=True):
  if self.fd is None:
return 0

  p = struct.pack('I', termios.TIOCM_RTS)
  if state:
return fcntl.ioctl(self.fd, termios.TIOCMBIS, p)
  else:
return fcntl.ioctl(self.fd, termios.TIOCMBIC, p)

The problem is I get responses like '\x01\x00\x00\x00', or
'\x02\x00\x00\x00'  and I'm not sure what they mean.  I tried doing
illogical things like settings CTS using the TIOCM_CTS flag and I end
up just getting back a slightly different binary packed 32 bit integer
(in that case '\x20\x00\x00\x00').  The above example has self.fd
being defined as os.open('/dev/ttyS0', os.O_RDWR | os.O_NONBLOCK).
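
For what it's worth, those return buffers look like packed 32-bit modem-status words, so one way to make sense of them is to unpack them and test the TIOCM_* bits. A Linux-specific sketch (decode_modem_bits is an invented name):

```python
import struct
import termios

def decode_modem_bits(buf):
    """Unpack a 4-byte ioctl buffer and test the TIOCM_* status bits."""
    bits = struct.unpack('I', buf)[0]
    names = ('TIOCM_LE', 'TIOCM_DTR', 'TIOCM_RTS', 'TIOCM_CTS', 'TIOCM_DSR')
    return dict((name, bool(bits & getattr(termios, name))) for name in names)

# To verify that RTS really changed, read the current state back with
# TIOCMGET rather than relying on the TIOCMBIS/TIOCMBIC return value:
#   p = struct.pack('I', 0)
#   state = fcntl.ioctl(fd, termios.TIOCMGET, p)
#   decode_modem_bits(state)['TIOCM_RTS']
```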

Is someone familiar with manipulating serial signals like this in
python?  Am I even taking the right approach by using the fcntl.ioctl
call?  The environment is a ubuntu 8.04 distribution.  Unfortunately
due to other limitations, I can't use/extend pyserial, though I would
like to.

I appreciate any advice on this matter,
Max
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Recommend Commercial graphing library

2010-04-07 Thread David Bolen
AlienBaby matt.j.war...@gmail.com writes:

 I'd be grateful for any suggestions / pointers to something useful,

Ignoring the commercial vs. open source discussion, although it was a
few years ago, I found Chart Director (http://www.advsofteng.com/) to
work very well, with plenty of platform and language support,
including Python.

-- David
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Impersonating a Different Logon

2010-04-07 Thread David Bolen
Kevin Holleran kdaw...@gmail.com writes:

 Thanks, I was able to connect to the remote machine.  However, how do
 I query for a very specific key value?  I have to scan hundreds of
 machines and need want to reduce what I am querying.  I would like to
 be able to scan a very specific key and report on its value.

Any remote machine connection should automatically use any cached
credentials for that machine, since Windows always uses the same
credentials for a given target machine.

So if you were to access a share with the appropriate credentials,
using _winreg after that point should work.  I normally use
\\machine\ipc$ (even from the command line) which should always exist.

You can use the wrappers in the PyWin32 library (win32net) to access
and then release the share with NetUseAdd and NetUseDel.

Of course, the extra step of accessing the share might or might not be
any faster than WMI, but it would have a small advantage of not
needing WMI support on the target machine - though that may be a
non-issue nowadays.

-- David
-- 
http://mail.python.org/mailman/listinfo/python-list


remote multiprocessing, shared object

2010-04-07 Thread Norm Matloff
Should be a simple question, but I can't seem to make it work from my
understanding of the docs.

I want to use the multiprocessing module with remote clients, accessing
shared lists.  I gather one is supposed to use register(), but I don't
see exactly how.  I'd like to have the clients read and write the shared
list directly, not via some kind of get() and set() functions.  It's
clear how to do this in a shared-memory setting, but how can one do it
across a network, i.e. with serve_forever(), connect() etc.?

Any help, especially with a concrete example, would be much appreciated.
Thanks.
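
One possible shape for this, using register() with the stock ListProxy so that clients can read and write the shared list directly (the list name, port choice, and authkey below are placeholders, not anything prescribed by the module):

```python
from multiprocessing.managers import BaseManager, ListProxy

shared = []          # the authoritative list; it lives in the server process

def get_list():
    return shared

class ListManager(BaseManager):
    pass

# ListProxy (the proxy type SyncManager uses for lists) exposes
# __getitem__, __setitem__, append, etc., so clients get direct
# list semantics instead of get()/set() wrappers.
ListManager.register('get_list', callable=get_list, proxytype=ListProxy)

def start_server(port=0, authkey=b'secret'):
    # port=0 picks a free port; m.address then holds the real (host, port)
    m = ListManager(address=('127.0.0.1', port), authkey=authkey)
    m.start()
    return m

def connect_client(address, authkey=b'secret'):
    m = ListManager(address=address, authkey=authkey)
    m.connect()
    return m.get_list()   # a ListProxy: supports lst[i], lst[i] = x, ...
```

A standalone server process would call get_server().serve_forever() instead of start(), and remote clients would use the server machine's address in place of 127.0.0.1.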

Norm

-- 
http://mail.python.org/mailman/listinfo/python-list


Regex driving me crazy...

2010-04-07 Thread J
Can someone make me un-crazy?

I have a bit of code that right now, looks like this:

status = getoutput('smartctl -l selftest /dev/sda').splitlines()[6]
status = re.sub(' (?= )(?=([^"]*"[^"]*")*[^"]*$)', ":", status)
print status

Basically, it pulls the first actual line of data from the return you
get when you use smartctl to look at a hard disk's selftest log.

The raw data looks like this:

# 1  Short offline   Completed without error   00%   679 -

Unfortunately, all that whitespace is arbitrary single space
characters.  And I am interested in the string that appears in the
third column, which changes as the test runs and then completes.  So
in the example, Completed without error

The regex I have up there doesn't quite work, as it seems to be
subbing EVERY space (or at least in instances of more than one space)
to a ':' like this:

# 1: Short offline:: Completed without error:: 00%:: 679 -

Ultimately, what I'm trying to do is either replace any run of more
than one space with a delimiter, then split the result into a list and
get the third item.

OR, if there's a smarter, shorter, or better way of doing it, I'd love to know.

The end result should pull the whole string in the middle of that
output line, and then I can use that to compare to a list of possible
output strings to determine if the test is still running, has
completed successfully, or failed.

Unfortunately, my google-fu fails right now, and my Regex powers were
always rather weak anyway...

So any ideas on what the best way to proceed with this would be?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Grant Edwards
On 2010-04-07, J dreadpiratej...@gmail.com wrote:

 Can someone make me un-crazy?

Definitely.  Regex is driving you crazy, so don't use a regex.

  >>> inputString = "# 1  Short offline       Completed without error     00%     679         -"
  >>> print ' '.join(inputString.split()[4:-3])

 So any ideas on what the best way to proceed with this would be?

Anytime you have a problem with a regex, the first thing you should
ask yourself:  do I really, _really_ need a regex?

Hint: the answer is usually no.

-- 
Grant Edwards   grant.b.edwardsYow! I'm continually AMAZED
  at   at th'breathtaking effects
  gmail.comof WIND EROSION!!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python as pen and paper substitute

2010-04-07 Thread Michael Torrie
On 04/06/2010 12:40 PM, Manuel Graune wrote:
 Hello everyone,
 
 I am looking for ways to use a python file as a substitute for simple
 pen and paper calculations. At the moment I mainly use a combination
 of triple-quoted strings, exec and print (Yes, I know it's not exactly
 elegant). 

This isn't quite along the lines that this thread is going, but it seems
to me that a program like reinteract is about what I'd want as a
python-based replacement for pen and paper.  Last time I used it, it was
buggy, but if this concept were developed, it would totally rock:

http://fishsoup.net/software/reinteract/

-- 
http://mail.python.org/mailman/listinfo/python-list


order that destructors get called?

2010-04-07 Thread Brendan Miller
I'm used to C++ where destrcutors get called in reverse order of construction
like this:

{
Foo foo;
Bar bar;

// calls Bar::~Bar()
// calls Foo::~Foo()
}

I'm writing a ctypes wrapper for some native code, and I need to manage some
memory. I'm wrapping the memory in a python class that deletes the underlying
memory when the python instance's reference count hits zero.

When doing this, I noticed some odd behaviour. I had code like this:

def delete_my_resource(res):
# deletes res

class MyClass(object):
def __del__(self):
delete_my_resource(self.res)

o = MyClass()

What happens is that as the program shuts down, delete_my_resource is released
*before* o is released. So when __del__ get called, delete_my_resource is now
None.

Obviously, MyClass needs to hang onto a reference to delete_my_resource.

What I'm wondering is if there's any documented order that reference counts
get decremented when a module is released or when a program terminates.

What I would expect is reverse order of definition but obviously that's not
the case.

Brendan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: order that destructors get called?

2010-04-07 Thread Stephen Hansen

On 2010-04-07 15:08:14 -0700, Brendan Miller said:

When doing this, I noticed some odd behaviour. I had code like this:

def delete_my_resource(res):
# deletes res

class MyClass(object):
def __del__(self):
delete_my_resource(self.res)

o = MyClass()

What happens is that as the program shuts down, delete_my_resource is released
*before* o is released. So when __del__ get called, delete_my_resource is now
None.


The first thing Python does when shutting down is set every module-level 
name to None; this may or may not cause the objects previously bound to 
those names to be destroyed, depending on whether it drops their 
reference count to 0.


So if you need to call something in __del__, be sure to save its 
reference for later, so that when __del__ gets called, you can be sure 
the things you need are still alive. Perhaps on MyClass, in its 
__init__, or some such.



What I'm wondering is if there's any documented order that reference counts
get decremented when a module is released or when a program terminates.

What I would expect is reverse order of definition but obviously that's not
the case.


AFAIR, every top level name gets set to None first; this causes many 
things to get recycled. There's no order beyond that, though. 
Namespaces are dictionaries, and dictionaries are unordered. So you 
can't really infer any sort of order to the destruction: if you need 
something to be alive when a certain __del__ is called, you have to 
keep a reference to it.


--
--S

... p.s: change the .invalid to .com in email address to reply privately.

--
http://mail.python.org/mailman/listinfo/python-list


Profiling: Interpreting tottime

2010-04-07 Thread Nikolaus Rath
Hello,

Consider the following function:

def check_s3_refcounts():
    """Check S3 object reference counts"""

    global found_errors
    log.info('Checking S3 object reference counts...')

    for (key, refcount) in conn.query("SELECT id, refcount FROM s3_objects"):

        refcount2 = conn.get_val("SELECT COUNT(inode) FROM blocks WHERE s3key=?",
                                 (key,))
        if refcount != refcount2:
            log_error("S3 object %s has invalid refcount, setting from %d to %d",
                      key, refcount, refcount2)
            found_errors = True
            if refcount2 != 0:
                conn.execute("UPDATE s3_objects SET refcount=? WHERE id=?",
                             (refcount2, key))
            else:
                # Orphaned object will be picked up by check_keylist
                conn.execute('DELETE FROM s3_objects WHERE id=?', (key,))

When I ran cProfile.Profile().runcall() on it, I got the following
result:

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
1 7639.962 7639.962 7640.269 7640.269 fsck.py:270(check_s3_refcounts)

So according to the profiler, the entire 7639 seconds where spent
executing the function itself.

How is this possible? I really don't see how the above function can
consume any CPU time without spending it in one of the called
sub-functions.


Puzzled,

   -Nikolaus

-- 
 »Time flies like an arrow, fruit flies like a Banana.«

  PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pass object or use self.object?

2010-04-07 Thread Lie Ryan
On 04/07/10 18:34, Bruno Desthuilliers wrote:
 Lie Ryan a écrit :
 (snip)
 
 Since in function in python is a first-class object, you can instead do
 something like:

 def process(document):
 # note: document should encapsulate its own logic
 document.do_one_thing()
 
 Obvious case of encapsulation abuse here. Should a file object
 encapsulate all the csv parsing logic ? (and the html parsing, xml
 parsing, image manipulation etc...) ? Should a model object
 encapsulate the presentation logic ? I could go on for hours here...

Yes, but no; you're taking it out of context. Is {csv|html|xml|image}
parsing logic a document's logic? Is presentation a document's logic? If
they're not, then they do not belong in document.
-- 
http://mail.python.org/mailman/listinfo/python-list


help req: installing debugging symbols

2010-04-07 Thread sanam singh

Hi,

I am using ubuntu 9.10. I want to install a version of Python that was
compiled with debug symbols. But if I delete python from ubuntu it would
definitely stop working, and python comes preinstalled in ubuntu without
debugging symbols. How can I install python with debugging symbols?

Thanks.

Regards,
Sanam
_________________________________________________________________
Hotmail: Trusted email with Microsoft’s powerful SPAM protection.
https://signup.live.com/signup.aspx?id=60969
-- 
http://mail.python.org/mailman/listinfo/python-list


ftp and python

2010-04-07 Thread Matjaz Pfefferer

Hi,
I'm Py newbie and I have some beginners problems with ftp handling.
What would be the easiest way to copy files from one ftp folder to another 
without downloading them to local system?
Are there any snippets for this task? (I couldn't find an example like this.)

Thx
  
_
Hotmail: Free, trusted and rich email service.
https://signup.live.com/signup.aspx?id=60969
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Striving for PEP-8 compliance

2010-04-07 Thread Lawrence D'Oliveiro
In message mailman.1599.1270652040.23598.python-l...@python.org, Tom Evans 
wrote:

 I've written a bunch of internal libraries for my company, and they
 all use two space indents, and I'd like to be more consistent and
 conform to PEP-8 as much as I can.

“A foolish consistency is the hobgoblin of little minds”
— Ralph Waldo Emerson
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Striving for PEP-8 compliance

2010-04-07 Thread Lawrence D'Oliveiro
In message mailman.1610.1270655932.23598.python-l...@python.org, Gabriel 
Genellina wrote:

 If you only reindent the code (without adding/removing lines) then you can
 compare the compiled .pyc files (excluding the first 8 bytes that contain
 a magic number and the source file timestamp). Remember that code objects
 contain line number information.

Anybody who ever creates another indentation-controlled language should be 
beaten to death with a Guido van Rossum voodoo doll.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Patrick Maupin
On Apr 7, 4:40 pm, J dreadpiratej...@gmail.com wrote:
 Can someone make me un-crazy?

 I have a bit of code that right now, looks like this:

 status = getoutput('smartctl -l selftest /dev/sda').splitlines()[6]
         status = re.sub(' (?= )(?=([^]*[^]*)*[^]*$)', :,status)
         print status

 Basically, it pulls the first actual line of data from the return you
 get when you use smartctl to look at a hard disk's selftest log.

 The raw data looks like this:

 # 1  Short offline       Completed without error       00%       679         -

 Unfortunately, all that whitespace is arbitrary single space
 characters.  And I am interested in the string that appears in the
 third column, which changes as the test runs and then completes.  So
 in the example, Completed without error

 The regex I have up there doesn't quite work, as it seems to be
 subbing EVERY space (or at least in instances of more than one space)
 to a ':' like this:

 # 1: Short offline:: Completed without error:: 00%:: 679 -

 Ultimately, what I'm trying to do is either replace any space that is one 
 space wiht a delimiter, then split the result into a list and

 get the third item.

 OR, if there's a smarter, shorter, or better way of doing it, I'd love to 
 know.

 The end result should pull the whole string in the middle of that
 output line, and then I can use that to compare to a list of possible
 output strings to determine if the test is still running, has
 completed successfully, or failed.

 Unfortunately, my google-fu fails right now, and my Regex powers were
 always rather weak anyway...

 So any ideas on what the best way to proceed with this would be?

You mean like this?

>>> import re
>>> re.split(' {2,}', '# 1  Short offline   Completed without error   00%')
['# 1', 'Short offline', 'Completed without error', '00%']


Regards,
Pat
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Patrick Maupin
On Apr 7, 4:47 pm, Grant Edwards inva...@invalid.invalid wrote:
 On 2010-04-07, J dreadpiratej...@gmail.com wrote:

  Can someone make me un-crazy?

 Definitely.  Regex is driving you crazy, so don't use a regex.

   inputString = # 1  Short offline       Completed without error     00%     
   679         -

   print ' '.join(inputString.split()[4:-3])

  So any ideas on what the best way to proceed with this would be?

 Anytime you have a problem with a regex, the first thing you should
 ask yourself:  do I really, _really_ need a regex?

 Hint: the answer is usually no.

 --
 Grant Edwards               grant.b.edwards        Yow! I'm continually AMAZED
                                   at               at th'breathtaking effects
                               gmail.com            of WIND EROSION!!

OK, fine.  Post a better solution to this problem than:

>>> import re
>>> re.split(' {2,}', '# 1  Short offline   Completed without error   00%')
['# 1', 'Short offline', 'Completed without error', '00%']


Regards,
Pat
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: help req: installing debugging symbols

2010-04-07 Thread Shashwat Anand
Install python in a different directory, use $prefix for that. Change PATH
value accordingly


2010/4/5 sanam singh sanamsi...@hotmail.com

  Hi,

 I am using ununtu 9.10. I want to  install  a version of Python that was
 compiled with debug symbols.

 But if I delete python from ubuntu it would definitely stop working . And
 python comes preintalled in ubuntu without debuggi ng symbols.

 How can i install python with debugging symbols ?

 Thanks.

 Regards,

 Sanam



 --
 Hotmail: Trusted email with Microsoft’s powerful SPAM protection. Sign up
 now. https://signup.live.com/signup.aspx?id=60969

 --
 http://mail.python.org/mailman/listinfo/python-list


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Patrick Maupin
On Apr 7, 7:49 pm, Patrick Maupin pmau...@gmail.com wrote:
 On Apr 7, 4:40 pm, J dreadpiratej...@gmail.com wrote:



  Can someone make me un-crazy?

  I have a bit of code that right now, looks like this:

  status = getoutput('smartctl -l selftest /dev/sda').splitlines()[6]
          status = re.sub(' (?= )(?=([^]*[^]*)*[^]*$)', :,status)
          print status

  Basically, it pulls the first actual line of data from the return you
  get when you use smartctl to look at a hard disk's selftest log.

  The raw data looks like this:

  # 1  Short offline       Completed without error       00%       679        
   -

  Unfortunately, all that whitespace is arbitrary single space
  characters.  And I am interested in the string that appears in the
  third column, which changes as the test runs and then completes.  So
  in the example, Completed without error

  The regex I have up there doesn't quite work, as it seems to be
  subbing EVERY space (or at least in instances of more than one space)
  to a ':' like this:

  # 1: Short offline:: Completed without error:: 00%:: 
  679 -

  Ultimately, what I'm trying to do is either replace any space that is one 
  space wiht a delimiter, then split the result into a list and

  get the third item.

  OR, if there's a smarter, shorter, or better way of doing it, I'd love to 
  know.

  The end result should pull the whole string in the middle of that
  output line, and then I can use that to compare to a list of possible
  output strings to determine if the test is still running, has
  completed successfully, or failed.

  Unfortunately, my google-fu fails right now, and my Regex powers were
  always rather weak anyway...

  So any ideas on what the best way to proceed with this would be?

 You mean like this?

  >>> import re
  >>> re.split(' {2,}', '# 1  Short offline       Completed without error       00%')
  ['# 1', 'Short offline', 'Completed without error', '00%']



 Regards,
 Pat

BTW, although I find it annoying when people say don't do that when
that is a perfectly good thing to do, and although I also find it
annoying when people tell you what not to do without telling you what
*to* do, and although I find the regex solution to this problem to be
quite clean, the equivalent non-regex solution is not terrible, so I
will present it as well, for your viewing pleasure:

>>> [x for x in '# 1  Short offline   Completed without error   00%'.split('  ') if x.strip()]
['# 1', 'Short offline', ' Completed without error', ' 00%']

Regards,
Pat
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Striving for PEP-8 compliance

2010-04-07 Thread Chris Rebert
On Wed, Apr 7, 2010 at 5:35 PM, Lawrence D'Oliveiro @ wrote:
 In message mailman.1610.1270655932.23598.python-l...@python.org, Gabriel
 Genellina wrote:

 If you only reindent the code (without adding/removing lines) then you can
 compare the compiled .pyc files (excluding the first 8 bytes that contain
 a magic number and the source file timestamp). Remember that code objects
 contain line number information.

 Anybody who ever creates another indentation-controlled language should be
 beaten to death with a Guido van Rossum voodoo doll.

I'll go warn Don Syme. :P  I wonder how Microsoft will react.
http://blogs.msdn.com/dsyme/archive/2006/08/24/715626.aspx

Cheers,
Chris
--
http://blog.rebertia.com/2010/01/24/of-braces-and-semicolons/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ftp and python

2010-04-07 Thread Tim Chase

Matjaz Pfefferer wrote:

What would be the easiest way to copy files from one ftp
folder to another without downloading them to local system?


As best I can tell, this isn't well-supported by FTP[1], which 
doesn't seem to have a native "copy this file from 
server-location to server-location, bypassing the client" command. 
There's a pair of RNFR/RNTO commands that allow you to rename (or 
perhaps move as well) a file, which ftplib.FTP.rename() supports, 
but it sounds like you want two copies.


When I've wanted to do this, I've used a non-FTP method, usually 
SSH'ing into the server and just using cp.  This could work for 
you if you have pycrypto/paramiko installed.


Your last hope would be that your particular FTP server has some 
COPY extension that falls outside of RFC parameters -- something 
that's not a portable solution, but if you're doing a one-off 
script or something in a controlled environment, could work.


Otherwise, you'll likely be stuck slurping the file down just to 
send it back up.
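
That slurp-and-resend round trip is straightforward with ftplib. A sketch (the function name and paths are invented; it assumes an already-logged-in ftplib.FTP connection, and buffers the whole file in memory):

```python
import ftplib
import io

def ftp_copy(ftp, src_path, dst_path):
    """Copy src_path to dst_path on the server `ftp` is connected to,
    routing the bytes through the client."""
    buf = io.BytesIO()
    ftp.retrbinary('RETR ' + src_path, buf.write)  # download into memory
    buf.seek(0)
    ftp.storbinary('STOR ' + dst_path, buf)        # upload under the new name

# Typical use (hypothetical host and credentials):
#   ftp = ftplib.FTP('ftp.example.com')
#   ftp.login('user', 'password')
#   ftp_copy(ftp, '/incoming/report.csv', '/archive/report.csv')
#   ftp.quit()
```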


-tkc


[1]
http://en.wikipedia.org/wiki/List_of_FTP_commands




--
http://mail.python.org/mailman/listinfo/python-list


Re: Python and Regular Expressions

2010-04-07 Thread Patrick Maupin
On Apr 7, 3:52 am, Chris Rebert c...@rebertia.com wrote:

 Regular expressions != Parsers

True, but lots of parsers *use* regular expressions in their
tokenizers.  In fact, if you have a pure Python parser, you can often
get huge performance gains by rearranging your code slightly so that
you can use regular expressions in your tokenizer, because that
effectively gives you access to a fast, specialized C library that is
built into practically every Python interpreter on the planet.
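
As a concrete illustration of that pattern, here is a toy tokenizer where one compiled regex with named alternatives does all of the per-character work in C (the token names are arbitrary):

```python
import re

TOKEN_RE = re.compile(r"""
    (?P<NUMBER>\d+)
  | (?P<NAME>[A-Za-z_]\w*)
  | (?P<OP>[+\-*/()=])
  | (?P<WS>\s+)
""", re.VERBOSE)

def tokenize(text):
    """Yield (kind, text) pairs; the regex engine, not Python bytecode,
    advances through the input."""
    for m in TOKEN_RE.finditer(text):
        if m.lastgroup != 'WS':       # drop whitespace tokens
            yield m.lastgroup, m.group()
```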

 Every time someone tries to parse nested structures using regular
 expressions, Jamie Zawinski kills a puppy.

And yet, if you are parsing stuff in Python, and your parser doesn't
use some specialized C code for tokenization (which will probably be
regular expressions unless you are using mxtexttools or some other
specialized C tokenizer code), your nested structure parser will be
dog slow.

Now, for some applications, the speed just doesn't matter, and for
people who don't yet know the difference between regexps and parsing,
pointing them at PyParsing is certainly doing them a valuable service.

But that's twice today when I've seen people warned off regular
expressions without a cogent explanation that, while the re module is
good at what it does, it really only handles the very lowest level of
a parsing problem.

My 2 cents is that something like PyParsing is absolutely great for
people who want a simple parser without a lot of work.  But if people
use PyParsing, and then find out that (for their particular
application) it isn't fast enough, and then wonder what to do about
it, if all they remember is that somebody told them not to use regular
expressions, they will just come to the false conclusion that pure
Python is too painfully slow for any real world task.

Regards,
Pat
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Performance of list vs. set equality operations

2010-04-07 Thread Steven D'Aprano
On Wed, 07 Apr 2010 10:55:10 -0700, Raymond Hettinger wrote:

 [Gustavo Nare]
 In other words: The more different elements two collections have, the
 faster it is to compare them as sets. And as a consequence, the more
 equivalent elements two collections have, the faster it is to compare
 them as lists.

 Is this correct?
 
 If two collections are equal, then comparing them as a set is always
 slower than comparing them as a list.  Both have to call __eq__ for
 every element, but sets have to search for each element while lists can
 just iterate over consecutive pointers.
 
 If the two collections have unequal sizes, then both ways immediately
 return unequal.


Perhaps I'm misinterpreting what you are saying, but I can't confirm that 
behaviour, at least not for subclasses of list:

>>> class MyList(list):
...     def __len__(self):
...         return self.n
...
>>> L1 = MyList(range(10))
>>> L2 = MyList(range(10))
>>> L1.n = 9
>>> L2.n = 10
>>> L1 == L2
True
>>> len(L1) == len(L2)
False




-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Performance of list vs. set equality operations

2010-04-07 Thread Patrick Maupin
On Apr 7, 8:41 pm, Steven D'Aprano
ste...@remove.this.cybersource.com.au wrote:
 On Wed, 07 Apr 2010 10:55:10 -0700, Raymond Hettinger wrote:
  [Gustavo Nare]
  In other words: The more different elements two collections have, the
  faster it is to compare them as sets. And as a consequence, the more
  equivalent elements two collections have, the faster it is to compare
  them as lists.

  Is this correct?

  If two collections are equal, then comparing them as a set is always
  slower than comparing them as a list.  Both have to call __eq__ for
  every element, but sets have to search for each element while lists can
  just iterate over consecutive pointers.

  If the two collections have unequal sizes, then both ways immediately
  return unequal.

 Perhaps I'm misinterpreting what you are saying, but I can't confirm that
 behaviour, at least not for subclasses of list:

  >>> class MyList(list):
  ...     def __len__(self):
  ...             return self.n
  ...
  >>> L1 = MyList(range(10))
  >>> L2 = MyList(range(10))
  >>> L1.n = 9
  >>> L2.n = 10
  >>> L1 == L2
  True
  >>> len(L1) == len(L2)
  False

 --
 Steven

I think what he is saying is that the list __eq__ method will look at
the list lengths first.  This may or may not be considered a subtle
bug for the edge case you are showing.

If I do the following:

>>> L1 = range(1000)
>>> L2 = range(1000)
>>> L3 = range(1001)
>>> L1 == L2
True
>>> L1 == L3
False


I don't even need to run timeit -- the True takes awhile to print
out, while the False prints out immediately.

Regards,
Pat
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread James Stroud

Patrick Maupin wrote:

BTW, although I find it annoying when people say "don't do that" when
that is a perfectly good thing to do, and although I also find it
annoying when people tell you what not to do without telling you what
*to* do, and although I find the regex solution to this problem to be
quite clean, the equivalent non-regex solution is not terrible


I propose a new way to answer questions on c.l.python that will (1) give 
respondents the pleasure of vague admonishment and (2) actually answer the 
question. The way I propose utilizes the double negative. For example:

You are doing it wrong! Don't not do <code>re.split('\s{2,}', s[2])</code>.

Please answer this way in the future.

Thank you,
James


--
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Patrick Maupin
On Apr 7, 9:02 pm, James Stroud nospamjstroudmap...@mbi.ucla.edu
wrote:
 Patrick Maupin wrote:
  BTW, although I find it annoying when people say don't do that when
  that is a perfectly good thing to do, and although I also find it
  annoying when people tell you what not to do without telling you what
  *to* do, and although I find the regex solution to this problem to be
  quite clean, the equivalent non-regex solution is not terrible

 I propose a new way to answer questions on c.l.python that will (1) give 
 respondents the pleasure of vague admonishment and (2) actually answer the 
 question. The way I propose utilizes the double negative. For example:

 You are doing it wrong! Don't not do <code>re.split('\s{2,}', s[2])</code>.

 Please answer this way in the future.

I most certainly will not consider when that isn't warranted!

OTOH, in general I am more interested in admonishing the authors of
the pseudo-answers than I am the authors of the questions, despite the
fact that I find this hilarious:

http://despair.com/cluelessness.html

Regards,
Pat
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Grant Edwards
On 2010-04-08, Patrick Maupin pmau...@gmail.com wrote:
 On Apr 7, 4:47?pm, Grant Edwards inva...@invalid.invalid wrote:
 On 2010-04-07, J dreadpiratej...@gmail.com wrote:

  Can someone make me un-crazy?

 Definitely.  Regex is driving you crazy, so don't use a regex.

   inputString = "# 1  Short offline       Completed without error       00%       679         -"

   print ' '.join(inputString.split()[4:-3])
[...]

 OK, fine.  Post a better solution to this problem than:

  >>> import re
  >>> re.split(' {2,}', '# 1  Short offline   Completed without error   00%')
  ['# 1', 'Short offline', 'Completed without error', '00%']

OK, I'll bite: what's wrong with the solution I already posted?

-- 
Grant

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Grant Edwards
On 2010-04-08, James Stroud nospamjstroudmap...@mbi.ucla.edu wrote:
 Patrick Maupin wrote:
 BTW, although I find it annoying when people say don't do that when
 that is a perfectly good thing to do, and although I also find it
 annoying when people tell you what not to do without telling you what
 *to* do, and although I find the regex solution to this problem to be
 quite clean, the equivalent non-regex solution is not terrible

 I propose a new way to answer questions on c.l.python that will (1) give 
 respondents the pleasure of vague admonishment and (2) actually answer the 
 question. The way I propose utilizes the double negative. For example:

 You are doing it wrong! Don't not do <code>re.split('\s{2,}', s[2])</code>.

 Please answer this way in the future.

I will certainly try to avoid not answering in a manner not unlike that.

-- 
Grant
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Patrick Maupin
On Apr 7, 9:36 pm, Grant Edwards inva...@invalid.invalid wrote:
 On 2010-04-08, Patrick Maupin pmau...@gmail.com wrote: On Apr 7, 4:47?pm, 
 Grant Edwards inva...@invalid.invalid wrote:
  On 2010-04-07, J dreadpiratej...@gmail.com wrote:

   Can someone make me un-crazy?

  Definitely.  Regex is driving you crazy, so don't use a regex.

   inputString = "# 1  Short offline       Completed without error       00%       679         -"

   print ' '.join(inputString.split()[4:-3])

 [...]

  OK, fine.  Post a better solution to this problem than:

   >>> import re
   >>> re.split(' {2,}', '# 1  Short offline       Completed without error       00%')
   ['# 1', 'Short offline', 'Completed without error', '00%']

 OK, I'll bite: what's wrong with the solution I already posted?

 --
 Grant

Sorry, my eyes completely missed your one-liner, so my criticism about
not posting a solution was unwarranted.  I don't think you and I read
the problem the same way (which is probably why I didn't notice your
solution -- because it wasn't solving the problem I thought I saw).

When I saw And I am interested in the string that appears in the
third column, which changes as the test runs and then completes I
assumed that, not only could that string change, but so could the one
before it.

I guess my base assumption was that anything with words in it could
change.  I was looking at the OP's attempt at a solution, and he
obviously felt he needed to see two or more spaces as an item
delimiter.

(And I got testy because of seeing other IMO unwarranted denigration
of re on the list lately.)

Regards,
Pat
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Steven D'Aprano
On Wed, 07 Apr 2010 18:03:47 -0700, Patrick Maupin wrote:

 BTW, although I find it annoying when people say don't do that when
 that is a perfectly good thing to do, and although I also find it
 annoying when people tell you what not to do without telling you what
 *to* do, 

Grant did give a perfectly good solution.


 and although I find the regex solution to this problem to be
 quite clean, the equivalent non-regex solution is not terrible, so I
 will present it as well, for your viewing pleasure:
 
  >>> [x for x in '# 1  Short offline   Completed without error   00%'.split('  ') if x.strip()]
 ['# 1', 'Short offline', ' Completed without error', ' 00%']


This is one of the reasons we're so often suspicious of re solutions:


>>> s = '# 1  Short offline       Completed without error       00%'
>>> tre = Timer("re.split(' {2,}', s)",
... "import re; from __main__ import s")
>>> tsplit = Timer("[x for x in s.split('  ') if x.strip()]",
... "from __main__ import s")

>>> re.split(' {2,}', s) == [x for x in s.split('  ') if x.strip()]
True

>>> min(tre.repeat(repeat=5))
6.1224789619445801
>>> min(tsplit.repeat(repeat=5))
1.8338048458099365


Even when they are correct and not unreadable line-noise, regexes tend to 
be slow. And they get worse as the size of the input increases:

>>> s *= 1000
>>> min(tre.repeat(repeat=5, number=1000))
2.3496899604797363
>>> min(tsplit.repeat(repeat=5, number=1000))
0.41538596153259277

>>> s *= 10
>>> min(tre.repeat(repeat=5, number=1000))
23.739185094833374
>>> min(tsplit.repeat(repeat=5, number=1000))
4.6444299221038818


And this isn't even one of the pathological O(N**2) or O(2**N) regexes.

Don't get me wrong -- regexes are a useful tool. But if your first 
instinct is to write a regex, you're doing it wrong.


[quote]
A related problem is Perl's over-reliance on regular expressions 
that is exaggerated by advocating regex-based solution in almost 
all O'Reilly books. The latter until recently were the most
authoritative source of published information about Perl. 

While simple regular expression is a beautiful thing and can 
simplify operations with string considerably, overcomplexity in
regular expressions is extremly dangerous: it cannot serve a basis
for serious, professional programming, it is fraught with pitfalls,
a big semantic mess as a result of outgrowing its primary purpose. 
Diagnostic for errors in regular expressions is even weaker then 
for the language itself and here many things are just go unnoticed.
[end quote]

http://www.softpanorama.org/Scripting/Perlbook/Ch01/
place_of_perl_among_other_lang.shtml



Even Larry Wall has criticised Perl's regex culture:

http://dev.perl.org/perl6/doc/design/apo/A05.html




-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread J
On Wed, Apr 7, 2010 at 22:45, Patrick Maupin pmau...@gmail.com wrote:

 When I saw And I am interested in the string that appears in the
 third column, which changes as the test runs and then completes I
 assumed that, not only could that string change, but so could the one
 before it.

 I guess my base assumption that anything with words in it could
 change.  I was looking at the OP's attempt at a solution, and he
 obviously felt he needed to see two or more spaces as an item
 delimiter.

I apologize for the confusion, Pat...

I could have worded that better, but at that point I was A:
Frustrated, B: starving, and C: had my wife nagging me to stop working
to come get something to eat ;-)

What I meant was, in that output string, the phrase in the middle
could change in length...
After looking at the source code for smartctl (part of the
smartmontools package for you linux people) I found the switch that
creates those status messages; they vary in character length, some
with non-text characters like ( and ) and /, and have either 3 or 4
words...

The spaces between each column, instead of being a fixed number of
spaces each, were seemingly arbitrarily created... there may be 4
spaces between two columns or there may be 9, or 7 or who knows what,
and since they were all being treated as individual spaces instead of
tabs or something, I was having trouble splitting the output into
something that was easy to parse (at least in my mind it seemed that
way).

Anyway, that's that... and I do apologize if my original post was
confusing at all...

Cheers
Jeff
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Patrick Maupin
On Apr 7, 9:51 pm, Steven D'Aprano
ste...@remove.this.cybersource.com.au wrote:
 On Wed, 07 Apr 2010 18:03:47 -0700, Patrick Maupin wrote:
  BTW, although I find it annoying when people say don't do that when
  that is a perfectly good thing to do, and although I also find it
  annoying when people tell you what not to do without telling you what
  *to* do,

 Grant did give a perfectly good solution.

Yeah, I noticed later and apologized for that.  What he gave will work
perfectly if the only data that changes the number of words is the
data the OP is looking for.  This may or may not be true.  I don't
know anything about the program generating the data, but I did notice
that the OP's attempt at an answer indicated that the OP felt (rightly
or wrongly) he needed to split on two or more spaces.


  and although I find the regex solution to this problem to be
  quite clean, the equivalent non-regex solution is not terrible, so I
  will present it as well, for your viewing pleasure:

   >>> [x for x in '# 1  Short offline       Completed without error       00%'.split('  ') if x.strip()]
   ['# 1', 'Short offline', ' Completed without error', ' 00%']

 This is one of the reasons we're so often suspicious of re solutions:

  >>> s = '# 1  Short offline       Completed without error       00%'
  >>> tre = Timer("re.split(' {2,}', s)",
  ... "import re; from __main__ import s")
  >>> tsplit = Timer("[x for x in s.split('  ') if x.strip()]",
  ... "from __main__ import s")

  >>> re.split(' {2,}', s) == [x for x in s.split('  ') if x.strip()]
 True

  >>> min(tre.repeat(repeat=5))
 6.1224789619445801
  >>> min(tsplit.repeat(repeat=5))
 1.8338048458099365

 Even when they are correct and not unreadable line-noise, regexes tend to
 be slow. And they get worse as the size of the input increases:

  >>> s *= 1000
  >>> min(tre.repeat(repeat=5, number=1000))
 2.3496899604797363
  >>> min(tsplit.repeat(repeat=5, number=1000))
 0.41538596153259277

  >>> s *= 10
  >>> min(tre.repeat(repeat=5, number=1000))
 23.739185094833374
  >>> min(tsplit.repeat(repeat=5, number=1000))
 4.6444299221038818

 And this isn't even one of the pathological O(N**2) or O(2**N) regexes.

 Don't get me wrong -- regexes are a useful tool. But if your first
 instinct is to write a regex, you're doing it wrong.

     [quote]
     A related problem is Perl's over-reliance on regular expressions
     that is exaggerated by advocating regex-based solution in almost
     all O'Reilly books. The latter until recently were the most
     authoritative source of published information about Perl.

     While simple regular expression is a beautiful thing and can
     simplify operations with string considerably, overcomplexity in
     regular expressions is extremly dangerous: it cannot serve a basis
     for serious, professional programming, it is fraught with pitfalls,
     a big semantic mess as a result of outgrowing its primary purpose.
     Diagnostic for errors in regular expressions is even weaker then
     for the language itself and here many things are just go unnoticed.
     [end quote]

 http://www.softpanorama.org/Scripting/Perlbook/Ch01/
 place_of_perl_among_other_lang.shtml

 Even Larry Wall has criticised Perl's regex culture:

 http://dev.perl.org/perl6/doc/design/apo/A05.html

Bravo!!! Good data, quotes, references, all good stuff!

I absolutely agree that regex shouldn't always be the first thing you
reach for, but I was reading way too much unsubstantiated "this is
bad.  Don't do it." on the subject recently.  In particular, when
people say "Don't use regex.  Use PyParsing!"  It may be good advice
in the right context, but it's a bit disingenuous not to mention that
PyParsing will use regex under the covers...

Regards,
Pat

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Tkinter inheritance mess?

2010-04-07 Thread ejetzer
On 5 avr, 22:32, Lie Ryan lie.1...@gmail.com wrote:
 On 04/06/10 02:38, ejetzer wrote:



  On 5 avr, 12:36, ejetzer ejet...@gmail.com wrote:
  For a school project, I'm trying to make a minimalist web browser, and
  I chose to use Tk as the rendering toolkit. I made my parser classes
  into Tkinter canvases, so that I would only have to call pack and
  mainloop functions in order to display the rendering. Right now, two
  bugs are affecting the program :
  1) When running the full app¹, which fetches a document and then
  attempts to display it, I get a TclError :
                   _tkinter.TclError: bad window path name {Extensible
  Markup Language (XML) 1.0 (Fifth Edition)}
  2) When running only the parsing and rendering test², I get a big
  window to open, with nothing displayed. I am not quite familiar with
  Tk, so I have no idea of why it acts that way.

  1: webbrowser.py
  2: xmlparser.py

  I just realized I haven't included the Google Code project url :
 http://code.google.com/p/smally-browser/source/browse/#svn/trunk

 Check your indentation xmlparser.py in line 63 to 236, are they supposed
 to be correct?

Yes, these are functions that are used exclusively inside the feed
function, so I decided to restrict their namespace. I just realized it
could be confusing, so I placed them in the global namespace.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Grant Edwards
On 2010-04-08, Patrick Maupin pmau...@gmail.com wrote:

 Sorry, my eyes completely missed your one-liner, so my criticism about
 not posting a solution was unwarranted.  I don't think you and I read
 the problem the same way (which is probably why I didn't notice your
 solution -- because it wasn't solving the problem I thought I saw).

No worries.

 When I saw And I am interested in the string that appears in the
 third column, which changes as the test runs and then completes I
 assumed that, not only could that string change, but so could the one
 before it.

If that's the case, my solution won't work right.

 I guess my base assumption that anything with words in it could
 change.  I was looking at the OP's attempt at a solution, and he
 obviously felt he needed to see two or more spaces as an item
 delimiter.

If the requirement is indeed two or more spaces as a delimiter with
spaces allowed in any field, then a regular expression split is
probably the best solution.

-- 
Grant



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Performance of list vs. set equality operations

2010-04-07 Thread Raymond Hettinger
[Raymond Hettinger]
  If the two collections have unequal sizes, then both ways immediately
  return unequal.

[Steven D'Aprano]
 Perhaps I'm misinterpreting what you are saying, but I can't confirm that
 behaviour, at least not for subclasses of list:

For doubters, see list_richcompare() in
http://svn.python.org/view/python/trunk/Objects/listobject.c?revision=78522&view=markup

if (Py_SIZE(vl) != Py_SIZE(wl) && (op == Py_EQ || op == Py_NE)) {
    /* Shortcut: if the lengths differ, the lists differ */
    PyObject *res;
    if (op == Py_EQ)
        res = Py_False;
    else
        res = Py_True;
    Py_INCREF(res);
    return res;
}

And see set_richcompare() in
http://svn.python.org/view/python/trunk/Objects/setobject.c?revision=78886&view=markup

case Py_EQ:
    if (PySet_GET_SIZE(v) != PySet_GET_SIZE(w))
        Py_RETURN_FALSE;
    if (v->hash != -1 &&
        ((PySetObject *)w)->hash != -1 &&
        v->hash != ((PySetObject *)w)->hash)
        Py_RETURN_FALSE;
    return set_issubset(v, w);
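The length shortcut in list_richcompare() is easy to observe from Python. A small sketch with timeit (exact numbers will vary by machine; only the relative order matters):

```python
from timeit import timeit

a = list(range(100000))
b = list(range(100000))      # equal: forces a full element-by-element compare
c = list(range(100001))      # different length: the O(1) shortcut fires

t_full = timeit(lambda: a == b, number=1000)
t_short = timeit(lambda: a == c, number=1000)

# The mismatched-length comparison never walks the elements,
# so it is orders of magnitude faster than the equal case.
print(t_full, t_short)
```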


Raymond
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Patrick Maupin
On Apr 7, 9:51 pm, Steven D'Aprano
ste...@remove.this.cybersource.com.au wrote:

 This is one of the reasons we're so often suspicious of re solutions:

  >>> s = '# 1  Short offline       Completed without error       00%'
  >>> tre = Timer("re.split(' {2,}', s)",
  ... "import re; from __main__ import s")
  >>> tsplit = Timer("[x for x in s.split('  ') if x.strip()]",
  ... "from __main__ import s")

  >>> re.split(' {2,}', s) == [x for x in s.split('  ') if x.strip()]
 True

  >>> min(tre.repeat(repeat=5))
 6.1224789619445801
  >>> min(tsplit.repeat(repeat=5))
 1.8338048458099365

I will confess that, in my zeal to defend re, I gave a simple one-
liner, rather than the more optimized version:

>>> from timeit import Timer
>>> s = '# 1  Short offline   Completed without error   00%'
>>> tre = Timer("splitter(s)",
... "import re; from __main__ import s; splitter = re.compile(' {2,}').split")
>>> tsplit = Timer("[x for x in s.split('  ') if x.strip()]",
... "from __main__ import s")
>>> min(tre.repeat(repeat=5))
1.893190860748291
>>> min(tsplit.repeat(repeat=5))
2.0661051273345947

You're right that if you have an 800K byte string, re doesn't perform
as well as split, but the delta is only a few percent.

>>> s *= 1
>>> min(tre.repeat(repeat=5, number=1000))
15.331652164459229
>>> min(tsplit.repeat(repeat=5, number=1000))
14.596404075622559

Regards,
Pat
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Q] raise exception with fake filename and linenumber

2010-04-07 Thread Gabriel Genellina

En Wed, 07 Apr 2010 17:23:22 -0300, kwatch kwa...@gmail.com escribió:


Is it possible to raise exception with custom traceback to specify
file and line?
I'm creating a certain parser.
I want to report syntax error with the same format as other exception.
-
1: def parse(filename):
2:     if something_is_wrong():
3:         linenum = 123
4:         raise Exception("syntax error on %s, line %s" % (filename, linenum))
5:
6: parse('example.file')
-

my hope is:
-
Traceback (most recent call last):
  File "/tmp/parser.py", line 6, in <module>
    parse('example.file')
  File "/tmp/parser.py", line 4, in parse
    raise Exception("syntax error on %s, line %s" % (filename, linenum))
  File "/tmp/example.file", line 123
    foreach item in items   # wrong syntax line
Exception: syntax error
-


The built-in SyntaxError exception does what you want. Constructor  
parameters are undocumented, but they're as follows:


   raise SyntaxError("A descriptive error message", (filename, linenum,
colnum, source_line))


colnum is used to place the ^ symbol (10 in this fake example). Output:

Traceback (most recent call last):
  File "1.py", line 9, in <module>
    foo()
  File "1.py", line 7, in foo
    raise SyntaxError("A descriptive error message", (filename, linenum,
colnum, "this is line 123 in example.file"))

  File "example.file", line 123
    this is line 123 in example.file
             ^
SyntaxError: A descriptive error message

--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: Profiling: Interpreting tottime

2010-04-07 Thread Gabriel Genellina
En Wed, 07 Apr 2010 18:44:39 -0300, Nikolaus Rath nikol...@rath.org  
escribió:



def check_s3_refcounts():
    """Check S3 object reference counts"""

    global found_errors
    log.info('Checking S3 object reference counts...')

    for (key, refcount) in conn.query("SELECT id, refcount FROM s3_objects"):

        refcount2 = conn.get_val("SELECT COUNT(inode) FROM blocks WHERE s3key=?",
                                 (key,))
        if refcount != refcount2:
            log_error("S3 object %s has invalid refcount, setting from %d to %d",
                      key, refcount, refcount2)
            found_errors = True
            if refcount2 != 0:
                conn.execute("UPDATE s3_objects SET refcount=? WHERE id=?",
                             (refcount2, key))
            else:
                # Orphaned object will be picked up by check_keylist
                conn.execute('DELETE FROM s3_objects WHERE id=?', (key,))

When I ran cProfile.Profile().runcall() on it, I got the following
result:

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1 7639.962 7639.962 7640.269 7640.269 fsck.py:270(check_s3_refcounts)


So according to the profiler, the entire 7639 seconds where spent
executing the function itself.

How is this possible? I really don't see how the above function can
consume any CPU time without spending it in one of the called
sub-functions.


Is the conn object implemented as a C extension? The profiler does not  
detect calls to C functions, I think.
You may be interested in this package by Robert Kern:  
http://pypi.python.org/pypi/line_profiler

Line-by-line profiler.
line_profiler will profile the time individual lines of code take to  
execute.
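As a point of comparison, a pure-Python loop shows up exactly this way in cProfile: all its cost lands in the function's own tottime. A small self-contained sketch (the work() function is made up for illustration):

```python
import cProfile
import io
import pstats

def work():
    # A pure-Python loop: its cost is charged to work() itself,
    # since no profiled sub-functions are called.
    total = 0
    for i in range(100000):
        total += i * i
    return total

prof = cProfile.Profile()
result = prof.runcall(work)

# Capture the stats as text instead of printing to stdout.
stream = io.StringIO()
pstats.Stats(prof, stream=stream).sort_stats('tottime').print_stats()
report = stream.getvalue()
print(report)
```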


--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list



The Regex Story

2010-04-07 Thread Lie Ryan
On 04/08/10 12:45, Patrick Maupin wrote:
 (And I got testy because of seeing other IMO unwarranted denigration
 of re on the list lately.)


Why am I seeing a lot of this pattern lately:

OP: Got problem with string
+- A: Suggested a regex-based solution
   +- B: Quoted "Some people ... regex ... two problems."

or

OP: Writes some regex, found problem
+- A: Quoted "Some people ... regex ... two problems."
   +- B: Supplied regex-based solution, clean one
      +- A: Suggested PyParsing (or similar)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regex driving me crazy...

2010-04-07 Thread Patrick Maupin
On Apr 7, 9:51 pm, Steven D'Aprano
ste...@remove.this.cybersource.com.au wrote:

BTW, I don't know how you got 'True' here.

  >>> re.split(' {2,}', s) == [x for x in s.split('  ') if x.strip()]
 True

You must not have s set up to be the string given by the OP.  I just
realized there was an error in my non-regexp example, that actually
manifests itself with the test data:

>>> import re
>>> s = '# 1  Short offline   Completed without error   00%'
>>> re.split(' {2,}', s)
['# 1', 'Short offline', 'Completed without error', '00%']
>>> [x for x in s.split('  ') if x.strip()]
['# 1', 'Short offline', ' Completed without error', ' 00%']
>>> re.split(' {2,}', s) == [x for x in s.split('  ') if x.strip()]
False

To fix it requires something like:

[x.strip() for x in s.split('  ') if x.strip()]

or:

[x for x in [x.strip() for x in s.split('  ')] if x]

I haven't timed either one of these, but given that the broken
original one was slower than the simpler:

splitter = re.compile(' {2,}').split
splitter(s)

on strings of normal length, and given that nobody noticed this bug
right away (even though it was in the printout on my first message,
heh), I think that this shows that (here, let me qualify this
carefully), at least in some cases, the first regexp that comes to my
mind can be prettier, shorter, faster, less bug-prone, etc. than the
first non-regexp that comes to my mind...

Regards,
Pat
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: order that destructors get called?

2010-04-07 Thread Gabriel Genellina
En Wed, 07 Apr 2010 19:08:14 -0300, Brendan Miller catph...@catphive.net  
escribió:


I'm used to C++ where destructors get called in reverse order of
construction

like this:

{
Foo foo;
Bar bar;

// calls Bar::~Bar()
// calls Foo::~Foo()
}


That behavior is explicitly guaranteed by the C++ language. Python does  
not have such guarantees -- destructors may be delayed an arbitrary amount  
of time, or even not called at all.
In contrast, Python does have a `try/finally` construct, and the `with`  
statement. If Foo and Bar implement adequate __enter__ and __exit__  
methods, the above code would become:


with Foo() as foo:
  with Bar() as bar:
# do something

On older Python versions it is more verbose:

foo = Foo()
try:
  bar = Bar()
  try:
# do something
  finally:
bar.release_resources()
finally:
  foo.release_resources()
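For completeness, a class opts into the `with` statement by implementing the two methods mentioned above. A minimal sketch (the Resource name is made up):

```python
class Resource:
    def __enter__(self):
        self.open = True          # acquire the resource here
        return self               # bound to the 'as' target

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.open = False         # runs even if the body raised
        return False              # False: propagate any exception

with Resource() as r:
    assert r.open                 # acquired inside the block
assert not r.open                 # released on block exit
```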

I'm writing a ctypes wrapper for some native code, and I need to manage  
some
memory. I'm wrapping the memory in a python class that deletes the  
underlying

 memory when the python class's reference count hits zero.


If the Python object lifetime is tied to certain lexical scope (like the  
foo,bar local variables in your C++ example) you may use `with` or  
`finally` as above.

If some other object with a longer lifetime keeps a reference, see below.


When doing this, I noticed some odd behaviour. I had code like this:

def delete_my_resource(res):
# deletes res

class MyClass(object):
def __del__(self):
delete_my_resource(self.res)

o = MyClass()

What happens is that as the program shuts down, delete_my_resource is  
released
*before* o is released. So when __del__ get called, delete_my_resource  
is now

None.


Implementing __del__ is not always a good idea; among other things, the  
garbage collector cannot break a cycle if any involved object contains a  
__del__ method. [1]
If you still want to implement __del__, keep a reference to  
delete_my_resource in the method itself:


 def __del__(self, delete_my_resource=delete_my_resource):
     delete_my_resource(self.res)

(and do the same with any global name that delete_my_resource itself may  
reference).


The best approach is to store a weak reference to foo and bar somewhere;  
weak references are notified right before the referent is destroyed. [4]
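A short sketch of the weak-reference approach. Note this relies on CPython's reference counting to run the callback as soon as the last reference disappears (the Wrapped class stands in for the ctypes-owning object):

```python
import weakref

class Wrapped:                    # stand-in for the resource-owning class
    pass

events = []
obj = Wrapped()

# The callback runs just before obj is destroyed; it receives the dead
# weak reference and must not try to resurrect the referent.
ref = weakref.ref(obj, lambda r: events.append('released'))

del obj                           # refcount hits zero in CPython
print(events)                     # -> ['released']
```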


And last, if you want to release something when the program terminates,  
you may use the atexit module.


What I'm wondering is if there's any documented order that reference  
counts

get decremented when a module is released or when a program terminates.


Not much, as Stephen Hansen already told you; but see the comments in  
PyImport_Cleanup function in import.c [2] and in _PyModule_Clear in  
moduleobject.c [3]
Standard disclaimer: these undocumented details only apply to the current  
version of CPython, may change in future releases, and are not applicable  
at all to other implementations. So it's not a good idea to rely on this  
behavior.


[1] http://docs.python.org/reference/datamodel.html#object.__del__
[2] http://svn.python.org/view/python/trunk/Python/import.c?view=markup
[3]  
http://svn.python.org/view/python/trunk/Objects/moduleobject.c?view=markup

[4] http://docs.python.org/library/weakref.html

--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Simple Cookie Script: Not recognising Cookie

2010-04-07 Thread Jimbo
Hi, I have a simple Python program that assigns a cookie to a web user
when they open the script the 1st time (in an internet browser). If
they open the script a second time the script should display the line
"You have been here 2 times.", if they open the script again it
should show on the webpage "You have been here 3 times" and so on.

But for some reason, my program is not assigning or recognising an
assigned cookie and outputting the line "You have been here x times." I
have gone over my code for like 2 hours now and I can't figure out what
is going wrong.

Can you help me figure out what's wrong? I have my own CGI server that
just runs on my machine, so it's not that; it's the code to recognise/
assign a cookie.

[code]#!/usr/bin/env python

import Cookie
import cgi
import os

HTML_template = """
<html>
  <head>

  </head>
  <body>
<p> %s </p>
  </body>
</html>
"""


def main():

    # Web client is new to the site so we need to assign a cookie to them
    cookie = Cookie.SimpleCookie()
    cookie['SESSIONID'] = '1'
    code = "No cookie exists. Welcome, this is your first visit."

    if 'HTTP_COOKIE' in os.environ:
        cookie = Cookie.SimpleCookie(os.environ['HTTP_COOKIE'])
        # If web client has been here before
        if cookie.has_key('SESSIONID'):
            cookie['SESSIONID'].value = int(cookie['SESSIONID'].value) + 1
            code = "You have been here %s times." % cookie['SESSIONID'].value
        else:
            cookie = Cookie.SimpleCookie()
            cookie['SESSIONID'] = '1'
            code = "I have a cookie, but SESSIONID does not exist"

    print "Content-Type: text/html\n"
    print HTML_template % code


if __name__ == "__main__":
    main()
[/code]
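One thing worth checking: the script above builds a SimpleCookie but never emits a Set-Cookie header, so the browser has nothing to store or send back on the next visit. A minimal sketch of the header step, written for Python 3 (where the Python 2 Cookie module is named http.cookies; `render_headers` is an illustrative helper, not part of the original script):

```python
from http.cookies import SimpleCookie  # Python 3 name of the Python 2 "Cookie" module

def render_headers(environ):
    """Build the CGI header block, *including* Set-Cookie (illustrative helper)."""
    cookie = SimpleCookie(environ.get('HTTP_COOKIE', ''))
    if 'SESSIONID' in cookie:
        cookie['SESSIONID'] = int(cookie['SESSIONID'].value) + 1
    else:
        cookie['SESSIONID'] = '1'
    # The crucial step: the cookie must be written out with the other
    # headers, before the blank line that ends the header block.
    return "Content-Type: text/html\n%s\n" % cookie.output()

print(render_headers({}))                              # first visit: Set-Cookie: SESSIONID=1
print(render_headers({'HTTP_COOKIE': 'SESSIONID=3'}))  # returning visit: Set-Cookie: SESSIONID=4
```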


Re: ftp and python

2010-04-07 Thread John Nagle

Tim Chase wrote:

Matjaz Pfefferer wrote:

What would be the easiest way to copy files from one ftp
folder to another without downloading them to local system?


As best I can tell, this isn't well-supported by FTP[1], which doesn't 
seem to have a native "copy this file from server-location to 
server-location, bypassing the client" operation. There's a pair of RNFR/RNTO 
commands that allow you to rename (or perhaps move as well) a file, which 
ftplib.FTP.rename() supports, but it sounds like you want two copies.


   In theory, the FTP spec supports three-way transfers, where the
source, destination, and control can all be on different machines.
But no modern implementation supports that.
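Absent server-to-server support, the usual workaround is to stream the file through the client in memory; a sketch using ftplib (the `ftp_copy` helper and the host/paths in the usage note are illustrative, not part of ftplib):

```python
import io
from ftplib import FTP  # shown for context; the helper accepts any FTP-like object

def ftp_copy(ftp, src_path, dst_path, blocksize=8192):
    """Copy src_path to dst_path on the same server by pulling the
    bytes down with RETR and pushing them back up with STOR."""
    buf = io.BytesIO()
    ftp.retrbinary('RETR ' + src_path, buf.write, blocksize)
    buf.seek(0)
    ftp.storbinary('STOR ' + dst_path, buf, blocksize)

# Usage against a real server (hypothetical host and paths):
# ftp = FTP('ftp.example.com')
# ftp.login('user', 'password')
# ftp_copy(ftp, '/src/file.txt', '/dst/file.txt')
```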

John Nagle


Re: remote multiprocessing, shared object

2010-04-07 Thread Kushal Kumaran
On Thu, Apr 8, 2010 at 3:04 AM, Norm Matloff matl...@doe.com wrote:
 Should be a simple question, but I can't seem to make it work from my
 understanding of the docs.

 I want to use the multiprocessing module with remote clients, accessing
 shared lists.  I gather one is supposed to use register(), but I don't
 see exactly how.  I'd like to have the clients read and write the shared
 list directly, not via some kind of get() and set() functions.  It's
 clear how to do this in a shared-memory setting, but how can one do it
 across a network, i.e. with serve_forever(), connect() etc.?

 Any help, especially with a concrete example, would be much appreciated.
 Thanks.


There's an example in the multiprocessing documentation.
http://docs.python.org/library/multiprocessing.html#using-a-remote-manager

It creates a shared queue, but it's easy to modify for lists.

For example, here's your shared list server:
from multiprocessing.managers import BaseManager
shared_list = []
class ListManager(BaseManager): pass
ListManager.register('get_list', callable=lambda:shared_list)
m = ListManager(address=('', 5), authkey='abracadabra')
s = m.get_server()
s.serve_forever()

A client that adds an element to your shared list:
import random
from multiprocessing.managers import BaseManager
class ListManager(BaseManager): pass
ListManager.register('get_list')
m = ListManager(address=('localhost', 5), authkey='abracadabra')
m.connect()
l = m.get_list()
l.append(random.random())

And a client that prints out the shared list:
from multiprocessing.managers import BaseManager
class ListManager(BaseManager): pass
ListManager.register('get_list')
m = ListManager(address=('localhost', 5), authkey='abracadabra')
m.connect()
l = m.get_list()
print str(l)

-- 
regards,
kushal


Re: Regex driving me crazy...

2010-04-07 Thread Kushal Kumaran
On Thu, Apr 8, 2010 at 3:10 AM, J dreadpiratej...@gmail.com wrote:
 Can someone make me un-crazy?

 I have a bit of code that right now, looks like this:

 status = getoutput('smartctl -l selftest /dev/sda').splitlines()[6]
        status = re.sub(' (?= )(?=([^"]*"[^"]*")*[^"]*$)', ":", status)
        print status

 Basically, it pulls the first actual line of data from the return you
 get when you use smartctl to look at a hard disk's selftest log.

 The raw data looks like this:

 # 1  Short offline       Completed without error       00%       679         -

 Unfortunately, all that whitespace is an arbitrary number of single
 space characters.  And I am interested in the string that appears in
 the third column, which changes as the test runs and then completes.
 So in the example, "Completed without error".

 The regex I have up there doesn't quite work, as it seems to be
 subbing EVERY space (or at least every instance of more than one
 space) to a ':' like this:

 # 1: Short offline:: Completed without error:: 00%:: 679 -

 Ultimately, what I'm trying to do is either replace any run of more
 than one space with a delimiter, then split the result into a list
 and get the third item.

 OR, if there's a smarter, shorter, or better way of doing it, I'd love to 
 know.

 The end result should pull the whole string in the middle of that
 output line, and then I can use that to compare to a list of possible
 output strings to determine if the test is still running, has
 completed successfully, or failed.


Is there any particular reason you absolutely must extract the status
message?  If you already have a list of possible status messages, you
could just test which one of those is present in the line...

 Unfortunately, my google-fu fails right now, and my Regex powers were
 always rather weak anyway...

 So any ideas on what the best way to proceed with this would be?
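If the columns really are separated by runs of two or more spaces (while the status text itself contains only single spaces), splitting on those runs sidesteps the quoting regex entirely; a sketch using the sample line from the post:

```python
import re

# Sample line from the post; fields are separated by runs of 2+ spaces,
# while the status text itself contains only single spaces.
line = "# 1  Short offline       Completed without error       00%       679         -"
fields = re.split(r'\s{2,}', line.strip())
status = fields[2]
print(status)  # Completed without error
```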


-- 
regards,
kushal


[issue4570] Bad example in set tutorial

2010-04-07 Thread Kelda

Changes by Kelda kel...@gmail.com:


Removed file: http://bugs.python.org/file16790/datastructures.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue4570
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7559] TestLoader.loadTestsFromName swallows import errors

2010-04-07 Thread Chris Jerdonek

Chris Jerdonek chris.jerdo...@gmail.com added the comment:

Rietveld link: http://codereview.appspot.com/810044/show

This patch changes unittest.TestLoader.loadTestsFromName() so that ImportErrors 
will bubble up when importing from a module with a bad import statement.  
Before, the method raised an AttributeError.  The unit test code is taken from a 
patch by Salman Haq.  The patch also includes code adapted from 
http://twistedmatrix.com .

(This is my first patch, so any guidance is greatly appreciated.  Thanks.)

--
Added file: http://bugs.python.org/file16796/_patch-7559-3.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7559
___



[issue7301] Add environment variable $PYTHONWARNINGS

2010-04-07 Thread Philip Jenvey

Philip Jenvey pjen...@underboss.org added the comment:

I committed a somewhat different version of this patch to py3k to handle the 
warn options now calling for wchars, but this needs more work. Some of the 
buildbots are unhappy.

It seems like the py3k version either needs to fully decode the env values to a 
unicode obj via the file system encoding (which I doubt is initialized at this 
point) with surrogateescape, or use something along the lines of char2wchar in 
python.c.

--
assignee:  - pjenvey
resolution: fixed - accepted
status: closed - open

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7301
___



[issue8327] unintuitive behaviour of logging message propagation

2010-04-07 Thread Pascal Chambon

Pascal Chambon chambon.pas...@gmail.com added the comment:

Thanks for the doc patch; if you don't mind, I'd just add the paragraph below 
too, to clarify the fact that logger levels are only entry-point levels, 
ignored the rest of the time. There might be slight redundancies with the rest 
of the (long) documentation, but it's all benefit imo. B-)

"In addition to any handlers directly associated with a logger, *all handlers 
associated with all ancestors of the logger* are called to dispatch the message 
(unless the *propagate* flag for a logger is set to a false value, at which 
point the passing to ancestor handlers stops).

Note that in this process, the level of ancestor loggers is never considered: 
only their *propagate* attribute and the levels of their handlers can impact 
the treatment of the message being dispatched. The level of a logger thus acts 
only as an *entry-point barrier*, able to stop the whole dispatch of a message 
that enters the system through it."
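The entry-point behaviour described in the proposed paragraph can be demonstrated with a short script (a sketch; `ListHandler` is an ad-hoc recording handler written for the demonstration, not part of the logging API):

```python
import logging

messages = []

class ListHandler(logging.Handler):
    """Ad-hoc handler that records each message in a list."""
    def emit(self, record):
        messages.append(record.getMessage())

root = logging.getLogger()
root.setLevel(logging.CRITICAL)   # ancestor logger level: ignored during dispatch
root.addHandler(ListHandler())

child = logging.getLogger('child')
child.setLevel(logging.DEBUG)     # the entry-point barrier that actually matters

child.debug('propagated anyway')
print(messages)  # ['propagated anyway'] despite root's CRITICAL level
```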

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8327
___



[issue8271] str.decode('utf8', 'replace') -- conformance with Unicode 5.2.0

2010-04-07 Thread Marc-Andre Lemburg

Marc-Andre Lemburg m...@egenix.com added the comment:

STINNER Victor wrote:
 
 STINNER Victor victor.stin...@haypocalc.com added the comment:
 
 I also found out that, according to RFC 3629, surrogates 
 are considered invalid and they can't be encoded/decoded, 
 but the UTF-8 codec actually does it.
 
 Python2 does, but Python3 raises an error.
 
 Python 2.7a4+ (trunk:79675, Apr  3 2010, 16:11:36)
 >>> u"\uDC80".encode("utf8")
 '\xed\xb2\x80'
 
 Python 3.2a0 (py3k:79441, Mar 26 2010, 13:04:55)
 >>> "\uDC80".encode("utf8")
 UnicodeEncodeError: 'utf-8' codec can't encode character '\udc80' in position 
 0: surrogates not allowed
 
 Denying the encoding of surrogates (in utf8) causes a lot of crashes in 
 Python3, because most calling functions assume that _PyUnicode_AsString() 
 never fails: see #6687 (and #8195 and a lot of other crashes). It's not a 
 good idea to change it in Python 2.7, because it would require a huge amount 
 of work and we are close to the first beta of 2.7.

I wonder how that change got into the 3.x branch - I would certainly
not have approved it for the reasons given further up on this ticket.

I think we should revert that change for Python 3.2.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8271
___



[issue8271] str.decode('utf8', 'replace') -- conformance with Unicode 5.2.0

2010-04-07 Thread STINNER Victor

STINNER Victor victor.stin...@haypocalc.com added the comment:

  I also found out that, according to RFC 3629, surrogates
  are considered invalid and they can't be encoded/decoded,
  but the UTF-8 codec actually does it.
 
  Python2 does, but Python3 raises an error.
  (...)
 
 I wonder how that change got into the 3.x branch - I would certainly
 not have approved it for the reasons given further up on this ticket.
 
 I think we should revert that change for Python 3.2.

See r72208 and issue #3672.

pitrou wrote: "We could fix it for 3.1, and perhaps leave 2.7 unchanged if some 
people rely on this (for whatever reason)."

--
title: str.decode('utf8',   'replace') -- conformance with Unicode 5.2.0 - 
str.decode('utf8', 'replace') -- conformance with Unicode 5.2.0

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8271
___



[issue8332] regrtest single TestClass/test_method

2010-04-07 Thread anatoly techtonik

New submission from anatoly techtonik techto...@gmail.com:

It would be convenient for debugging to be able to execute a single test_method 
or TestClass.  Running all the tests in a file can take a long time.

--
components: Tests
messages: 102524
nosy: techtonik
severity: normal
status: open
title: regrtest single TestClass/test_method
type: feature request
versions: Python 2.7

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8332
___



[issue8332] regrtest single TestClass/test_method

2010-04-07 Thread anatoly techtonik

anatoly techtonik techto...@gmail.com added the comment:

regrtest [options] test_file.TestClass
regrtest [options] test_file.test_method
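For comparison, plain unittest can already run a single method once the suite is built by hand, which is the granularity the proposed regrtest options would expose (a sketch; `MathTest` is an illustrative stand-in for a real test file):

```python
import unittest

class MathTest(unittest.TestCase):
    def test_add(self):
        self.assertEqual(1 + 1, 2)
    def test_sub(self):
        self.assertEqual(2 - 1, 1)

# Run only MathTest.test_add, not the whole class or file:
suite = unittest.TestSuite([MathTest('test_add')])
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun)  # 1
```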

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8332
___



[issue4026] fcntl extension fails to build on AIX 6.1

2010-04-07 Thread Michael Haubenwallner

Michael Haubenwallner michael.haubenwall...@salomon.at added the comment:

This very same problem happens (with Python 2.6.2) on AIX 5.3 now too, after 
upgrading to:
$ oslevel -s
5300-08-09-1013

Unlike before (comparing with old build logs), this AIX 5.3 now provides flock() 
in sys/file.h and libbsd.a[shr.o], like AIX 6.1.

Interestingly, /usr/lib/libbsd.a contains 32-bit shared objects only, so -lbsd 
does not help in 64-bit mode (I don't know if Python actually supports 64-bit 
on AIX). I don't have an AIX 6.1 machine to check this.

Because of this, upgrading the check for flock() from a compile-check to a 
link-check (eventually trying -lbsd a second time) might help?

--
nosy: +haubi

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue4026
___



[issue8331] a documentation grammar fix in logging module

2010-04-07 Thread Vinay Sajip

Vinay Sajip vinay_sa...@yahoo.co.uk added the comment:

Fix checked into trunk (r79888).

--
resolution: accepted - fixed
status: open - closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8331
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue8314] test_ctypes fails in test_ulonglong on sparc buildbots

2010-04-07 Thread Mark Dickinson

Mark Dickinson dicki...@gmail.com added the comment:

It's surprising that test_ulonglong fails, while test_longlong passes:  can the 
Linux Sparc ABI really be treating these two types differently?

Maybe more information could be gained by supplying a more interesting test 
value than 42---some 8-byte value with all bytes different, for example.
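For instance, round-tripping a pattern with all eight bytes distinct through a ctypes callback crosses the libffi marshalling layer in both directions and would make byte-order or truncation bugs visible (a sketch, not the actual failing buildbot test):

```python
import ctypes

PATTERN = 0x0102030405060708  # every byte different, unlike 42

# An identity callback: the value crosses the Python/libffi boundary
# twice (argument marshalling in, return-value marshalling out).
IdentityFunc = ctypes.CFUNCTYPE(ctypes.c_ulonglong, ctypes.c_ulonglong)
identity = IdentityFunc(lambda x: x)

assert ctypes.c_ulonglong(PATTERN).value == PATTERN
assert identity(PATTERN) == PATTERN
```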

In any case, this seems likely to be a libffi bug somewhere;  maybe we could 
bring it up on the libffi mailing list.  If we can translate the failing Python 
code into failing C code first that would probably increase the chances of 
getting a good answer.  Without access to Sparc hardware, I don't see much 
other way of making progress here.

--
nosy: +mark.dickinson

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8314
___


