[ANN] guiqwt v2.1.2

2011-04-26 Thread Pierre.RAYBAUT
Hi all,

I am pleased to announce that `guiqwt` v2.1.2 has been released.
This release mainly fixes a critical bug when running the GUI-based test 
launcher (apart from the test launcher itself, this bug had absolutely no 
impact on the library -- it was however considered critical as many regular 
users were not able to run the test launcher).

Main changes since `guiqwt` v2.1.0:
  * added support for NaNs in image plot items (default behaviour: NaN pixels 
are transparent)
  * added oblique averaged cross section feature
  * bugfixes

This version of `guiqwt` includes a demo software, Sift (for Signal and Image 
Filtering Tool), based on `guidata` and `guiqwt`:
http://packages.python.org/guiqwt/sift.html
Windows users may even download the portable version of Sift 0.23 to test it 
without having to install anything:
http://code.google.com/p/guiqwt/downloads/detail?name=sift023_portable.zip

The `guiqwt` documentation with examples, API reference, etc. is available here:
http://packages.python.org/guiqwt/

Based on PyQwt (plotting widgets for PyQt4 graphical user interfaces) and on 
the scientific modules NumPy and SciPy, guiqwt is a Python library providing 
efficient 2D data-plotting features (curve/image visualization and related 
tools) for interactive computing and signal/image processing application 
development.

When compared to the excellent module `matplotlib`, the main advantage of 
`guiqwt` is performance: see 
http://packages.python.org/guiqwt/overview.html#performances.

But `guiqwt` is more than a plotting library; it also provides:

  * Helper functions for data processing: see the example 
http://packages.python.org/guiqwt/examples.html#curve-fitting

  * Framework for signal/image processing application development: see 
http://packages.python.org/guiqwt/examples.html

  * And many other features like making executable Windows programs easily 
(py2exe helpers): see http://packages.python.org/guiqwt/disthelpers.html


guiqwt plotting features are the following:

guiqwt.pyplot: equivalent to matplotlib's pyplot module (pylab) -- a short 
usage sketch follows the feature list below

supported plot items:

* curves, error bar curves and 1-D histograms
* images (RGB images are not supported), images with non-linear x/y 
scales, images with specified pixel size (e.g. loaded from DICOM files), 2-D 
histograms, pseudo-color images (pcolor)
* labels, curve plot legends
* shapes: polygon, polylines, rectangle, circle, ellipse and segment
* annotated shapes (shapes with labels showing position and 
dimensions): rectangle with center position and size, circle with center 
position and diameter, ellipse with center position and diameters (these items 
are very useful to measure things directly on displayed images)

curves, images and shapes:

* multiple object selection for moving objects or editing their 
properties through automatically generated dialog boxes (guidata)
* item list panel: move objects from foreground to background, 
show/hide objects, remove objects, ...
* customizable aspect ratio
* a lot of ready-to-use tools: plot canvas export to image file, image 
snapshot, image rectangular filter, etc.

curves:

* interval selection tools with labels showing results of computing on 
selected area
* curve fitting tool with automatic fit, manual fit with sliders, ...

images:

* contrast adjustment panel: select the LUT by moving a range selection 
object on the image levels histogram, eliminate outliers, ...
* X-axis and Y-axis cross-sections: support for multiple images, 
average cross-section tool on a rectangular area, ...
* apply any affine transform to displayed images in real-time 
(rotation, magnification, translation, horizontal/vertical flip, ...)

application development helpers:

* ready-to-use curve and image plot widgets and dialog boxes
* load/save graphical objects (curves, images, shapes)
* a lot of test scripts which demonstrate guiqwt features
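As a rough illustration of the guiqwt.pyplot entry point mentioned above, a 
pylab-style session might look like the following (a sketch only: it assumes 
guiqwt.pyplot exposes the usual plot()/show() names, mirroring matplotlib's 
pyplot as described, which is not verified here):

import numpy as np
from guiqwt import pyplot as plt  # assumed import alias, mirroring matplotlib

x = np.linspace(0, 10, 500)
plt.plot(x, np.sin(x))  # curve plot, pylab-style
plt.show()              # open the plot window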


guiqwt has been successfully tested on GNU/Linux and Windows platforms.

Python package index page:
http://pypi.python.org/pypi/guiqwt/

Documentation, screenshots:
http://packages.python.org/guiqwt/

Downloads (source + Python(x,y) plugin):
http://guiqwt.googlecode.com

Cheers,
Pierre

---

Dr. Pierre Raybaut
CEA - Commissariat à l'Energie Atomique et aux Energies Alternatives
-- 
http://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations/


[ANN] guidata v1.3.1

2011-04-26 Thread Pierre.RAYBAUT
Hi all,

I am pleased to announce that `guidata` v1.3.1 has been released.
Note that the project has recently been moved to GoogleCode:
http://guidata.googlecode.com

Main changes since `guidata` v1.3.0:
* fixed gettext_helpers module (was not working on Linux)
* bugfixes

The `guidata` documentation with examples, API reference, etc. is available 
here:
http://packages.python.org/guidata/

Based on the Qt Python binding module PyQt4, guidata is a Python library 
generating graphical user interfaces for easy dataset editing and display. It 
also provides helpers and application development tools for PyQt4.

guidata also provides the following features:

* guidata.qthelpers: PyQt4 helpers
* guidata.disthelpers: py2exe helpers
* guidata.userconfig: .ini configuration management helpers (based on 
Python standard module ConfigParser)
* guidata.configtools: library/application data management
* guidata.gettext_helpers: translation helpers (based on the GNU tool 
gettext)
* guidata.guitest: automatic GUI-based test launcher
* guidata.utils: miscellaneous utilities


guidata has been successfully tested on GNU/Linux and Windows platforms.

Python package index page:
http://pypi.python.org/pypi/guidata/

Documentation, screenshots:
http://packages.python.org/guidata/

Downloads (source + Python(x,y) plugin):
http://guidata.googlecode.com

Cheers,
Pierre

---

Dr. Pierre Raybaut
CEA - Commissariat à l'Energie Atomique et aux Energies Alternatives
-- 
http://mail.python.org/mailman/listinfo/python-announce-list

Support the Python Software Foundation:
http://www.python.org/psf/donations/


Case study: debugging failed assertRaises bug

2011-04-26 Thread Steven D'Aprano
I've just spent two hours banging my head against what I *thought* 
(wrongly!) was a spooky action-at-a-distance bug in unittest, so I 
thought I'd share it with anyone reading.

I have a unit test which I inherit from a mixin class that looks 
something like this:

def testBadArgType(self):
    # Test failures with bad argument types.
    for d in (None, 23, object(), "spam"):
        self.assertRaises(TypeError, self.func, d)

Seems pretty simple, right? But it was mysteriously failing with one of 
my functions. The first problem is that assertRaises doesn't take a 
custom error message when it fails, so all I was seeing was this:

Traceback (most recent call last):
  File "src/test_stats_extras.py", line 1122, in testBadArgType
    self.assertRaises(TypeError, self.func, d)
AssertionError: TypeError not raised by <lambda>

Don't be put off by the lambda. That's just a thin wrapper around the 
function I actually am testing: the mixin test class assumes the test 
function takes a single argument, but my function actually requires two. 
So I use a lambda as a wrapper to provide the second argument.

Since I'm pretty sure that my function really does fail with TypeError, I 
add a line to fail *hard*. Immediately before the assertRaises, I add an 
extra call like this: 

_ = self.func(d)
 
Sure enough, it fails on the direct call to self.func, but not on the 
assertRaises. Now I'm getting seriously weirded out: if I call the test 
function (or its wrapper) directly, I get the expected TypeError, but if 
I call it via assertRaises, it doesn't raise.

So I bang away on the test for a while, replace the assertRaises with an 
assertEqual, and sure enough the assertEqual fails with an exception. At 
this point I'm *convinced* it's some weird bug in unittest, and I'm 
trying to come up with a short example so I can raise a bug report. But I 
can't replicate the bug.

So I ended up putting this in my test code:

# This next line now PASSES (i.e. exception is raised),
# but ONLY if it is followed by another (failing) test!
self.assertRaises(Exception, self.func, d)
self.assertEqual(self.func(d), 100)

and was nearly ready to throw in the towel and just flag the test as 
skipped and deal with it later, when I thought of the good old-fashioned 
"add some prints inside your code" approach. That's a bit trickier for 
unittest, because it prints its output to stderr rather than stdout, but 
in Python 3 that's easy to deal with. So I ended up with:


func = self.func
print("expecting TypeError ", end='', file=sys.stderr)
try:
    _ = func(d)
except TypeError:
    # I expect TypeError.
    print("and got it ", end='', file=sys.stderr)
else:
    print("but got %r " % _, end='', file=sys.stderr)
# This FAILS (i.e. no exception is raised).
self.assertRaises(TypeError, func, d)


and unit test prints:

. expecting TypeError and got it expecting 
TypeError and got it expecting TypeError and got it expecting TypeError 
but got 'm' F..
======================================================================
FAIL: testBadArgType (__main__.QuantileBehaviourTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "src/test_stats_extras.py", line 1122, in testBadArgType
    self.assertRaises(TypeError, func, d)
AssertionError: TypeError not raised by <lambda>


/facepalm/


That explains everything, even the spooky action-at-a-distance stuff. It 
is a bug in my own code, not unittest, not the mixin, not the lambda, but 
my function should be failing when called with a string but doesn't.

So, lessons learned...

(1) assertRaises REALLY needs a better error message. If not a custom 
message, at least it should show the result it got instead of an 
exception.

(2) Never underestimate the power of print. Sticking a print statement in 
the middle of problematic code is a powerful debugging technique.

(3) If you think you've found a bug in Python, it almost certainly is a 
bug in your own code. I knew that already, but now I have the scars to 
prove it.

(4) Unit tests should test one thing, not four. I thought my loop over 
four different bad input arguments was one conceptual test, and that 
ended up biting me. If the failing test was testBadArgTypeStr I would 
have realised what was going on much faster (see the sketch after this list). 

(5) There's only so many tiny little tests you can write before your head 
explodes, so I'm going to keep putting "for arg in spam" loops in my 
tests. But now I know to unroll those suckers into individual tests on 
the first sign of trouble.
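
To make lesson (4) concrete, a minimal sketch of the unrolled version (the 
class name is illustrative only; self.func comes from the concrete test case, 
as in the mixin above):

class BadArgTypeMixin:
    # One test per bad argument type, so a failure names the culprit directly.
    def testBadArgTypeNone(self):
        self.assertRaises(TypeError, self.func, None)
    def testBadArgTypeInt(self):
        self.assertRaises(TypeError, self.func, 23)
    def testBadArgTypeObject(self):
        self.assertRaises(TypeError, self.func, object())
    def testBadArgTypeStr(self):
        self.assertRaises(TypeError, self.func, "spam")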




-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


oauth 2 libraries.

2011-04-26 Thread David Vicente
Hi,

 

I'm looking for a library to use OAuth 2. I have found several libraries that
are very similar to one another. I'd like to know whether you use one of these
or another library, and what you would advise.

-  https://github.com/simplegeo/python-oauth2: this looks like the base
   implementation

-  The other two are very similar to the simplegeo one, with a Client
   class added (Client2): https://github.com/dgouldin/python-oauth2,
   https://github.com/zbowling/python-oauth2

 

Thanks for your help.

Regards.

 

 

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: sockets: bind to external interface

2011-04-26 Thread Jean-Michel Pichavant

Hans Georg Schaathun wrote:

Is there a simple way to find the external interface and bind a
socket to it, when the hostname returned by socket.gethostname()
maps to localhost?

What seems to be the standard ubuntu configuration lists the local
hostname with 127.0.0.1 in /etc/hosts.  (I checked this on two ubuntu
boxen, on only one of which I am root.)  Thus, the standard solution
of binding to whatever socket.gethostname() returns does not work.

Has anyone found a simple solution that can be administered without
root privileges?  I mean simpler than passing the ip address 
manually :-)


TIA
  

Hi,

Use the address 0.0.0.0
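
For example, a minimal sketch (the port number is an arbitrary example):

import socket

# Binding to the wildcard address listens on every interface,
# including the external one, without root privileges or knowing its IP.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('0.0.0.0', 8000))
s.listen(5)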

JM
--
http://mail.python.org/mailman/listinfo/python-list


Re: Case study: debugging failed assertRaises bug

2011-04-26 Thread Duncan Booth
Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote:

 (1) assertRaises REALLY needs a better error message. If not a custom 
 message, at least it should show the result it got instead of an 
 exception.
 
If a different exception was thrown then you get an error instead of a 
failure and you are shown the Exception that was thrown, so I think you 
only have an issue if no exception was thrown at all.

If so, here's an alternative way to write your test which might be easier 
to debug:

def testBadArgType(self):
    # Test failures with bad argument types.
    for d in (None, 23, object(), "spam"):
        with self.assertRaises(TypeError) as cm:
            self.func(d)
            print(d, "didn't throw the exception")

It is worth remembering that print output doesn't appear unless the test 
fails, so you can leave the tracing statement in the test. Also, 
assertRaises has a weird dual life as a context manager: I never knew that 
before today.


-- 
Duncan Booth http://kupuguy.blogspot.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Py_INCREF() incomprehension

2011-04-26 Thread Ervin Hegedüs
Hello Python users,

I'm working on a Python module in C - that's a cryptographic module,
which uses a 3rd-party lib from a provider (a bank).
This module will encrypt and decrypt the messages for the provider web service.

Here is a part of source:

static PyObject*
mycrypt_encrypt(PyObject *self, PyObject *args)
{
int cRes = 0;
int OutLen = 0;

char *  url;
char *  path;

if (!PyArg_ParseTuple(args, "ss", &url, &path)) {
return NULL;
}

OutLen = strlen(url)*4;
outdata=calloc(OutLen, sizeof(char));

if (!outdata) {
handle_err(UER_NOMEM);
return NULL;
}
cRes = ekiEncodeUrl (url, strlen(url)+1, outdata, OutLen, 1, path);

if (cRes == 0) {
return Py_BuildValue("s", outdata);
} else {
handle_err(cRes);
return NULL;
}

return Py_None;
}

where ekiEncodeUrl is in a 3rd-party library.
I should call this function from Python like this:

import mycrypt

message = "PID=IEB0001&MSGT=10&TRID=00012345678"
crypted = mycrypt(mymessage, "/path/to/key");

Everything works fine, but sorry for the recurrent question: where
should I use the Py_INCREF()/Py_DECREF() in code above?


Thank you,

cheers:

a.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Py_INCREF() incomprehension

2011-04-26 Thread Stefan Behnel

Ervin Hegedüs, 26.04.2011 11:48:

Hello Python users,

I'm working on a Python module in C - that's a cryptographic module,
which uses a 3rd-party lib from a provider (a bank).
This module will encrypt and decrypt the messages for the provider web service.

Here is a part of source:

static PyObject*
mycrypt_encrypt(PyObject *self, PyObject *args)
{
 int cRes = 0;
 int OutLen = 0;

 char *  url;
 char *  path;

 if (!PyArg_ParseTuple(args, "ss", &url, &path)) {


Use the s# format instead to get the length as well.



 return NULL;
 }

 OutLen = strlen(url)*4;
 outdata=calloc(OutLen, sizeof(char));

 if (!outdata) {
 handle_err(UER_NOMEM);


I assume this raises MemoryError?



 return NULL;
 }
 cRes = ekiEncodeUrl (url, strlen(url)+1, outdata,OutLen, 1, path);

 if (cRes == 0) {
 return Py_BuildValue("s", outdata);


You are leaking the memory allocated for outdata here.

And, again, use the s# format.



 } else {
 handle_err(cRes);
 return NULL;
 }


I assume this raises an appropriate exception?



 return Py_None;


This is unreachable code.



where ekiEncodeUrl is in a 3rd-party library.
I should call this function from Python like this:

import mycrypt

message = "PID=IEB0001&MSGT=10&TRID=00012345678"
crypted = mycrypt(mymessage, "/path/to/key");

Everything works fine, but sorry for the recurrent question: where
should I use the Py_INCREF()/Py_DECREF() in code above?


These two functions are not your problem here.

In any case, I recommend using Cython instead of plain C. It keeps you from 
getting distracted too much by reference counting and other CPython C-API 
details, so that you can think more about the functionality you want to 
implement.


Stefan

--
http://mail.python.org/mailman/listinfo/python-list


Re: Py_INCREF() incomprehension

2011-04-26 Thread Hegedüs Ervin
Hello,

thanks for the reply,

 static PyObject*
 mycrypt_encrypt(PyObject *self, PyObject *args)
 {
  int cRes = 0;
  int OutLen = 0;
 
  char *  url;
  char *  path;
 
  if (!PyArg_ParseTuple(args, "ss", &url, &path)) {
 
 Use the s# format instead to get the length as well.

oh, thanks, I'll check it out,
 
  return NULL;
  }
 
  OutLen = strlen(url)*4;
  outdata=calloc(OutLen, sizeof(char));
 
  if (!outdata) {
  handle_err(UER_NOMEM);
 
 I assume this raises MemoryError?

yes,

another question should be: do I need to use
Py_INCREF()/Py_DECREF() at exception objects?

There are several error types, which has defined in 3rd-party
header file, based on these I've created my exceptions:

...
static PyObject *cibcrypt_error_badparm;
static PyObject *cibcrypt_error_nomem;
static PyObject *cibcrypt_error_badsize;
...

void handle_err(int errcode) {
switch(errcode) {
...
case -4:    PyErr_SetString(cibcrypt_error_badparm, "Bad parameter");
            break;

case -5:    PyErr_SetString(cibcrypt_error_nomem, "Not enough memory");
            break;

case -6:    PyErr_SetString(cibcrypt_error_badsize, "Invalid buffer size");
            break;
...
}


PyMODINIT_FUNC
initcibcrypt(void)
{
PyObject *o;
o = Py_InitModule3("mycrypt", mycrypt_methods, mycrypt_doc);
...
cibcrypt_error_badparm = PyErr_NewException("cibcrypt.error_badparm", NULL, NULL);
cibcrypt_error_nomem = PyErr_NewException("cibcrypt.error_nomem", NULL, NULL);
cibcrypt_error_badsize = PyErr_NewException("cibcrypt.error_badsize", NULL, NULL);
...

PyModule_AddObject(o, "error", cibcrypt_error_badparm);
PyModule_AddObject(o, "error", cibcrypt_error_nomem);
PyModule_AddObject(o, "error", cibcrypt_error_badsize);
...
}

is that correct?

 
  return NULL;
  }
  cRes = ekiEncodeUrl (url, strlen(url)+1, outdata,OutLen, 1, path);
 
  if (cRes == 0) {
  return Py_BuildValue("s", outdata);
 
 You are leaking the memory allocated for outdata here.

ok, and how can I handle that leak? When should I use the
Py_DECREF, or any other function (eg: free()?))?
 
 And, again, use the s# format.
 
 
  } else {
  handle_err(cRes);
  return NULL;
  }
 
 I assume this raises an appropriate exception?

yes, see the handle_err() function.

 
  return Py_None;
 
 This is unreachable code.

yes, that's just an old typo, I think I purged these... thanks.

 where ekiEncodeUrl is in a 3rd-party library.
 I should call this function from Python like this:
 
 import mycrypt
 
 message = "PID=IEB0001&MSGT=10&TRID=00012345678"
 crypted = mycrypt(mymessage, "/path/to/key");
 
 Everything works fine, but sorry for the recurrent question: where
 should I use the Py_INCREF()/Py_DECREF() in code above?
 
 These two functions are not your problem here.

which two functions did you mean?
If you mean encrypt/decrypt (above), why did you write that there is a
memleak?

 In any case, I recommend using Cython instead of plain C. It keeps
 you from getting distracted too much by reference counting and other
 CPython C-API details, so that you can think more about the
 functionality you want to implement.

ok, I'll have a look at Cython, but - just for my passion :) - what
is the solution to handle the leak above?



Thanks again:


a.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Py_INCREF() incomprehension

2011-04-26 Thread Hegedüs Ervin
hello,

sorry for the typo: there are many cibcrypt references, because that is
the real name of my module - I just replaced it with mycrypt in some
places - and in some places I forgot... :(




 ...
 static PyObject *cibcrypt_error_badparm;
 ...
 
 void handle_err(int errcode) {
 switch(errcode) {
 ...
 case -4:    PyErr_SetString(cibcrypt_error_badparm, "Bad parameter");
 break;


bye:

a. 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Vectors

2011-04-26 Thread Algis Kabaila
On Monday 25 April 2011 20:49:34 Jonathan Hartley wrote:
 On Apr 20, 2:43 pm, Andreas Tawn andreas.t...@ubisoft.com 
wrote:
   Algis Kabaila akaba...@pcug.org.au writes:
Are there any modules for vector algebra (three
dimensional vectors, vector addition, subtraction,
multiplication [scalar and vector]. Could you give me
a reference to such module?
   
   NumPy has array (and matrix) types with support for these
   basic operations you mention. See the tutorial
   at http://numpy.scipy.org/
  
  You might also want to
  consider http://code.google.com/p/pyeuclid/
  
  Cheers,
  
  Drea
 
 Stealing this from Casey Duncan's recent post to the Grease
 users list:
 
 
 - (ab)use complex numbers for 2D vectors (only). Very fast
 arithmetic and built-in to Python. Downside is lack of
 abstraction.
 
 - Use pyeuclid (pure python) if ultimate speed isn't an
 issue, or if compiled extensions are. It supports 3D and has
 a nice api
 
 - vectypes is a more recent project from the same author as
 pyeuclid. It offers a more consistent 'GLSL' like interface,
 including swizzling, and internally seems to have more
 maintainable code because it generates various sizes of
 vector and matrix from a single template. This is done
 without performance penalty because the generation is done
 at design time, not runtime.
 
 - Use pyeigen if you want fast vectors, and don't mind
 compiling some C/C++. I don't know how the Python api looks
 though
 
 - Use numpy if you want fast batch operations
Jonathan,

Thank you for a nice and extensive list of references. To 
clarify my position - surprisingly, speed is not an issue - I've 
programmed a matrix in pure python (3, but mainly with python 2 
syntax) and found that inversion was quite fast enough for my 
requirements.  

Good vector algebra is necessary for 3D frame analysis, so a 
vector package is indicated.  numpy is great, but it is a tool 
like a sledgehammer to drive a nail...
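
For the record, the basic 3D vector operations discussed above look like this 
with NumPy (array values are arbitrary examples):

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

print a + b            # vector addition
print a - b            # vector subtraction
print 2.5 * a          # scalar multiplication
print np.dot(a, b)     # scalar (dot) product
print np.cross(a, b)   # vector (cross) product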

OldAl.
-- 
Algis
http://akabaila.pcug.org.au/StructuralAnalysis.pdf
-- 
http://mail.python.org/mailman/listinfo/python-list


Restarting a daemon

2011-04-26 Thread Jeffrey Barish
Not exactly a Python question, but I thought I would start here.

I have a server that runs as a daemon.  I can restart the server manually 
with the command 

myserver restart

This command starts a new myserver which first looks up the pid for the one 
that is running and sends it a terminate signal.  The new one then 
daemonizes itself.

I want the server to be able to restart itself.  Will it work to have 
myserver issue myserver restart using os.system?  I fear that the new 
myserver, which will be running in a subshell, will terminate the subshell 
along with the old myserver when it sends the terminate signal to the old 
myserver.  If so, what is the correct way to restart the daemon?  Will it 
work to run the restart command in a subprocess rather than a subshell or 
will a subprocess also terminate when its parent terminates?
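
For illustration, one possible sketch of the detached-restart idea being asked 
about (the myserver path is hypothetical; this is a sketch of one approach, 
not a confirmed answer):

import os
import subprocess

# Launch "myserver restart" in its own session so it is not killed
# when the old daemon (the current process) terminates.
subprocess.Popen(["/usr/local/bin/myserver", "restart"],
                 preexec_fn=os.setsid,
                 close_fds=True)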
-- 
Jeffrey Barish

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: De-tupleizing a list

2011-04-26 Thread Gnarlodious
On Apr 25, 10:59 pm, Steven D'Aprano wrote:

 In Python 3, map becomes lazy and returns an iterator instead of a list,
 so you have to wrap it in a call to list().

Ah, thanks for that tip. Also works for outputting a tuple:
list_of_tuples=[('0A',), ('1B',), ('2C',), ('3D',)]

#WRONG:
(x for (x,) in list_of_tuples)
<generator object <genexpr> at 0x1081ee0>

#RIGHT:
tuple(x for (x,) in list_of_tuples)
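
For a plain list instead of a tuple, the same expression works with list():

list(x for (x,) in list_of_tuples)   # ['0A', '1B', '2C', '3D']

(or the equivalent list comprehension [x for (x,) in list_of_tuples]).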

Thanks everyone for the abundant help.

-- Gnarlie
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Py_INCREF() incomprehension

2011-04-26 Thread Thomas Rachel

Am 26.04.2011 14:21, schrieb Thomas Rachel:


Especially look at the concepts called borrowed reference vs. owned
reference.


http://docs.python.org/extending/extending.html#reference-counting-in-python
will be quite helpful.


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Py_INCREF() incomprehension

2011-04-26 Thread Thomas Rachel

Am 26.04.2011 11:48, schrieb Ervin Hegedüs:


Everything works fine, but sorry for the recurrent question: where
should I use the Py_INCREF()/Py_DECREF() in code above?


That depends on the functions which are called. It should be given in 
the API description. The same counts for the incoming parameters (which 
are borrowed AFAIR - but better have a look).


The most critical parts are indeed

* the input parameters

and

* Py_BuildValue()

. Maybe you could as well have a look at some example code.

Especially look at the concepts called borrowed reference vs. owned 
reference.


And, for your other question:

 if (cRes == 0) {
   return Py_BuildValue("s", outdata);
 }

You ask how to stop leaking memory? Well, simply by not leaking it :-)

Just free the memory area:

if (cRes == 0) {
    PyObject* ret = Py_BuildValue("s", outdata);
    free(outdata);
    return ret;
}

BTW: Is there any reason for using calloc()? malloc() would probably be 
faster...



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Restarting a daemon

2011-04-26 Thread Albert Hopkins
On Tue, 2011-04-26 at 06:13 -0600, Jeffrey Barish wrote:
 Not exactly a Python question, but I thought I would start here.
 
 I have a server that runs as a daemon.  I can restart the server manually 
 with the command 
 
 myserver restart
 
 This command starts a new myserver which first looks up the pid for the one 
 that is running and sends it a terminate signal.  The new one then 
 daemonizes itself.
 
 I want the server to be able to restart itself.  Will it work to have 
 myserver issue myserver restart using os.system?  I fear that the new 
 myserver, which will be running in a subshell, will terminate the subshell 
 along with the old myserver when it sends the terminate signal to the old 
 myserver.  If so, what is the correct way to restart the daemon?  Will it 
 work to run the restart command in a subprocess rather than a subshell or 
 will a subprocess also terminate when its parent terminates?

You should look into tools like daemon-tools, or similar.  It already
solves this (and many other) problems.

-a

-- 
http://mail.python.org/mailman/listinfo/python-list


Active Directory user creation with python-ldap

2011-04-26 Thread Nello
I need to create an Active Directory user using python-ldap library.
So, I authenticate with an admin account and I use add_s to create
the user.
Anyway, by default users are disabled on creation, and I can not set
userAccountControl to swith off the flag ACCOUNTDISABLE, i.e. setting
userAccountControl with 512 (NORMAL_ACCOUNT) value. See page
http://support.microsoft.com/kb/305144 for a complete list of
userAccount flags.

If I try, the server respond:
ldap.UNWILLING_TO_PERFORM: {'info': '052D: SvcErr: DSID-031A0FC0,
problem 5003 (WILL_NOT_PERFORM), data 0\n', 'desc': 'Server is
unwilling to perform'}

Same thing if - as someone suggests - I create the user without a
password and try to set userAccountControl later.

This is the code I use to create the account.
Any suggestions?



import ldap
import ldap.modlist as modlist

def addUser(username, firstname, surname, email, password):
    """Create a new user in Active Directory"""
    ldap.set_option(ldap.OPT_REFERRALS, 0)

    # Open a connection
    l = ldap.initialize(AD_LDAP_URL)

    # Bind/authenticate with a user with appropriate rights to add objects
    l.simple_bind_s(ADMIN_USER, ADMIN_PASSWORD)

    # The dn of our new entry/object
    dn = "cn=%s,%s" % (username, AD_SEARCH_DN)

    displayName = '%s %s [%s]' % (surname, firstname, username)

    # A dict to help build the body of the object
    attrs = {}
    attrs['objectclass'] = ['top', 'person', 'organizationalPerson', 'user']
    attrs['cn'] = str(username)
    attrs['sAMAccountname'] = str(username)
    attrs['userPassword'] = str(password)
    attrs['givenName'] = str(firstname)
    attrs['sn'] = str(surname)
    attrs['displayName'] = str(displayName)
    attrs['userPrincipalName'] = "%s...@mail.domain.it" % username

    # Some flags for userAccountControl property
    SCRIPT = 1
    ACCOUNTDISABLE = 2
    HOMEDIR_REQUIRED = 8
    PASSWD_NOTREQD = 32
    NORMAL_ACCOUNT = 512
    DONT_EXPIRE_PASSWORD = 65536
    TRUSTED_FOR_DELEGATION = 524288
    PASSWORD_EXPIRED = 8388608

    # this works!
    attrs['userAccountControl'] = str(NORMAL_ACCOUNT + ACCOUNTDISABLE)

    # this does not work :-(
    attrs['userAccountControl'] = str(NORMAL_ACCOUNT)

    # Convert our dict to nice syntax for the add-function using modlist-module
    ldif = modlist.addModlist(attrs)

    l.add_s(dn, ldif)

-- 
http://mail.python.org/mailman/listinfo/python-list


Terrible FPU performance

2011-04-26 Thread Mihai Badoiu
Hi,

I have terrible performance for multiplication when one number gets very
close to zero.  I'm using cython by writing the following code:

cdef int i
cdef double x = 1.0
for 0 <= i < 1000:
    x *= 0.8
    #x += 0.01
print x

This code runs much much slower (20+ times slower) with the line x += 0.01
uncommented.  I looked at the disassembled code and it looks correct.
 Moreover, it's just a few lines, and by writing the same code in C (without python on
top), I get the same code, but it's much faster.  I've also tried using SSE,
but I get exactly the same behavior.  The best candidate that I see so far
is that Python sets up the FPU in a different state than C.

Any advice on how to solve this performance problem?

thanks!
-- 
http://mail.python.org/mailman/listinfo/python-list


Cannot get past this string related issue

2011-04-26 Thread Oltmans
Greetings, I hope you're doing well. I'm stuck in a strange issue,
most likely due to my own ignorance. I'm reading a config file using
ConfigParser module and passing database related info to _mssql.
Following doesn't work


config = ConfigParser.ConfigParser()
config.read('configs.txt')
server_info = config.get("DB_INFO", "server")
db = config.get("DB_INFO", "database")
username = config.get("DB_INFO", "user")
pwd = config.get("DB_INFO", "password")
print server_info, db, username, pwd
conn = _mssql.connect(server=server_info, database=db, user=username, password=pwd)

but following does work

conn = _mssql.connect(server='server', database='database', user='user', password='password')


Config file looks like following

[DB_INFO]
server = "server"
database = "database"
user = "user"
password = "password"


But I cannot figure out why? Any ideas or help will be highly
appreciated. Thanks!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Cannot get past this string related issue

2011-04-26 Thread Tim Golden

On 26/04/2011 14:48, Oltmans wrote:

Greetings, I hope you're doing well. I'm stuck in a strange issue,
most likely due to my own ignorance. I'm reading a config file using
ConfigParser module and passing database related info to _mssql.


[ ... ]


Config file looks like following

[DB_INFO]
server = "server"
database = "database"
user = "user"
password = "password"



A config file isn't a Python file: you don't need (and
don't want) double-quotes around those values.
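
A quick sketch of the effect (Python 2, with StringIO standing in for the
config file):

import ConfigParser, StringIO

cfg = ConfigParser.ConfigParser()
cfg.readfp(StringIO.StringIO('[DB_INFO]\nserver = "server"\n'))
# The double-quotes are kept as part of the value, which is not what you want:
print repr(cfg.get("DB_INFO", "server"))   # -> '"server"'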

TJG
--
http://mail.python.org/mailman/listinfo/python-list


Re: Py_INCREF() incomprehension

2011-04-26 Thread Hegedüs Ervin
Hello,

thanks for the answer,

 Everything works fine, but sorry for the recurrent question: where
 should I use the Py_INCREF()/Py_DECREF() in code above?
 
 That depends on the functions which are called. It should be given
 in the API description. The same counts for the incoming parameters
 (which are borrowed AFAIR - but better have a look).

I've read API doc (which you've included in another mail), but
that's not clear for me. :(
 
 The most critical parts are indeed
 
 * the input parameters
 
 and
 
 * Py_BuildValue()
 
 . Maybe you could as well have a look at some example code.
 
 Especially look at the concepts called borrowed reference vs.
 owned reference.
 
 And, for your other question:
 
  if (cRes == 0) {
  return Py_BuildValue("s", outdata);
  }
 
 You ask how to stop leaking memory? Well, simply by not leaking it :-)

great! :)
 
 Just free the memory area:
 
 if (cRes == 0) {
     PyObject* ret = Py_BuildValue("s", outdata);
     free(outdata);
     return ret;
 }

so, it means when I implicitly allocate a new object (with
Py_BuildValue()), Python's GC will free that pointer when it
isn't required anymore?
 
 BTW: Is there any reason for using calloc()? malloc() would probably
 be faster...

may be, I didn't measure it ever... but calloc() gives clear
space... :)


thanks:

a.
-- 
http://mail.python.org/mailman/listinfo/python-list


Development tools and practices for Pythonistas

2011-04-26 Thread snorble
I'm not a Pythonista, but I aspire to be.

My current tools:

Python, gvim, OS file system

My current practices:

When I write a Python app, I have several unorganized scripts in a
directory (usually with several named test1.py, test2.py, etc., from
random ideas I have tested), and maybe a todo.txt file. Then I hack
away, adding features in a semi-random order. Then I get busy with
other things. Maybe one week I spend 20 hours on development. The next
week, no time on development. A few weeks later when I have some time,
I'm excited to get back to making progress, only to find that I have
to spend 30-60 minutes figuring out where I left off. The code is
usually out of sync with todo.txt. I see people who release new
versions and bug fixes, so I sometimes will create a new directory and
continue working from that copy, because it seems like the thing to
do. But if I ever made something worth releasing, and got a request
like, I have problems with the 2.0 version. Can you send me the old
1.1 version? I'd be like, uhhh... let me hunt through my files by
hand and get back to you in a month. I'm thinking I can do a lot
better than this.

I am aware of tools like version control systems, bug trackers, and
things like these, but I'm not really sure if I need them, or how to
use them properly. I think having some organization to all of this
would help me to make more consistent progress, and spend less time
bringing myself up to speed after some time off.

I really like the idea of having a list of features, and tackling
those features one at a time. I read about people who do this, and
each new features gets a new minor version number. It sounds very
organized and clean. But I'm not really sure of the best way to
achieve this. Mainly I think I just need some recommendations to help
create a good mental map of what needs to happen, and mapping jargon
to concepts. Like, each feature gets its own directory. Or with a
version control tool, I don't know if a feature maps to a branch, or a
commit?

I appreciate any advice or guidance anyone has to offer.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Cannot get past this string related issue

2011-04-26 Thread Thomas Rachel

Am 26.04.2011 15:48, schrieb Oltmans:


Following doesn't work


config = ConfigParser.ConfigParser()
config.read('configs.txt')
server_info = config.get("DB_INFO", "server")
db = config.get("DB_INFO", "database")
username = config.get("DB_INFO", "user")
pwd = config.get("DB_INFO", "password")
print server_info, db, username, pwd
conn = _mssql.connect(server=server_info, database=db, user=username, password=pwd)

but following does work

conn = _mssql.connect(server='server', database='database', user='user', password='password')


Ok, if you are this far: what prevents you from trying

print server_info, db, username, pwd

and being aware that IF there are "" around, they are not part of the 
string representation, but they are really there.




Config file looks like following

[DB_INFO]
server = "server"
database = "database"
user = "user"
password = "password"


I think once you have seen the output above, you will probably see 
what is wrong here: too many "s. :-)


HTH & HAND!

Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Cannot get past this string related issue

2011-04-26 Thread Oltmans
On Apr 26, 7:39 pm, Thomas Rachel nutznetz-0c1b6768-bfa9-48d5-
a470-7603bd3aa...@spamschutz.glglgl.de wrote:
 Am 26.04.2011 15:48, schrieb Oltmans:

  Following doesn't work

  config = ConfigParser.ConfigParser()
  config.read('configs.txt')
  server_info = config.get("DB_INFO", "server")
  db = config.get("DB_INFO", "database")
  username = config.get("DB_INFO", "user")
  pwd = config.get("DB_INFO", "password")
  print server_info, db, username, pwd
  conn =
  _mssql.connect(server=server_info, database=db, user=username, password=pwd)

  but following does work

  conn =
  _mssql.connect(server='server', database='database', user='user', password='password')

 Ok, if you are this far: what prevents you from trying

 print server_info, db, username, pwd

 and being aware that IF there are "" around, they are not part of the
 string representation, but they are really there.

  Config file looks like following

  [DB_INFO]
  server = "server"
  database = "database"
  user = "user"
  password = "password"

 I think once you have seen the output above, you will probably see
 what is wrong here: too many "s. :-)

Many thanks, really appreciate help.

 HTH & HAND!

 Thomas

-- 
http://mail.python.org/mailman/listinfo/python-list


How to concatenate unicode strings ???

2011-04-26 Thread Ariel
Hi everybody, how could I concatenate unicode strings ???
What I want to do is this:

unicode('this an example language ') + unicode('español')

but I get an:
Traceback (most recent call last):
  File "<console>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 11:
ordinal not in range(128)

How could I concatenate unicode strings ???

Regards
Thanks in advance.
Ariel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Development tools and practices for Pythonistas

2011-04-26 Thread rusi
On Apr 26, 7:39 pm, snorble snor...@hotmail.com wrote:


 I am aware of tools like version control systems, bug trackers, and
 things like these, but I'm not really sure if I need them,

You either don't want version control

 But if I ever made something worth releasing, and got a request
 like, I have problems with the 2.0 version. Can you send me the old
 1.1 version? I'd be like, uhhh... let me hunt through my files by
 hand and get back to you in a month. I'm thinking I can do a lot
 better than this.

Or you do!


 or how to use them properly.

So the best advice would be: Forget python (for a while) and study one
(modern distributed) version control system:
http://en.wikipedia.org/wiki/Distributed_revision_control

From Joel Spolsky's  http://joelonsoftware.com/items/2010/03/17.html

With distributed version control, the distributed part is actually not
the most interesting part.
The interesting part is that these systems think in terms of changes,
not in terms of versions.

bug-trackers are a bit of overkill for a solo developer. However...

 My current tools:

 Python, gvim, OS file system
Hand in hand with a DVCS, you need to add a testing framework.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Development tools and practices for Pythonistas

2011-04-26 Thread Martin P. Hellwig

On 26/04/2011 14:39, snorble wrote:
cut explanation
I would strongly advise getting familiar with:
- Lint tools (like PyLint)
- Refactoring
- Source Control Systems (like Mercurial Hg)
- Unit Testing with Code Coverage

Followed by either writing your own toolset that integrates all of the 
above or start learning an IDE that has that stuff built-in (my personal 
preference is the latter with my current IDE being PyDev (Eclipse)).


Yes you will be less productive for a couple of weeks, but I promise you 
that once you know the above you win that time back very shortly, or if 
you are as chaotic as me, within a week :-).


--
mph
--
http://mail.python.org/mailman/listinfo/python-list


Re: How to concatenate unicode strings ???

2011-04-26 Thread Chris Rebert
On Tue, Apr 26, 2011 at 8:58 AM, Ariel isaacr...@gmail.com wrote:
 Hi everybody, how could I concatenate unicode strings ???
 What I want to do is this:

 unicode('this an example language ') + unicode('español')

 but I get an:
 Traceback (most recent call last):
    File "<console>", line 1, in <module>
 UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 11:
 ordinal not in range(128)

 How could I concatenate unicode strings ???

That error is from the 2nd call to unicode(), not from the
concatenation itself. Use proper Unicode string literals:

u'this an example language ' + u'español'

You'll probably also need to add the appropriate source file encoding
declaration; see http://www.python.org/dev/peps/pep-0263/

Cheers,
Chris
--
http://rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Development tools and practices for Pythonistas

2011-04-26 Thread Jayme Proni Filho
That's my notebook/PC configuration; I'm a user of Gentoo and Slackware.
* vi (not vim) for C/C++
* Eclipse for Java
* Wing IDE Professional (Python Editor);
* Django: best framework for now, in my opinion;
* tkinter, GTK and Qt;
* MySQL, PostgreSQL ;
* Rational Rose Modeler (very expensive) (UML, because it is very important
for designing the app);
* Learn one CVS model and one SCM too; enterprises use one of them;
* And you must STUDY and READ, READ, READ every official doc about this
stuff.

About your project's tree, I cannot help you much, because you will get this with
time, and CVS and SCM help you learn about trees.

Now, about the tree: in the beginning, when I started in 1993, I used to do it
this way.

Primary folder called: pythoncourse
Sub-folders called: for, if, while, do_while, array, list, hash, functions,
pydb, publish, comments, iterative, recursion, etc.
My files I named like this:
In sub-folder FOR: for1.py, for2.py, for_in_for.py, for_final.py, for
example.

My English is usually better than this, but I'm starving here writing for you
and thinking about so much at stake. Bye bye. :D

Good studies and ideas.

---
Jayme Proni Filho
Skype: jaymeproni
Twitter: @jaymeproni
Phone: +55 - 17 - 3631 - 6576
Mobile: +55 - 17 - 9605 - 3560
e-Mail: jaymeproni at yahoo dot com dot br
---
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to concatenate unicode strings ???

2011-04-26 Thread Dan Stromberg
On Tue, Apr 26, 2011 at 8:58 AM, Ariel isaacr...@gmail.com wrote:
 Hi everybody, how could I concatenate unicode strings ???
 What I want to do is this:

 unicode('this an example language ') + unicode('español')

 but I get an:
 Traceback (most recent call last):
    File "<console>", line 1, in <module>
 UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 11:
 ordinal not in range(128)

 How could I concatenate unicode strings ???

I believe it's not the catenation, but rather the second of two
unicode() invocations getting an invalid character for the default
encoding:

$ /usr/local/cpython-2.7/bin/python
cmd started 2011 Tue Apr 26 09:10:22 AM
Python 2.7 (r27:82500, Aug  2 2010, 19:15:05)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> unicode('this an example language ') + unicode('español')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position
4: ordinal not in range(128)
>>> unicode('this an example language ') + unicode('español', 'latin-1')
u'this an example language espa\xc3\xb1ol'

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Py_INCREF() incomprehension

2011-04-26 Thread Thomas Rachel

Am 26.04.2011 16:03, schrieb Hegedüs Ervin:


I've read API doc (which you've included in another mail), but
that's not clear for me. :(


No problem, I'll go into detail now that I have read it again. (I didn't 
want to help from memory, as it is some time ago that I worked with it, and 
didn't have time to read it.)



The most critical parts are indeed

* the input parameters


The ownership rules say that the input parameter belongs to the caller 
who holds it at least until we return. (We just borrow it.) So no 
action needed.




* Py_BuildValue()


This function transfers ownership, as it is none of 
(PyTuple_GetItem(), PyList_GetItem(), PyDict_GetItem(), 
PyDict_GetItemString()).


So the value it returns belongs to us, for now.

We do transfer ownership to our caller (implicitly), so no action is 
required as well here.




so, it means when I implicitly allocate a new object (with
Py_BuildValue()), Python's GC will free that pointer when it
isn't required anymore?


In a way, yes. But you have to obey ownership: whom belongs the current 
reference? If it is not ours, and we need it, we do Py_(X)INCREF(). If 
we got it, but don't need it, we do Py_(X)DECREF().




BTW: Is there any reason for using calloc()? malloc() would probably
be faster...


may be, I didn't measure it ever... but calloc() gives clear
space... :)


Ok. (But as sizeof(char) is, by C standard definition, always 1, you can 
write it shorter.)



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: How to concatenate unicode strings ???

2011-04-26 Thread Ariel
And what about if after the string is concat I want it to pass is to the
command line to do anything else,  for instance:
one_command = cadena.decode('utf-8') + cadena1.decode('utf-8')
commands.getoutput(one_command)

But I receive this error:
Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "/usr/lib/python2.6/commands.py", line 46, in getoutput
    return getstatusoutput(cmd)[1]
  File "/usr/lib/python2.6/commands.py", line 55, in getstatusoutput
    pipe = os.popen('{ ' + cmd + '; } 2>&1', 'r')
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf1' in position
31: ordinal not in range(128)

How could I solve that ???
Regards
Ariel

On Tue, Apr 26, 2011 at 6:07 PM, Chris Rebert c...@rebertia.com wrote:

 On Tue, Apr 26, 2011 at 8:58 AM, Ariel isaacr...@gmail.com wrote:
  Hi everybody, how could I concatenate unicode strings ???
  What I want to do is this:
 
  unicode('this an example language ') + unicode('español')
 
  but I get an:
  Traceback (most recent call last):
    File "<console>", line 1, in <module>
  UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 11:
  ordinal not in range(128)
 
  How could I concatenate unicode strings ???

 That error is from the 2nd call to unicode(), not from the
 concatenation itself. Use proper Unicode string literals:

 u'this an example language ' + u'español'

 You'll probably also need to add the appropriate source file encoding
 declaration; see http://www.python.org/dev/peps/pep-0263/

 Cheers,
 Chris
 --
 http://rebertia.com

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to concatenate unicode strings ???

2011-04-26 Thread Albert Hopkins
On Tue, 2011-04-26 at 17:58 +0200, Ariel wrote:
 Hi everybody, how could I concatenate unicode strings ??? 
 What I want to do is this:
 
 unicode('this an example language ') + unicode('español') 
 
 but I get an:
 Traceback (most recent call last):
   File "<console>", line 1, in <module>
 UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position
 11: ordinal not in range(128)
 
 How could I concatenate unicode strings ???
 
Your problem isn't with concatenation. Your problem is with:


unicode('español')


That is, you are passing a non-unicode string to the unicode type and,
it seems, the default encoding on your system is ASCII, but ñ
is not valid ASCII.

So you can do one of two things:

* Use a unicode literal, e.g. u'español'
* pass whatever encoding you are actually using in your byte string,
  e.g. unicode('español', 'utf8')

If you are writing this in a module and you want to use unicode
literals, you should put something similar at the top of the file:

# -*- encoding: utf-8 -*-

HTH,
-a


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to concatenate unicode strings ???

2011-04-26 Thread Ariel
with commands.getoutput(one_command.encode('utf-8')) it works !!!

On Tue, Apr 26, 2011 at 6:22 PM, Ariel isaacr...@gmail.com wrote:

 And what about if after the string is concat I want it to pass is to the
 command line to do anything else,  for instance:
 one_command = cadena.decode('utf-8') + cadena1.decode('utf-8')
 commands.getoutput(one_comand)

 But I receive this error:

 Traceback (most recent call last):
   File "<console>", line 1, in <module>
   File "/usr/lib/python2.6/commands.py", line 46, in getoutput
     return getstatusoutput(cmd)[1]
   File "/usr/lib/python2.6/commands.py", line 55, in getstatusoutput
     pipe = os.popen('{ ' + cmd + '; } 2>&1', 'r')
 UnicodeEncodeError: 'ascii' codec can't encode character u'\xf1' in
 position 31: ordinal not in range(128)

 How could I solve that ???
 Regards
 Ariel


 On Tue, Apr 26, 2011 at 6:07 PM, Chris Rebert c...@rebertia.com wrote:

 On Tue, Apr 26, 2011 at 8:58 AM, Ariel isaacr...@gmail.com wrote:
  Hi everybody, how could I concatenate unicode strings ???
  What I want to do is this:
 
  unicode('this an example language ') + unicode('español')
 
  but I get an:
  Traceback (most recent call last):
    File "<console>", line 1, in <module>
  UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 11:
  ordinal not in range(128)
 
  How could I concatenate unicode strings ???

 That error is from the 2nd call to unicode(), not from the
 concatenation itself. Use proper Unicode string literals:

 u'this an example language ' + u'español'

 You'll probably also need to add the appropriate source file encoding
 declaration; see http://www.python.org/dev/peps/pep-0263/

 Cheers,
 Chris
 --
 http://rebertia.com



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to concatenate unicode strings ???

2011-04-26 Thread Jean-Michel Pichavant

Chris Rebert wrote:

On Tue, Apr 26, 2011 at 8:58 AM, Ariel isaacr...@gmail.com wrote:
  

Hi everybody, how could I concatenate unicode strings ???
What I want to do is this:

unicode('this an example language ') + unicode('español')

but I get an:
Traceback (most recent call last):
  File "<console>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 11:
ordinal not in range(128)

How could I concatenate unicode strings ???



That error is from the 2nd call to unicode(), not from the
concatenation itself. Use proper Unicode string literals:

u'this an example language ' + u'español'

You'll probably also need to add the appropriate source file encoding
declaration; see http://www.python.org/dev/peps/pep-0263/

Cheers,
Chris
--
http://rebertia.com
  

an example of shebang

#!/usr/bin/python
# -*- coding: utf-8 -*-


that should allow you to write u'español' in your code.

JM
--
http://mail.python.org/mailman/listinfo/python-list


Re: How to concatenate unicode strings ???

2011-04-26 Thread Terry Reedy

On 4/26/2011 12:07 PM, Chris Rebert wrote:

On Tue, Apr 26, 2011 at 8:58 AM, Arielisaacr...@gmail.com  wrote:

Hi everybody, how could I concatenate unicode strings ???
What I want to do is this:

unicode('this an example language ') + unicode('español')

but I get an:
Traceback (most recent call last):
   File "<console>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 11:
ordinal not in range(128)

How could I concatenate unicode strings ???


That error is from the 2nd call to unicode(), not from the
concatenation itself. Use proper Unicode string literals:

u'this an example language ' + u'español'

You'll probably also need to add the appropriate source file encoding
declaration; see http://www.python.org/dev/peps/pep-0263/


Or use Python 3

--
Terry Jan Reedy


--
http://mail.python.org/mailman/listinfo/python-list


Re: Terrible FPU performance

2011-04-26 Thread Chris Colbert
On Tue, Apr 26, 2011 at 8:40 AM, Mihai Badoiu mbad...@gmail.com wrote:

 Hi,

 I have terrible performance for multiplication when one number gets very
 close to zero.  I'm using cython by writing the following code:


You should ask this question on the Cython users mailing list.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Development tools and practices for Pythonistas

2011-04-26 Thread Jayme Proni Filho
When I said to study two kinds of version control, it is because I don't
know whether you are already working with programming. So I think it is very
important that you know how to work with both kinds of version control system.
You don't need to install both; just read about one and work with the other. For
applying changes it is better to know both of them.

I agree with rusi on one thing: you must know one version control model, but
not just distributed, because some enterprises use one kind of version control
system and others use another.

---
Jayme Proni Filho
Skype: jaymeproni
Twitter: @jaymeproni
Phone: +55 - 17 - 3631 - 6576
Mobile: +55 - 17 - 9605 - 3560
e-Mail: jaymeproni at yahoo dot com dot br
---
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Py_INCREF() incomprehension

2011-04-26 Thread Hegedüs Ervin
Dear Thomas,

thank you again,

 The ownership rules say that the input parameter belongs to the
 caller who holds it at least until we return. (We just borrow it.)
 So no action needed.

ok, its' clear, I understand,
 
 * Py_BuildValue()
 
 This function transfers ownership, as it is none of
 (PyTuple_GetItem(), PyList_GetItem(), PyDict_GetItem(),
 PyDict_GetItemString()).
 
 So the value it returns belongs to us, for now.
 
 We do transfer ownership to our caller (implicitly), so no action is
 required as well here.

also,
 
 so, it means when I implicitly allocate a new object (with
 Py_BuildValue()), Python's GC will free that pointer when it
 isn't required anymore?
 
 In a way, yes. But you have to obey ownership: whom belongs the
 current reference? If it is not ours, and we need it, we do
 Py_(X)INCREF(). If we got it, but don't need it, we do
 Py_(X)DECREF().

right, it's clear again,
 
 BTW: Is there any reason for using calloc()? malloc() would probably
 be faster...
 
 may be, I didn't measure it ever... but calloc() gives clear
 space... :)
 
 Ok. (But as sizeof(char) is, by C standard definition, always 1, you
 can write it shorter.)

oh well, thanks, I just wrote it off the cuff :), I just realized
it now... :)

Another question: here is an another part ot my code:

static PyObject*
mycrypt_decrypt(PyObject *self, PyObject *args)
{
    if (!PyArg_ParseTuple(args, "ss", &data, &path)) {
        return NULL;
    }

...

}

When I call this function from Python without argument or more
than it expects, I get an exception, eg.:
TypeError: function takes exactly 2 arguments (0 given)

But when I don't read the input arguments (there is no
PyArg_ParseTuple call), there is no exception.

How does Python handle the number of arguments? I only ask this
because I don't set an error string with PyErr_SetString, but I still get
TypeError - how does Python know this error was raised?

Hope you understand my question... :)

thanks for all:


a.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Terrible FPU performance

2011-04-26 Thread Mihai Badoiu
Already did.  They suggested the python list, because the generated asm code
is really correct and the problem might be with the Python running on top.

On Tue, Apr 26, 2011 at 1:04 PM, Chris Colbert sccolb...@gmail.com wrote:



 On Tue, Apr 26, 2011 at 8:40 AM, Mihai Badoiu mbad...@gmail.com wrote:

 Hi,

 I have terrible performance for multiplication when one number gets very
 close to zero.  I'm using cython by writing the following code:


 You should ask this question on the Cython users mailing list.


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Terrible FPU performance

2011-04-26 Thread Philip Semanchuk

On Apr 26, 2011, at 1:34 PM, Mihai Badoiu wrote:

 Already did.  They suggested the python list, because the asm generated code
 is really correct and the problem might be with the python running on top.

Does the same timing inconsistency appear when you use pure Python?
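
For instance, a rough pure-Python check along those lines (a sketch; the
constants mirror the original Cython snippet, and the absolute numbers are
machine-dependent):

import timeit

decay_only = """
x = 1.0
for i in range(1000):
    x *= 0.8
"""

decay_and_add = """
x = 1.0
for i in range(1000):
    x *= 0.8
    x += 0.01
"""

print timeit.timeit(decay_only, number=1000)
print timeit.timeit(decay_and_add, number=1000)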

bye
Philip


 
 On Tue, Apr 26, 2011 at 1:04 PM, Chris Colbert sccolb...@gmail.com wrote:
 
 
 
 On Tue, Apr 26, 2011 at 8:40 AM, Mihai Badoiu mbad...@gmail.com wrote:
 
 Hi,
 
 I have terrible performance for multiplication when one number gets very
 close to zero.  I'm using cython by writing the following code:
 
 
 You should ask this question on the Cython users mailing list.
 
 
 -- 
 http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Py_INCREF() incomprehension

2011-04-26 Thread Thomas Rachel

Am 26.04.2011 19:28, schrieb Hegedüs Ervin:


Another question: here is an another part ot my code:

static PyObject*
mycrypt_decrypt(PyObject *self, PyObject *args)
{
  if (!PyArg_ParseTuple(args, "ss", &data, &path)) {
 return NULL;
 }

...

}

When I call this function from Python without argument or more
than it expects, I get an exception, eg.:
TypeError: function takes exactly 2 arguments (0 given)

But, when I don't read input arguments (there isn't
PyArg_ParseTuple), there isn't exception.

How Python handle the number of arguments?


From what you tell it: with PyArg_ParseTuple(). (see 
http://docs.python.org/c-api/arg.html for this).


You give a format string (in your case: "ss"; again: better use "s#s#" 
if possible) which is parsed in order to get the (needed number of) 
parameters.


If you call with () or only one arg, args points to an empty tuple, but 
the parser wants two arguments - bang.


If you call with more than two args, the function notices it too: the 
arguments would just be dropped, which is probably not what is wanted.


If you call with two args, but of wrong type, they don't match to "s" 
(=string) - bang again.


Only with calling with the correct number AND type of args, the function 
says ok.
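
You can see the same machinery from pure Python with any C-implemented 
builtin -- just an illustration, nothing specific to your module:

try:
    divmod(1)           # divmod() wants exactly two arguments
except TypeError as exc:
    print(exc)          # the parse error, something like "expected 2 arguments, got 1"

The TypeError comes from the argument-parsing code itself, before the 
function body ever runs.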


Why is "s#" better than "s"? Simple: the former gives the string length 
as well. "s" means a 0-terminated string, which might not be what you 
want, especially with binary data (what you have, I suppose).


If you give e.g. "ab\0cd" where "s" is used, you get an exception as 
well, as this string cannot be parsed completely. So better use "s#" and 
get the length as well.




I just ask this,
because I don't set errstring with PyErr_SetString, but I get
TypeError - how does Python knows, this error raised?


There is magic inside... :-)


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: How to concatenate unicode strings ???

2011-04-26 Thread Algis Kabaila
On Wednesday 27 April 2011 02:33:00 Ariel wrote:
 with commands.getoutput(one_comand.encode('utf-8'))  it works
 !!!
 
 On Tue, Apr 26, 2011 at 6:22 PM, Ariel isaacr...@gmail.com 
wrote:
  And what about if, after the strings are concatenated, I want to
  pass the result to the command line to do something else, for
  instance:
  one_command = cadena.decode('utf-8') +
  cadena1.decode('utf-8') commands.getoutput(one_comand)
  
  But I receive this error:
  
  Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/usr/lib/python2.6/commands.py", line 46, in getoutput

  return getstatusoutput(cmd)[1]

File "/usr/lib/python2.6/commands.py", line 55, in getstatusoutput

  pipe = os.popen('{ ' + cmd + '; } 2>&1', 'r')
  
  UnicodeEncodeError: 'ascii' codec can't encode character
  u'\xf1' in position 31: ordinal not in range(128)
  
  How could I solve that ???
  Regards
  Ariel
  
  On Tue, Apr 26, 2011 at 6:07 PM, Chris Rebert 
c...@rebertia.com wrote:
  On Tue, Apr 26, 2011 at 8:58 AM, Ariel 
isaacr...@gmail.com wrote:
   Hi everybody, how could I concatenate unicode strings
   ??? What I want to do is this:
   
   unicode('this an example language ') +
   unicode('español')
   
   but I get an:
   
    Traceback (most recent call last):
  File "<console>", line 1, in <module>
   
   UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3
   in position 11: ordinal not in range(128)
   
   How could I concatenate unicode strings ???
  
  That error is from the 2nd call to unicode(), not from the
  concatenation itself. Use proper Unicode string literals:
  
  u'this an example language ' + u'español'
  
  You'll probably also need to add the appropriate source
  file encoding declaration; see
  http://www.python.org/dev/peps/pep-0263/
  
  Cheers,
  Chris
  --
  http://rebertia.com
The following is from Idle3 (IDLE for Python3:

>>> 'this an example language ' + 'español'
'this an example language español'

In Python3 all strings are unicode, so your problem just does 
not exist.  Upgrading to Python 3 would eliminate the problem, 
as the above extract demonstrates. 

Perhaps it is time to upgrade to Python 3.2 

In the above when I write Python 3, I mean Python 3.1 or 
higher.  
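
For completeness, a small Python 3 sketch of the same task, using echo 
as a stand-in command and assuming a UTF-8 locale (untested here):

import subprocess

one_command = 'echo language: ' + 'español'   # plain str, already unicode
print(subprocess.getoutput(one_command))

No .decode()/.encode() juggling is needed; the encoding happens at the 
boundary to the shell.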

With kind regards,

OldAl.

PS: I do not have Spanish on my computer, but I do have at least 
one other language that uses characters that are outside of 
the ascii limit of 128.  
A.
-- 
Algis
http://akabaila.pcug.org.au/StructuralAnalysis.pdf
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Development tools and practices for Pythonistas

2011-04-26 Thread Jean-Michel Pichavant

snorble wrote:

I'm not a Pythonista, but I aspire to be.

My current tools:

Python, gvim, OS file system

My current practices:

When I write a Python app, I have several unorganized scripts in a
directory (usually with several named test1.py, test2.py, etc., from
random ideas I have tested), and maybe a todo.txt file. Then I hack
away, adding features in a semi-random order. Then I get busy with
other things. Maybe one week I spend 20 hours on development. The next
week, no time on development. A few weeks later when I have some time,
I'm excited to get back to making progress, only to find that I have
to spend 30-60 minutes figuring out where I left off. The code is
usually out of sync with todo.txt. I see people who release new
versions and bug fixes, so I sometimes will create a new directory and
continue working from that copy, because it seems like the thing to
do. But if I ever made something worth releasing, and got a request
like, I have problems with the 2.0 version. Can you send me the old
1.1 version? I'd be like, uhhh... let me hunt through my files by
hand and get back to you in a month. I'm thinking I can do a lot
better than this.

I am aware of tools like version control systems, bug trackers, and
things like these, but I'm not really sure if I need them, or how to
use them properly. I think having some organization to all of this
would help me to make more consistent progress, and spend less time
bringing myself up to speed after some time off.

I really like the idea of having a list of features, and tackling
those features one at a time. I read about people who do this, and
each new features gets a new minor version number. It sounds very
organized and clean. But I'm not really sure of the best way to
achieve this. Mainly I think I just need some recommendations to help
create a good mental map of what needs to happen, and mapping jargon
to concepts. Like, each feature gets its own directory. Or with a
version control tool, I don't know if a feature maps to a branch, or a
commit?

I appreciate any advice or guidance anyone has to offer.
  
You can have a look at SVN and bugzilla, they are free SCM  bug tracker 
applications.
Make sure it's worth the pain though, these tools are not that easy to 
administrate (the usage is pretty simple).


JM


--
http://mail.python.org/mailman/listinfo/python-list


Re: Development tools and practices for Pythonistas

2011-04-26 Thread Thomas Rachel

Am 26.04.2011 16:39, schrieb snorble:


When I write a Python app, I have several unorganized scripts in a
directory (usually with several named test1.py, test2.py, etc., from
random ideas I have tested), and maybe a todo.txt file. Then I hack
away, adding features in a semi-random order. Then I get busy with
other things. Maybe one week I spend 20 hours on development. The next
week, no time on development. A few weeks later when I have some time,
I'm excited to get back to making progress, only to find that I have
to spend 30-60 minutes figuring out where I left off. The code is
usually out of sync with todo.txt.


That happens...


 I see people who release new

versions and bug fixes, so I sometimes will create a new directory and
continue working from that copy, because it seems like the thing to
do. But if I ever made something worth releasing, and got a request
like, I have problems with the 2.0 version. Can you send me the old
1.1 version? I'd be like, uhhh... let me hunt through my files by
hand and get back to you in a month. I'm thinking I can do a lot
better than this.


That is another subject (IMO), and you are right: you can do a lot 
better using the right tools.




I am aware of tools like version control systems, bug trackers, and
things like these, but I'm not really sure if I need them, or how to
use them properly.


I have been using several VCSes for about 5 or 6 years now, and I can only 
tell you that, once you start using them, you will wonder how you ever got 
along without them.



 I think having some organization to all of this

would help me to make more consistent progress, and spend less time
bringing myself up to speed after some time off.


I don't see how these tools will help to get up to date the way you 
describe it - but all other issues are well coped with using a VCS. I 
personally started with cvs (don't do that!), then worked with svn (do 
that only if you really need that), then got working with hg 
(Mercurial), which is a very good thing.



Say you are at a certain point in development, and you add a new feature.
Then you make all the changes, including increasing the version number. 
The changes you have just made are one changeset, which you commit with 
a good commit message. If you have a version which you ship out, you 
give it a tag. In this way, you can easily switch from the 2.0 you are 
working on to the 1.5 requested by the customer.



Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Simple map/reduce utility function for data analysis

2011-04-26 Thread Raymond Hettinger
On Apr 25, 7:42 pm, Paul Rubin no.em...@nospam.invalid wrote:
 Raymond Hettinger pyt...@rcn.com writes:
  Here's a handy utility function for you guys to play with:
     http://code.activestate.com/recipes/577676/

 Cute, but why not use collections.defaultdict for the return dict?
 Untested:

My first draft had a defaultdict but that implementation detail would
get exposed to the user unless the return value was first coerced to a
regular dict.  Also, I avoided modern python features so the code
would run well on psyco and so that it would make sense to beginning
users.


 Untested:
   d = defaultdict(list)
   for key,value in ifilter(bool,imap(mapper, data)):
  d[key].append(value)
   ...

Nice use of itertools.  FWIW, ifilter() will accept None for the first
argument -- that's a bit faster than using bool().
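
For the archives, roughly the shape being discussed -- a sketch of the 
idea, not the actual recipe:

from collections import defaultdict
from itertools import ifilter, imap   # Python 2; plain filter/map on 3.x

def map_reduce(data, mapper):
    d = defaultdict(list)
    for key, value in ifilter(None, imap(mapper, data)):
        d[key].append(value)
    return dict(d)    # coerce, so the defaultdict never leaks to the caller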


Raymond
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Development tools and practices for Pythonistas

2011-04-26 Thread CM
On Apr 26, 10:39 am, snorble snor...@hotmail.com wrote:
 I'm not a Pythonista, but I aspire to be.

 My current tools:

 Python, gvim, OS file system

 My current practices:

 When I write a Python app, I have several unorganized scripts in a
 directory (usually with several named test1.py, test2.py, etc., from
 random ideas I have tested), and maybe a todo.txt file.

First, give your files meaningful names.  test1.py, test2.py...no
wonder you have to spend an hour just figuring out where you left
off.  Imagine instead if these were modules called
DiskAccessUtilities.py and UserPreferencesPanel.py.  You should also
use highly descriptive names in naming classes and functions, too,
with functions being verbs and don't be afraid of long(ish) function
names like AddNewCustomerToDatabase()

 Then I hack away, adding features in a semi-random order.

I'd spend an hour with your TODO list and break it into priority
categories, like High, Med, Low, and It Would Be Nice, and then take
them strictly in that order.  I'd also date the issues so you have a
sense for how long something has been waiting to be fixed.

 Then I get busy with other things. Maybe one week I spend 20 hours  on 
 development. The next week, no time on development. A few
 weeks later when I have some time, I'm excited to get back to making
 progress, only to find that I have to spend 30-60 minutes figuring out
 where I left off.

I would try to not stop in the middle of creating a new feature or
fixing a bug, but try to finish it out before taking a week off.  This
way, when you come back, you can just tackle something from the High
Priority stack and not have to figure out where you were.

 The code is usually out of sync with todo.txt.

That's just a matter of being disciplined.  When I fix a bug, I simply
cut the bug from the TODO part to the DONE part of the .txt file,
nearly every time.  It requires no effort compared to actually fixing
the bug, yet it feels satisfying to get that text moved.

 would help me to make more consistent progress, and spend less time
 bringing myself up to speed after some time off.

Are you documenting your code?  That can help (I need to get better
about that as well).  Also, are things broken down into modules that
are self-contained?  That also can help much.  Is the TODO.txt always
open while you are working?  It should be.  Lastly, keeping some kind
of notebook or even Post-Its or a bulletin board over your desk with
notes as to what's next and where to hunt in your code to get at it
should help.  Imagine if you take two weeks off, come back, want to
work on the project, and you find this note on your bulletin board:
In the CustomersPanel.py module, add support for a wxSearchCtrl
(search bar) that searches the Former_Customers table in the main
database...  Now you are ready to jump right in!

 I really like the idea of having a list of features, and tackling
 those features one at a time. I read about people who do this, and
 each new features gets a new minor version number.

Is that true?  I'm under the impression that projects do a bunch of
changes (bug fixes, and new features) and then release a new version
that has a decent amount of changes.  I don't think people want tons
of versions of most projects around, but that each release should have
an appreciable amount of good changes.

 to concepts. Like, each feature gets its own directory.

I guess it depends on your project, but that sounds needlessly complex
and way too tough with a VCS.  I'd say just don't go there.

Once you use a VCS you will probably settle into a better pattern, but
things like good naming, documenting, notes, prioritizing features/
bugs, and roadmaps don't magically go away.  Software development
takes long range thinking and some organizational discipline.

Che
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: De-tupleizing a list

2011-04-26 Thread Algis Kabaila
On Tuesday 26 April 2011 22:19:08 Gnarlodious wrote:
 On Apr 25, 10:59 pm, Steven D'Aprano wrote:
  In Python 3, map becomes lazy and returns an iterator
  instead of a list, so you have to wrap it in a call to
  list().
 
 Ah, thanks for that tip. Also works for outputting a tuple:
 list_of_tuples=[('0A',), ('1B',), ('2C',), ('3D',)]
 
 #WRONG:
 (x for (x,) in list_of_tuples)
 <generator object <genexpr> at 0x1081ee0>
 
 #RIGHT:
 tuple(x for (x,) in list_of_tuples)
 
 Thanks everyone for the abundant help.
 
 -- Gnarlie

I think you already have the following:
>>> t = [('0A',), ('1B',), ('2C',), ('3D',)]
>>> ls = [v[0] for v in t]
>>> ls
['0A', '1B', '2C', '3D']
 

I would prefer that to using a ready-made module, as it would 
be quicker than learning about the module; OTOH, learning about 
a module may be useful for other problems.  A standard dilemma...

The above quote of code is in Idle3, running Python 3.1.

OldAl.
-- 
Algis
http://akabaila.pcug.org.au/StructuralAnalysis.pdf
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Development tools and practices for Pythonistas

2011-04-26 Thread CM
 I guess it depends on your project, but that sounds needlessly complex
 and way too tough with a VCS.  I'd say just don't go there.

(Whoops, I meant way too tough *without* a VCS, not with)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Development tools and practices for Pythonistas

2011-04-26 Thread Algis Kabaila
On Wednesday 27 April 2011 03:59:25 Thomas Rachel wrote:
 Am 26.04.2011 16:39, schrieb snorble:
  When I write a Python app, I have several unorganized

 I don't see how these tools will help to get up to date the
 way you describe it - but all other issues are well coped
 with using a VCS. I personally started with cvs (don't do
 that!), then worked with svn (do that only if you really
 need that), then got working with hg (Mercurial), which is a
 very good thing.
 
 Thomas

Thomas, have you tried bzr (Bazaar) and if so do you consider hg 
(Mercurial) better?

And why is it better?   (bzr is widely used in ubuntu, which is 
my favourite distro at present).

TIA,

OldAl.
-- 
Algis
http://akabaila.pcug.org.au/StructuralAnalysis.pdf
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Py_INCREF() incomprehension

2011-04-26 Thread Hegedüs Ervin
Hello,

 But, when I don't read input arguments (there isn't
 PyArg_ParseTuple), there isn't exception.
 
 How Python handle the number of arguments?
 
 From what you tell it: with PyArg_ParseTuple(). (see
 http://docs.python.org/c-api/arg.html for this).
 
 You give a format string (in your case: "ss", again: better use
 "s#s#" if possible) which is parsed in order to get the (needed
 number of) parameters.
 
 If you call with () or only one arg, args points to an empty tuple,
 but the parser wants two arguments - bang.
 
 If you call with more than two args, the function notices it too:
 the arguments would just be dropped, which is probably not what is
 wanted.
 
 If you call with two args, but of wrong type, they don't match to
 "s" (=string) - bang again.
 
 Only with calling with the correct number AND type of args, the
 function says ok.
 
 Why is "s#" better than "s"? Simple: the former gives the string
 length as well. "s" means a 0-terminated string, which might not be
 what you want, especially with binary data (what you have, I
 suppose).
 
 If you give e.g. "ab\0cd" where "s" is used, you get an exception as
 well, as this string cannot be parsed completely. So better use "s#"
 and get the length as well.

so, if I am right, when PyArg_ParseTuple() fails, _it_ raises the
TypeError exception... (?)

I think it's clear, thanks :)
 
 I just ask this,
 because I don't set errstring with PyErr_SetString, but I get
 TypeError - how does Python knows, this error raised?
 
 There is magic inside... :-)

waov :)

and (maybe) final question: :)

I defined many exceptions:

static PyObject *cibcrypt_error_nokey;
static PyObject *cibcrypt_error_nofile;
static PyObject *cibcrypt_error_badpad;
...

void handle_err(int errcode) {
    switch(errcode) {
    case -1: PyErr_SetString(cibcrypt_error_nokey, "Can't find key.");
             break;
    ...
    }
...
cibcrypt_error_nokey = PyErr_NewException("cibcrypt.error_nokey", NULL, NULL);
...
PyModule_AddObject(o, "error", cibcrypt_error_nokey);

Am I right that here, too, there is no need for any Py_INCREF()/Py_DECREF()
action, based on this doc:
http://docs.python.org/c-api/arg.html

Another useful function is PyErr_SetFromErrno(), which only
takes an exception argument and constructs the associated value
by inspection of the global variable errno. The most general
function is PyErr_SetObject(), which takes two object arguments,
the exception and its associated value. You don’t need to
Py_INCREF() the objects passed to any of these function


so, this part of code is right?


thanks again:


a.
 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Development tools and practices for Pythonistas

2011-04-26 Thread Algis Kabaila
On Wednesday 27 April 2011 04:31:19 CM wrote:
  I guess it depends on your project, but that sounds
  needlessly complex and way too tough with a VCS.  I'd say
  just don't go there.
 
 (Whoops, I meant way too tough *without* a VCS, not with)

And read your own emails *before* sending them   :)

Actually, CM has given some very good advice!  As I am probably 
the oldest person on this list, my penny's worth is that some 
going over old stuff is good - learning is repetitious and 
memory is not going to get better with age, so learn to live 
with it (it being the frailty of memory...)

OldAl.
-- 
Algis
http://akabaila.pcug.org.au/StructuralAnalysis.pdf
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Terrible FPU performance

2011-04-26 Thread Dan Goodman

Hi,

On 26/04/2011 15:40, Mihai Badoiu wrote:
 I have terrible performance for multiplication when one number gets very
 close to zero.  I'm using cython by writing the following code:

This might be an issue with denormal numbers:

http://en.wikipedia.org/wiki/Denormal_number

I don't know much about them though, so I can't advise any further than 
that...
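
A rough way to see the effect from Python, though (untested sketch; the 
numbers are only illustrative and the result is very CPU-dependent):

import timeit

def spin(start, reps=20000):
    x = start
    for _ in range(reps):
        x = x * 0.999      # stays normal or subnormal, depending on start
    return x

print("normal:   ", timeit.timeit(lambda: spin(1.0), number=50))
print("subnormal:", timeit.timeit(lambda: spin(1e-310), number=50))

1e-310 is below the smallest normal double (~2.2e-308), so the second 
loop works on denormals the whole time; in pure CPython part of any 
difference is hidden by interpreter overhead.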


Dan

--
http://mail.python.org/mailman/listinfo/python-list


Re: Terrible FPU performance

2011-04-26 Thread Mihai Badoiu
Yes, running on pure python has the same issue (but overall only a factor 3
away):

i = 0
x = 1.0
while i < 1000:
    x *= 0.8
    #x += 0.01
    i += 1
print x


On Tue, Apr 26, 2011 at 1:44 PM, Philip Semanchuk phi...@semanchuk.comwrote:


 On Apr 26, 2011, at 1:34 PM, Mihai Badoiu wrote:

  Already did.  They suggested the python list, because the asm generated
 code
  is really correct and the problem might be with the python running on
 top.

 Does the same timing inconsistency appear when you use pure Python?

 bye
 Philip


 
  On Tue, Apr 26, 2011 at 1:04 PM, Chris Colbert sccolb...@gmail.com
 wrote:
 
 
 
  On Tue, Apr 26, 2011 at 8:40 AM, Mihai Badoiu mbad...@gmail.com
 wrote:
 
  Hi,
 
  I have terrible performance for multiplication when one number gets
 very
  close to zero.  I'm using cython by writing the following code:
 
 
  You should ask this question on the Cython users mailing list.
 
 
  --
  http://mail.python.org/mailman/listinfo/python-list

 --
 http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Terrible FPU performance

2011-04-26 Thread Dan Stromberg
On Tue, Apr 26, 2011 at 6:40 AM, Mihai Badoiu mbad...@gmail.com wrote:
 Hi,
 I have terrible performance for multiplication when one number gets very
 close to zero.  I'm using cython by writing the following code:
     cdef int i
     cdef double x = 1.0
     for 0 <= i < 1000:
         x *= 0.8
         #x += 0.01
     print x
 This code runs much much slower (20+ times slower) with the line x += 0.01
 uncommented.  I looked at the deassembled code and it looks correct.
  Moreover, it's just a few lines and by writing a C code (without python on
 top), I get the same code, but it's much faster.  I've also tried using sse,
 but I get exactly the same behavior.  The best candidate that I see so far
 is that Python sets up the FPU in a different state than C.
 Any advice on how to solve this performance problem?
 thanks!

I'm getting almost the opposite result: with 1 operation (just the
*=), it's about 6 times slower than with two operations (with the *=
and +=).

I investigated whether it was a GC issue; I conclude it's probably not
- the number of objects in the heap was consistent between the fast
and slow cython versions.

I also split out the addition and multiplication into their own
functions, foo1 and foo2, and profiled what was left of the original
function, foo.  This did not tell anything salient either, other than
that the foo function had some hidden performance penalty.  A line by
line profiler might help, if Cython supports going down to that level;
it seems that function by function profiling may not be granular
enough for this problem.

I found that the performance difference wasn't as marked as on your
system - more like 6x.

I also found that the duration of the faster of the two cython's was
almost exactly the same as the duration of the C program I created to
do the same thing.

BTW, what version of CPython, and what version of Cython are you
using?  I used mostly CPython 3.1, with a Cython compiled from an
early version that supported generators - which I believe has now been
merged.  You might try a few different versions of Cython and compare
the results.

What OS are you on?  What kind of CPU?  I'm on Ubuntu 10.10 with an
AMD Athlon(tm) II X2 245 Processor.

It might reveal something if you hand-coded a C extension module that
does the same thing.  And while this may be slow, it might also be
revealing to try using decimal.Decimal arithmetic.

Interesting that you get a semi-similar result in pure CPython.

Please find below a shar of my experiments.

#!/bin/sh
# This is a shell archive (produced by GNU sharutils 4.9).
# To extract the files from this archive, save it to some FILE, remove
# everything before the `#!/bin/sh' line above, then type `sh FILE'.
#
lock_dir=_sh26876
# Made on 2011-04-26 12:08 PDT by dstromberg@benchbox.
# Source directory was `/home/dstromberg/src/cython-slowdown'.
#
# Existing files will *not* be overwritten, unless `-c' is specified.
#
# This shar contains:
# length mode   name
# -- -- --
#463 -rw-r--r-- cs.m4
#324 -rwxr-xr-x main
#787 -rw-r--r-- Makefile
#   1954 -rw-r--r-- output
#180 -rw-r--r-- t.c
#
MD5SUM=${MD5SUM-md5sum}
f=`${MD5SUM} --version | egrep '^md5sum .*(core|text)utils'`
test -n "${f}" && md5check=true || md5check=false
${md5check} || \
  echo 'Note: not verifying md5sums.  Consider installing GNU coreutils.'
save_IFS=${IFS}
IFS=${IFS}:
gettext_dir=FAILED
locale_dir=FAILED
first_param=$1
for dir in $PATH
do
  if test "$gettext_dir" = FAILED && test -f $dir/gettext \
     && ($dir/gettext --version >/dev/null 2>&1)
  then
case `$dir/gettext --version 2>&1 | sed 1q` in
  *GNU*) gettext_dir=$dir ;;
esac
  fi
  if test "$locale_dir" = FAILED && test -f $dir/shar \
     && ($dir/shar --print-text-domain-dir >/dev/null 2>&1)
  then
locale_dir=`$dir/shar --print-text-domain-dir`
  fi
done
IFS=$save_IFS
if test $locale_dir = FAILED || test $gettext_dir = FAILED
then
  echo=echo
else
  TEXTDOMAINDIR=$locale_dir
  export TEXTDOMAINDIR
  TEXTDOMAIN=sharutils
  export TEXTDOMAIN
  echo=$gettext_dir/gettext -s
fi
if (echo "testing\c"; echo 1,2,3) | grep c >/dev/null
then if (echo -n test; echo 1,2,3) | grep n >/dev/null
 then shar_n= shar_c='
'
 else shar_n=-n shar_c= ; fi
else shar_n= shar_c='\c' ; fi
f=shar-touch.$$
st1=200112312359.59
st2=123123592001.59
st2tr=123123592001.5 # old SysV 14-char limit
st3=1231235901

if touch -am -t ${st1} ${f} >/dev/null 2>&1 && \
   test ! -f ${st1} && test -f ${f}; then
  shar_touch='touch -am -t $1$2$3$4$5$6.$7 $8'

elif touch -am ${st2} ${f} >/dev/null 2>&1 && \
   test ! -f ${st2} && test ! -f ${st2tr} && test -f ${f}; then
  shar_touch='touch -am $3$4$5$6$1$2.$7 $8'

elif touch -am ${st3} ${f} >/dev/null 2>&1 && \
   test ! -f ${st3} && test -f ${f}; then
  shar_touch='touch -am $3$4$5$6$2 $8'

else
  shar_touch=:
  echo
  ${echo} 'WARNING: not restoring timestamps.  Consider getting and
installing GNU `touch'\'', distributed in GNU coreutils...'
  echo
fi
rm -f 

Re: Restarting a daemon

2011-04-26 Thread Chris Angelico
On Tue, Apr 26, 2011 at 10:13 PM, Jeffrey Barish
jeff_bar...@earthlink.net wrote:
 Not exactly a Python question, but I thought I would start here.

 I have a server that runs as a daemon.  I can restart the server manually
 with the command

 myserver restart

 This command starts a new myserver which first looks up the pid for the one
 that is running and sends it a terminate signal.  The new one then
 daemonizes itself.

What job manager do you have? Can you set up a script in /etc/init.d
or /etc/init and then use that to restart the daemon? Upstart scripts
can be managed with the 'initctl' command, for instance.

Chris Angelico
-- 
http://mail.python.org/mailman/listinfo/python-list


Reading Huge UnixMailbox Files

2011-04-26 Thread Brandon McGinty
List,
I'm trying to import hundreds of thousands of e-mail messages into a
database with Python.
However, some of these mailboxes are so large that they are giving
errors when being read with the standard mailbox module.
I created a buffered reader, that reads chunks of the mailbox, splits
them using the re.split function with a compiled regexp, and imports
each chunk as a message.
The regular expression work is where the bottle-neck appears to be,
based on timings.
I'm wondering if there is a faster way to do this, or some other method
that you all would recommend.

Brandon McGinty
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: De-tupleizing a list

2011-04-26 Thread Mark Niemczyk
Some interesting performance comparisons, under Python 3.2.  Times are 
relative, and are for an initial list of tuples with 500,000 items.  

(1)    ans = []                                          #relative time: 298
       for item in lst:
           ans += list(item)
       return ans

(2)    return [item[0] for item in lst]                  #relative time: 106

(3)    from operator import itemgetter                   #relative time:  84
       return list(map(itemgetter(0), lst))

(4)    import itertools                                  #relative time:  63
       return list(itertools.chain.from_iterable(lst))

(5)    return [x for (x,) in lst]                        #relative time:  52


With the caveat that 'your mileage may vary'
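
For anyone who wants to reproduce the numbers, a hedged sketch of a 
harness (an assumed setup, not the exact benchmark used above):

import timeit

lst = [(str(i),) for i in range(500000)]

variants = {
    '(2) index ': lambda: [item[0] for item in lst],
    '(5) unpack': lambda: [x for (x,) in lst],
}

for name, func in sorted(variants.items()):
    print(name, timeit.timeit(func, number=10))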

Regards,
Mark

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: De-tupleizing a list

2011-04-26 Thread Chris Angelico
On Wed, Apr 27, 2011 at 5:39 AM, Mark Niemczyk praham...@gmail.com wrote:
 (2)    return [item[0] for item in lst]                 #relative time: 106
 (5)    return [x for (x,) in lst]                       #relative time:  52

Interesting indeed. #5 will of course only work with a tuple of length
1, where most of the others are flexible enough to take the first
element from any length tuple; but the time saving is quite
significant.

Chris Angelico
-- 
http://mail.python.org/mailman/listinfo/python-list


client-server parallellised number crunching

2011-04-26 Thread Hans Georg Schaathun
I wonder if anyone has any experience with this ...

I try to set up a simple client-server system to do some number
crunching, using a simple ad hoc protocol over TCP/IP.  I use 
two Queue objects on the server side to manage the input and the output
of the client process.  A basic system running seemingly fine on a single 
quad-core box was surprisingly simple to set up, and it seems to give
me a reasonable speed-up of a factor of around 3-3.5 using four client 
processes in addition to the master process.  (If anyone wants more
details, please ask.)

Now, I would like to use remote hosts as well, more precisely, student
lab boxen which are rather unreliable.  By experience I'd expect to
lose roughly 4-5 jobs in 100 CPU hours on average.  Thus I need some 
way of detecting lost connections and requeue unfinished tasks, 
avoiding any serious delays in this detection.  What is the best way to
do this in python?

It is, of course, possible for the master thread upon processing the
results, to requeue the tasks for any missing results, but it seems
to me to be a cleaner solution if I could detect disconnects and
requeue the tasks from the networking threads.  Is that possible
using python sockets?

Somebody will probably ask why I am not using one of the multiprocessing 
libraries.  I have tried at least two, and got trapped by the overhead
of passing complex pickled objects across.  Doing it myself has at least
helped me clarify what can be parallelised effectively.  Now,
understanding the parallelisable subproblems better, I could try again,
if I can trust that these libraries can robustly handle lost clients.
That I don't know if I can.

Any ideas?
TIA
-- 
:-- Hans Georg
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Development tools and practices for Pythonistas

2011-04-26 Thread Chris Angelico
On Wed, Apr 27, 2011 at 12:39 AM, snorble snor...@hotmail.com wrote:
 When I write a Python app, I have several unorganized scripts in a
 directory (usually with several named test1.py, test2.py, etc., from
 random ideas I have tested), and maybe a todo.txt file. ... The code is
 usually out of sync with todo.txt. I see people who release new
 versions and bug fixes, so I sometimes will create a new directory and
 continue working from that copy, because it seems like the thing to
 do. But if I ever made something worth releasing, and got a request
 like, I have problems with the 2.0 version. Can you send me the old
 1.1 version?

As other people have said, version control is very handy. I use git
myself, but imho the choice of _which_ VCS you use is far less
important than the choice of _whether_ to use one.

As to the todo file - I tend to keep only vague ideas in a separate
file. Any todo that can be logically associated with a code file or,
especially, a particular piece of code, goes in that source file:

def function(parms):
# TODO: This should check if Foo matches Bar and shortcut the computation
...

I have a very strict format: T, O, D, O, literal ASCII, always
uppercase. Makes it easy to grep (and I try to avoid todo in
lower-case, which means I can use a case-insensitive search if I
choose).

Additionally, if there's any task that will require checking of
multiple parts of the code, I'll create a keyword for it. For
instance, if I'm considering adding a local cache to an application to
reduce database traffic, I might do this:

//TODO CACHE: This will need to update the cache
...
//TODO CACHE: Read from cache instead
...
//TODO CACHE: Would this affect the cache?
... etc

The benefits of having the comments right beside the code cannot be
underestimated. Comments are far less likely to get out of sync if
they stare you in the face while you're changing the code - this is
why doxygen and friends are so useful.

Ultimately, it all comes down to discipline, and how important the
project is to you. At work, I have a lot of disciplines; we have a
wiki where stuff gets documented, we have source control, we have
daily backups (as well), etc, etc, etc. For little home projects, it's
not usually worth the effort. Take your pick, where do you want to go?

Chris Angelico
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: De-tupleizing a list

2011-04-26 Thread Hans Georg Schaathun
On Wed, 27 Apr 2011 04:30:01 +1000, Algis Kabaila
  akaba...@pcug.org.au wrote:
:  I would prefer that to using a ready made module,  as it would 
:  be quicker than learning about the module,  OTH, learning about 
:  a module may be useful for other problems.  A standard dilema...

More importantly, list comprehension is very readable to /other/
people.  I don't know exactly what the pythonic philosophy is,
but when I started using python, more readable and intuitive code 
was one of the main motivators, and the only one which favours
python both over C/java and over Matlab ...

List comprehension is understood even by readers with no experience
with python.

-- 
:-- Hans Georg
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Reading Huge UnixMailbox Files

2011-04-26 Thread Dan Stromberg
On Tue, Apr 26, 2011 at 12:39 PM, Brandon McGinty
brandon.mcgi...@gmail.com wrote:
 List,
 I'm trying to import hundreds of thousands of e-mail messages into a
 database with Python.
 However, some of these mailboxes are so large that they are giving
 errors when being read with the standard mailbox module.
 I created a buffered reader, that reads chunks of the mailbox, splits
 them using the re.split function with a compiled regexp, and imports
 each chunk as a message.
 The regular expression work is where the bottle-neck appears to be,
 based on timings.
 I'm wondering if there is a faster way to do this, or some other method
 that you all would recommend.

 Brandon McGinty

Is it traditional mbox, or the more recent mbox that uses a
Content-length header?

Either way, you could probably read the mbox files line by line, and
yield a string corresponding to one message - one message at a time.

Traditional mbox is easier - you just look for lines that start with
"^From " - if a message actually wanted to include that in its body,
the MTA should prepend it with a ">" or something to avoid ambiguity.

With the Content-length header, you need to understand a little more
about the header lines - this header gives the length of the message
so that you don't need the ugly ">" escape for From's.
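
Something along these lines for the traditional case (untested sketch; a 
Content-length mailbox would need header parsing instead):

def iter_messages(path):
    message_lines = []
    with open(path) as mbox:
        for line in mbox:
            if line.startswith('From ') and message_lines:
                yield ''.join(message_lines)   # previous message is complete
                message_lines = []
            message_lines.append(line)
    if message_lines:
        yield ''.join(message_lines)           # last message in the file

That keeps only one message in memory at a time, which should sidestep 
the huge-file problem without any regex work.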
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: client-server parallellised number crunching

2011-04-26 Thread Chris Angelico
On Wed, Apr 27, 2011 at 5:55 AM, Hans Georg Schaathun
ge...@schaathun.net wrote:
 It is, of course, possible for the master thread upon processing the
 results, to requeue the tasks for any missing results, but it seems
 to me to be a cleaner solution if I could detect disconnects and
 requeue the tasks from the networking threads.  Is that possible
 using python sockets?

 Somebody will probably ask why I am not using one of the multiprocessing
 libraries.  I have tried at least two, and got trapped by the overhead
 of passing complex pickled objects across.

If I were doing this, I would devise my own socket-layer protocol and
not bother with pickling objects at all. The two ends would read and
write the byte stream and interpret it as data.

But question: Why are you doing major number crunching in Python? On
your quad-core machine, recode in C and see if you can do the whole
job without bothering the unreliable boxen at all.

Chris Angelico
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Reading Huge UnixMailbox Files

2011-04-26 Thread Nobody
On Tue, 26 Apr 2011 15:39:37 -0400, Brandon McGinty wrote:

 I'm trying to import hundreds of thousands of e-mail messages into a
 database with Python.
 However, some of these mailboxes are so large that they are giving
 errors when being read with the standard mailbox module.
 I created a buffered reader, that reads chunks of the mailbox, splits
 them using the re.split function with a compiled regexp, and imports
 each chunk as a message.
 The regular expression work is where the bottle-neck appears to be,
 based on timings.
 I'm wondering if there is a faster way to do this, or some other method
 that you all would recommend.

Consider using awk. In my experience, high-level languages tend to have
slower regex libraries than simple tools such as sed and awk.

E.g. the following script reads a mailbox on stdin and writes a separate
file for each message:

#!/usr/bin/awk -f
BEGIN {
        num = 0;
        ofile = "";
}

/^From / {
        if (ofile != "") close(ofile);
        ofile = sprintf("%06d.mbox", num);
        num ++;
}

{
        print > ofile;
}

It would be simple to modify it to start a new file after a given number
of messages or a given number of lines.

You can then read the resulting smaller mailboxes using your Python script.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: De-tupleizing a list

2011-04-26 Thread Ethan Furman

Hans Georg Schaathun wrote:


List comprehension is understood even by readers with no experience
with python.


There's nothing magically understandable about a list comp -- the first 
time I saw one (which was in Python), I had to learn about them.


~Ethan~

--
http://mail.python.org/mailman/listinfo/python-list


Re: client-server parallellised number crunching

2011-04-26 Thread Dan Stromberg
On Tue, Apr 26, 2011 at 12:55 PM, Hans Georg Schaathun
ge...@schaathun.net wrote:
 I wonder if anyone has any experience with this ...

 I try to set up a simple client-server system to do some number
 crunching, using a simple ad hoc protocol over TCP/IP.  I use
 two Queue objects on the server side to manage the input and the output
 of the client process.  A basic system running seemingly fine on a single
 quad-core box was surprisingly simple to set up, and it seems to give
 me a reasonable speed-up of a factor of around 3-3.5 using four client
 processes in addition to the master process.  (If anyone wants more
 details, please ask.)

 Now, I would like to use remote hosts as well, more precisely, student
 lab boxen which are rather unreliable.  By experience I'd expect to
 lose roughly 4-5 jobs in 100 CPU hours on average.  Thus I need some
 way of detecting lost connections and requeue unfinished tasks,
 avoiding any serious delays in this detection.  What is the best way to
 do this in python?

 It is, of course, possible for the master thread upon processing the
 results, to requeue the tasks for any missing results, but it seems
 to me to be a cleaner solution if I could detect disconnects and
 requeue the tasks from the networking threads.  Is that possible
 using python sockets?

 Somebody will probably ask why I am not using one of the multiprocessing
 libraries.  I have tried at least two, and got trapped by the overhead
 of passing complex pickled objects across.  Doing it myself has at least
 helped me clarify what can be parallelised effectively.  Now,
 understanding the parallelisable subproblems better, I could try again,
 if I can trust that these libraries can robustly handle lost clients.
 That I don't know if I can.

You probably should assign a unique identifier to each piece of work,
and implement two timeouts - one on your socket, using select or poll
or similar, and one for the pieces of work based on the identifier.

http://gengnosis.blogspot.com/2007/01/level-triggered-and-edge-triggered.html
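
Very roughly, something like this per connection (untested sketch; 
pending and work_queue stand for whatever bookkeeping you already have, 
and handle_result is a stub):

import select, time

TASK_TIMEOUT = 300.0                     # assumed per-task deadline, seconds

def handle_result(task_id, data):        # stub; replace with real handling
    print(task_id, data)

def watch(sock, task_id, pending, work_queue):
    deadline = time.time() + TASK_TIMEOUT
    while True:
        readable, _, _ = select.select([sock], [], [], 1.0)
        if readable:
            data = sock.recv(4096)
            if not data:                          # client went away
                work_queue.put(pending[task_id])  # requeue unfinished task
                return
            handle_result(task_id, data)   # a real protocol would buffer
            return                         # until a full message arrived
        if time.time() > deadline:                # client too slow or hung
            work_queue.put(pending[task_id])
            return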
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: client-server parallellised number crunching

2011-04-26 Thread Dan Stromberg
On Tue, Apr 26, 2011 at 1:20 PM, Chris Angelico ros...@gmail.com wrote:

 But question: Why are you doing major number crunching in Python? On
 your quad-core machine, recode in C and see if you can do the whole
 job without bothering the unreliable boxen at all.

Hmm, or try Cython or PyPy.  ^_^

Here's that graph again:
http://stromberg.dnsalias.org/~dstromberg/backshift/performance/

I'd suggest that rewriting an entire software system in C because of
one inner loop, is overkill.  'better to rewrite just the inner loop
(if needed after profiling), and leave the rest in Python.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: client-server parallellised number crunching

2011-04-26 Thread Chris Angelico
On Wed, Apr 27, 2011 at 6:33 AM, Dan Stromberg drsali...@gmail.com wrote:
 On Tue, Apr 26, 2011 at 1:20 PM, Chris Angelico ros...@gmail.com wrote:

 But question: Why are you doing major number crunching in Python? On
 your quad-core machine, recode in C and see if you can do the whole
 job without bothering the unreliable boxen at all.

 Hmm, or try Cython or PyPy.  ^_^

Sure, or that. I automatically think in terms of coding in C++ for
performance, but that's because I'm fluent in it. If you're not, then
yep, PyPy or Cython will do better.

Chris Angelico
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: client-server parallellised number crunching

2011-04-26 Thread Hans Georg Schaathun
On Tue, Apr 26, 2011 at 1:20 PM, Chris Angelico ros...@gmail.com wrote:
 But question: Why are you doing major number crunching in Python? On
 your quad-core machine, recode in C and see if you can do the whole
 job without bothering the unreliable boxen at all.

The reason is very simple.  I cannot afford the time to code it in C.
Furthermore, the work is research and the system is experimental, 
making the legibility of the code paramount.

On Tue, 26 Apr 2011 13:33:25 -0700, Dan Stromberg
  drsali...@gmail.com wrote:
:  I'd suggest that rewriting an entire software system in C because of
:  one inner loop, is overkill.  'better to rewrite just the inner loop
:  (if needed after profiling), and leave the rest in Python.

Well, that's the other reason.  The most intense number crunching in
this part of the project is basic vector and matrix operations done in 
numpy.  AFAIU that means I use exactly the same libraries underneath as
I would have done in C.  Please correct me if I am wrong.

I could run a profiler and squeeze out some performance by coding
another couple of components in C, but I cannot afford the programming
time whereas I can afford to waste the CPU cycles.  And once I get the
client/server parallellisation working, I shall be able to reuse it at
negligible cost on other subproblems, whereas the profiling and C
reimplementation would cost almost as much time for every subsystem.

Does that answer your question, Chris?


-- 
:-- Hans Georg
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Development tools and practices for Pythonistas

2011-04-26 Thread Dan Stromberg
On Tue, Apr 26, 2011 at 11:04 AM, Jean-Michel Pichavant
jeanmic...@sequans.com wrote:
 You can have a look at SVN and bugzilla, they are free SCM & bug tracker
 applications.
 Make sure it's worth the pain though, these tools are not that easy to
 administrate (the usage is pretty simple).

http://trac.edgewall.org/ is purportedly pretty easy to set up - I've
only used it, not set it up.  Trac gives you SVN and an issue tracker.
 It has plugins for other source control systems.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Reading Huge UnixMailbox Files

2011-04-26 Thread Dan Stromberg
On Tue, Apr 26, 2011 at 1:23 PM, Nobody nob...@nowhere.com wrote:
 E.g. the following script reads a mailbox on stdin and writes a separate
 file for each message:

        #!/usr/bin/awk -f
        BEGIN {
                num = 0;
                ofile = "";
        }

        /^From / {
                if (ofile != "") close(ofile);
                ofile = sprintf("%06d.mbox", num);
                num ++;
        }

        {
                print > ofile;
        }

For the archive: This assumes traditional mbox.  A SysV-ish sendmail,
for example, may not like it.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: De-tupleizing a list

2011-04-26 Thread Hans Georg Schaathun
On Tue, 26 Apr 2011 13:37:40 -0700, Ethan Furman
  et...@stoneleaf.us wrote:
:  Hans Georg Schaathun wrote:
:  List comprehension is understood even by readers with no experience
:  with python.
: 
:  There's nothing magically understandable about a list comp -- the first 
:  time I saw one (which was in Python), I had to learn about them.

Well, there is a first time for everything.
For all the other proposals, the first time is bound to be in python.

List comprehension is found in many languages, as just list
comprehension, and rather quickly comprehensible by analogy 
to anyone familiar with set comprehension in mathematics.
The syntax is slightly different, but python's use of plain
English keywords makes the transition fairly simple.

(-: and I did not say by /all/ readers :-)

-- 
:-- Hans Georg
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: client-server parallellised number crunching

2011-04-26 Thread Chris Angelico
On Wed, Apr 27, 2011 at 6:47 AM, Hans Georg Schaathun h...@schaathun.net 
wrote:
 Does that answer your question, Chris?

Yup! It does. :)

Chris Angelico
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Case study: debugging failed assertRaises bug

2011-04-26 Thread Raymond Hettinger
On Apr 25, 11:05 pm, Steven D'Aprano steve
+comp.lang.pyt...@pearwood.info wrote:
 I've just spent two hours banging my head against what I *thought*
 (wrongly!) was a spooky action-at-a-distance bug in unittest, so I
 thought I'd share it with anyone reading.

Thanks for telling your story.
I'm sure the lessons learned
will be helpful to your readers.


Raymond
twitter: @raymondh
-- 
http://mail.python.org/mailman/listinfo/python-list


2.X functools.update_wrapper dislikes missing function attributes

2011-04-26 Thread samwyse
I noticed a behavior in Jython 2.5.2 that's arguably an implementation
bug, but I'm wondering if it's something to be fixed for all versions
of Python.  I was wanting to decorate a Java instance method, and
discovered that it didn't have a __module__ attribute.  This caused
the following message:

  File "C:\jython2.5.2\Lib\functools.py", line 33, in update_wrapper
setattr(wrapper, attr, getattr(wrapped, attr))
AttributeError: 'instancemethod' object has no attribute '__module__'

The relevant code is:
for attr in assigned:
setattr(wrapper, attr, getattr(wrapped, attr))
for attr in updated:
getattr(wrapper, attr).update(getattr(wrapped, attr, {}))

Note that attributes to be updated get a default value.  I'm proposing
that attributes to be assigned do the same, most likely an empty
string.  A non-string value (such as None) could break anything
expecting a string value, so it seems like a bad idea.  Python 3.2
catches AttributeError and passes.  I don't like this solution.  While
it prevents any attributes from being added to the wrapper, the
wrapper likely has its own values (at least for the default
attributes) and using those values could cause confusion.
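
In other words, something like this (a sketch of the proposed change, 
not the current stdlib code):

for attr in assigned:
    setattr(wrapper, attr, getattr(wrapped, attr, ''))   # default added
for attr in updated:
    getattr(wrapper, attr).update(getattr(wrapped, attr, {}))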

Any opinions?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: De-tupleizing a list

2011-04-26 Thread Raymond Hettinger
On Apr 25, 8:28 pm, Gnarlodious gnarlodi...@gmail.com wrote:
 I have an SQLite query that returns a list of tuples:

 [('0A',), ('1B',), ('2C',), ('3D',),...

 What is the most Pythonic way to loop through the list returning a
 list like this?:

 ['0A', '1B', '2C', '3D',...

You could unpack the 1-tuple the same way you would with a 2-tuple.

>>> result = [('0A',), ('1B',), ('2C',), ('3D',)]
>>> for elem, in result:
        print elem


0A
1B
2C
3D


Raymond
http://twitter.com/raymondh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: 2.X functools.update_wrapper dislikes missing function attributes

2011-04-26 Thread samwyse
I just noticed an old issue that relate to this: 
http://bugs.python.org/issue3445

This dates back to 2008 and is marked as fixed, but my copies of
Python 2.5.4 and 2.7.1 don't seem to implement it.  I'll try to dig
further.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: client-server parallellised number crunching

2011-04-26 Thread geremy condra
On Tue, Apr 26, 2011 at 12:55 PM, Hans Georg Schaathun
ge...@schaathun.net wrote:
 I wonder if anyone has any experience with this ...

 I try to set up a simple client-server system to do some number
 crunching, using a simple ad hoc protocol over TCP/IP.  I use
 two Queue objects on the server side to manage the input and the output
 of the client process.  A basic system running seemingly fine on a single
 quad-core box was surprisingly simple to set up, and it seems to give
 me a reasonable speed-up of a factor of around 3-3.5 using four client
 processes in addition to the master process.  (If anyone wants more
 details, please ask.)

 Now, I would like to use remote hosts as well, more precisely, student
 lab boxen which are rather unreliable.  By experience I'd expect to
 lose roughly 4-5 jobs in 100 CPU hours on average.  Thus I need some
 way of detecting lost connections and requeue unfinished tasks,
 avoiding any serious delays in this detection.  What is the best way to
 do this in python?

 It is, of course, possible for the master thread upon processing the
 results, to requeue the tasks for any missing results, but it seems
 to me to be a cleaner solution if I could detect disconnects and
 requeue the tasks from the networking threads.  Is that possible
 using python sockets?

 Somebody will probably ask why I am not using one of the multiprocessing
 libraries.  I have tried at least two, and got trapped by the overhead
 of passing complex pickled objects across.  Doing it myself has at least
 helped me clarify what can be parallelised effectively.  Now,
 understanding the parallelisable subproblems better, I could try again,
 if I can trust that these libraries can robustly handle lost clients.
 That I don't know if I can.

Without knowledge of what you're doing it's hard to comment
intelligently, but I'd try something like CHAOS or OpenSSI to see if
you can't get what you need for free, if that doesn't do it then try
dropping a liveCD with Hadoop on it in each machine and running it
that way. If that can't work, try MPI. If you've gotten that far and
nothing does the trick then you're probably going to have to give more
details.

On a side note, there are a reasonable number of examples of how to do
similar things on a GPU in GPU Gems- depending on your needs you
might be able to just apply a copypasta solver ;)

Geremy Condra
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Terrible FPU performance

2011-04-26 Thread Terry Reedy

On 4/26/2011 3:27 PM, Dan Stromberg wrote:

 for 0 <= i < 1000:

  x *= 0.8
  #x += 0.01
  print x


In my WinXP (Athlon), 3.2 standard install

x=1.0
print(x)
for i in range(1000):
  x *= 0.8
  x += 0.01
print(x)

takes about 3 1/2 secs with addition commented out and about 4 when 
included as above, which is about what I would expect.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Development tools and practices for Pythonistas

2011-04-26 Thread Thomas Rachel

Am 26.04.2011 20:42, schrieb Algis Kabaila:


Thomas, have you tried bzr (Bazaar) and if so do you consider hg
(Mercurial) better?


I have played around with bzr, but afterwards more with hg, which gave me 
a better feeling (don't know why)...


Thomas
--
http://mail.python.org/mailman/listinfo/python-list


Re: Development tools and practices for Pythonistas

2011-04-26 Thread Ben Finney
Chris Angelico ros...@gmail.com writes:

 As other people have said, version control is very handy. I use git
 myself, but imho the choice of _which_ VCS you use is far less
 important than the choice of _whether_ to use one.

True enough. But the modern crop of first-tier VCSen – Bazaar, Git,
Mercurial – are the ones to choose from. Anyone recommending a VCS tool
that has poor merging support (such as Subversion or, heaven help us,
CVS) is doing the newcomer a disservice.

Learn the basics in all three of those, and learn one of them well. My
choice is Bazaar, because it has a very friendly command-line interface
and has excellent support for repositories created by all the others :-)

 def function(parms):
 # TODO: This should check if Foo matches Bar and shortcut the computation
 ...

 I have a very strict format: T, O, D, O, literal ASCII, always
 uppercase. Makes it easy to grep (and I try to avoid todo in
 lower-case, which means I can use a case-insensitive search if I
 choose).

Note that following that specific convention will match the default in
many programmers's text editors for highlighting those entries to bring
them to the programmer's attention. The defaults for PyLint also report
such comments in the code.

 Ultimately, it all comes down to discipline, and how important the
 project is to you. At work, I have a lot of disciplines; we have a
 wiki where stuff gets documented, we have source control, we have
 daily backups (as well), etc, etc, etc. For little home projects, it's
 not usually worth the effort. Take your pick, where do you want to go?

The two practices above – use a modern VCS, maintain TODO items such
that the computer can report them automatically – are so useful and so
inexpensive that I think anyone aspiring to become a good programmer is
foolish if they omit them on any project.

-- 
 \   “If you make people think they're thinking, they'll love you; |
  `\ but if you really make them think, they'll hate you.” —Donald |
_o__) Robert Perry Marquis |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Case study: debugging failed assertRaises bug

2011-04-26 Thread Ben Finney
Steven D'Aprano steve+comp.lang.pyt...@pearwood.info writes:

 I've just spent two hours banging my head against what I *thought* 
 (wrongly!) was a spooky action-at-a-distance bug in unittest, so I 
 thought I'd share it with anyone reading.

Much appreciated. I am experiencing a tear-my-hair-out test failure
which sounds exactly the same as the behaviour you describe, so you've
given me motivation to try again at figuring it out.

 (1) assertRaises REALLY needs a better error message. If not a custom
 message, at least it should show the result it got instead of an
 exception.

+1

Is this one of the many improvements in Python 3.2's ‘unittest’ that
Michael Foord presided over? Or are we still stuck with the terrible
behaviour of ‘assertRaises’?

 (4) Unit tests should test one thing, not four. I thought my loop over 
 four different bad input arguments was one conceptual test, and that 
 ended up biting me. If the failing test was testBadArgTypeStr I would 
 have realised what was going on much faster. 

For test cases where you want to run the same test against several sets
of data, and have each combination of test-plus-data run as an
independent test identified in the report, I highly recommend
‘testscenarios’ URL:http://pypi.python.org/pypi/testscenarios.

 (5) There's only so many tiny little tests you can write before your
 head explodes, so I'm going to keep putting "for arg in spam" loops in
 my tests. But now I know to unroll those suckers into individual tests
 on the first sign of trouble.

Use ‘testscenarios’ to do that without repeated code.
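
From memory, the shape is something like this (check the package docs; 
my_function here is just a stand-in for the code under test):

import unittest
from testscenarios import TestWithScenarios

def my_function(arg):              # stand-in for the real code under test
    return int(arg)

class TestBadArgTypes(TestWithScenarios):
    scenarios = [
        ('none', {'bad_arg': None}),
        ('list', {'bad_arg': []}),
        ('dict', {'bad_arg': {}}),
    ]

    def test_raises_typeerror(self):
        self.assertRaises(TypeError, my_function, self.bad_arg)

if __name__ == '__main__':
    unittest.main()

Each scenario shows up in the test run as its own test, so a failure 
tells you which input broke, without unrolling the loop by hand.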

-- 
 \“But it is permissible to make a judgment after you have |
  `\examined the evidence. In some circles it is even encouraged.” |
_o__)—Carl Sagan, _The Burden of Skepticism_, 1987 |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Automatic placement of a text box? ie empty legend [matplotlib]

2011-04-26 Thread C Barrington-Leigh
The automatic placement functionality of legend() is nice. I'd like to
make use of it to place a box with just a title or title and comment
(ie, some text) but no lines or legend entries.  Does anyone know a
way to do this?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Development tools and practices for Pythonistas

2011-04-26 Thread Algis Kabaila
On Wednesday 27 April 2011 09:41:53 Ben Finney wrote:
 Chris Angelico ros...@gmail.com writes:
  As other people have said, version control is very handy. I
  use git myself, but imho the choice of _which_ VCS you use
  is far less important than the choice of _whether_ to use
  one.
 
 True enough. But the modern crop of first-tier VCSen –
 Bazaar, Git, Mercurial – are the ones to choose from.
 Anyone recommending a VCS tool that has poor merging
 support (such as Subversion or, heaven help us, CVS) is
 doing the newcomer a disservice.
 
All golden advice!  A few things were not mentioned:

1. Work on an existing project that is based on Python. Ubuntu's 
Launchpad provides a great environment for hosting a project, and 
there are many existing projects in which to participate.  Even 
Bazaar's GUI version is programmed in Python, with PyQt for the 
GUI proper.  You will get a lot out of participating in an 
existing project, probably a lot more than you put in.  
Ultimately, it is how much you put in that is of most benefit 
to you yourself.

2. I did not see any questions or suggestions about the platform 
to work from.  Your experience, interests and computer platform 
will determine what is useful to you and what is less so.

3. Finally, it is important to have a project about which you 
can muster enough enthusiasm to persist.  Which project that is 
depends very much on item 2.

May the favourable wind fill your sails,

OldAl.
-- 
Algis
http://akabaila.pcug.org.au/StructuralAnalysis.pdf
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [OT] Comparing VCS tools (was Development tools and practices for Pythonistas)

2011-04-26 Thread Tim Chase

On 04/26/2011 01:42 PM, Algis Kabaila wrote:

Thomas, have you tried bzr (Bazaar) and if so do you consider hg
(Mercurial) better?

And why is it better?   (bzr is widely used in ubuntu, which is
my favourite distro at present).


Each of the main three (bzr, hg, git) has advantages and 
disadvantages.  As Ben (and others?) mentioned, it's best to learn 
one of these instead of starting with something like Subversion 
or worse (CVS or, worse still, *shudder* MS Visual SourceSafe).



Bazaar (bzr)
============
launchpad.net is popular for hosting
Pros:
- some Ubuntu interactions (such as launchpad) easier
- a rigorous focus on correctness
- written in Python (with a small optional bit of C)
- easy-to-use interface (CVS-ish)
- good cross-platform support

Cons:
- was slow, though I understand they've worked on improving this

Protocols:
- custom/smart protocol
- http
- sftp
- ftp
- rsync (via plugin)


Mercurial (hg)
==============
BitBucket is popular for hosting
Pros:
- speedy
- written in Python (with a small optional bit of C)
- easy-to-use interface (CVS-ish)
- fairly compact repositories
- EXCELLENT documentation via online book
- chosen by Python as the repository of choice
- good cross-platform support

Cons:
- no biggies that I've found

Protocols:
- http
- ssh


Git (git)
=========
GitHub is popular for hosting
Pros:
- a *lot* of popular projects use it (Linux kernel)
- fast
- fairly compact repositories
- good documentation (though somewhat scattered)

Cons:
- interface diverges from the CVS standards
- (was?) not native on
- repositories require periodic maintenance using git gc
- Win32 support is/was a little clunky
- interface was under tumultuous change for a while (though it 
seems to have stabilized now)


Protocols:
- custom/smart protocol
- http
- sftp
- ftp


So that said, I've become a Mercurial user because the interface 
was close to SVN which I used previously, and it was speedy on my 
older machines.  If bzr has come up to comparable speed, I'd be 
game to probe it again.  I just don't care for git's command-line 
UI, but that's a personal preference thing (just like I prefer 
vi/vim over emacs, but acknowledge there are lots of smart folks 
on the other side, too).


-tkc

For at least hg vs. git, see
http://stackoverflow.com/questions/1598759/git-and-mercurial-compare-and-contrast
--
http://mail.python.org/mailman/listinfo/python-list


Re: [OT] Comparing VCS tools

2011-04-26 Thread Ben Finney
Tim Chase python.l...@tim.thechases.com writes:

 Bazaar (bzr)
 ============
 launchpad.net is popular for hosting
 Pros:
 - some Ubuntu interactions (such as launchpad) easier
 - a rigorous focus on correctness
 - written in Python (with a small optional bit of C)
 - easy-to-use interface (CVS-ish)
 - good cross-platform support

- Launchpad is free software (so anyone could run their own instance to
  host Bazaar repositories).

- Merges preserve all revision data, but history displays don't show
  merged revisions by default. (This obviates most of the need for
  history-altering commands in other VCSen to “tidy up” the revision
  data: it's tidy already by default.)

- Very smooth interaction with “foreign” VCS repositories, especially
  Subversion.

- Supports a wide range of workflows, without forcing peers to the same
  workflow.

  - Especially: Supports “centralised” (Subversion-style) VCS workflow
without losing any of the distributed advantages.

- Treats files, and filename changes, as first-class citizens in the
  revision data. (Git and some others use fallible heuristics to figure
  those out after-the-fact instead of recording the data.)

 Cons:
 - was slow, though I understand they've worked on improving this

Right, that hasn't been a count against Bazaar for the last several
versions (since 2009 at least). Bazaar is easily fast enough for
anything people use, say, Mercurial for.

Cons:

- Repository formats were changing frequently for a while, leaving a
  legacy of confusion (fixed now, but the confusion is still a black
  mark).

- Limited developer base, because of Canonical's community-hostile
  “contribution agreement” requirements.

- Currently only one big public full-featured hosting of Bazaar
  repositories: Launchpad.net.

- The most advanced web UI to browse Bazaar repositories, “loggerhead”,
  is somewhat lacking compared to the ones at Git and Mercurial hosting
  sites.

 Mercurial (hg)
 ==============
 BitBucket is popular for hosting
 Pros:
 - speedy

This isn't a significant advantage for Mercurial against Git (which is
much faster) or Bazaar (which is easily as fast as Mercurial).

 - fairly compact repositories

Again, this isn't a significant advantage for Mercurial over either of
Git or Bazaar.

 - chosen by Python as the repository of choice

As I understand it, the decision was down to Bazaar or Mercurial, which
were each close enough in technical and workflow assessment that the
decision was made on personal preference. Fair enough, and it does give
another reason *now* to use Mercurial, but not due to any particular
advantage in Mercurial.

 Cons:
 - no biggies that I've found

- (Anecdotal) Merge algorithm sometimes fails catastrophically.

- Merged revisions aren't hidden, leading users to alter history.

 Protocols:
 - http
 - ssh


 Git (git)
 =========
 GitHub is popular for hosting

Unlike GitHub, Gitorious is a free-software Git hosting provider.

 Pros:
 - a *lot* of popular projects use it (Linux kernel)
 - fast
 - fairly compact repositories
 - good documentation (though somewhat scattered)

Hmm. Good documentation can't really make up for the rampant NIH
syndrome: there is a lot that shouldn't *need* so much documentation if
the interface had been better designed from the start. I wouldn't count
this as a pro for Git.

 Cons:
 - interface diverges from the CVS standards

- Terminology and command-line API gratuitously arcane (I'm reminded of
  GNU Arch, *shudder*).

- Merged revisions aren't hidden, leading users to alter history.

 So that said, I've become a Mercurial user because the interface was
 close to SVN which I used previously, and it was speedy on my older
 machines. If bzr has come up to comparable speed, I'd be game to probe
 it again.

I recommend doing so.

-- 
 \  “If I haven't seen as far as others, it is because giants were |
  `\   standing on my shoulders.” —Hal Abelson |
_o__)  |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Comparing VCS tools (was Development tools and practices for Pythonistas)

2011-04-26 Thread rusi
On Apr 27, 6:44 am, Tim Chase python.l...@tim.thechases.com wrote:
 On 04/26/2011 01:42 PM, Algis Kabaila wrote:

  Thomas, have you tried bzr (Bazaar) and if so do you consider hg
  (Mercurial) better?

  And why is it better?   (bzr is widely used in ubuntu, which is
  my favourite distro at present).

 Each of the main three (bzr, hg, git) has advantages and
 disadvantages.  As Ben (and others?) mentioned, it's best to learn
 one of these instead of starting with something like Subversion
 or worse (CVS or, worse still, *shudder* MS Visual SourceSafe).

[pros and cons of bzr, git, mercurial snipped]

The distributed revision control page on wikipedia (bottom)
http://en.wikipedia.org/wiki/Distributed_revision_control
in addition to these, mentions fossil -- something I had not heard of
till now.


Its claims seem to match the OP's lightweight requirements more closely
than any of the others:

(from above link)
---
Fossil is cross-platform; its source code compiles on Linux, Mac OS X
and Microsoft Windows. It is not only capable of distributed version
control like Git and Mercurial but also supports distributed bug
tracking, a distributed wiki and a distributed blog mechanism all in a
single integrated package. With its built-in and easy-to-use web
interface, Fossil simplifies project tracking and promotes situational
awareness. A user may simply type "fossil ui" from within any check-
out and Fossil automatically opens the user's web browser in a page
that gives detailed history and status information on that project.
--
Well, so much for the claims :-)

What are the facts? Does anyone have any experience with it?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Terrible FPU performance

2011-04-26 Thread David Cournapeau
On Wed, Apr 27, 2011 at 4:14 AM, Dan Goodman dg.gm...@thesamovar.net wrote:
 Hi,

 On 26/04/2011 15:40, Mihai Badoiu wrote:
 I have terrible performance for multiplication when one number gets very
 close to zero.  I'm using cython by writing the following code:

 This might be an issue with denormal numbers:

 http://en.wikipedia.org/wiki/Denormal_number

 I don't know much about them though, so I can't advise any further than
 that...

This indeed sounds like it. Mihai, which CPU are you using? Pentium 4s
are especially known to have terrible (read: order-of-magnitude slower)
performance with denormal numbers.

There is unfortunately no simple way to know whether a float is
denormal or not in Python, but since you are using Cython, if you are
under POSIX you should be able to use fpclassify() to check this.

From there, if you see a difference between cython/python and C, it
will be easier to debug.
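
For a quick sanity check from the Python side, one rough approach (a
sketch, assuming IEEE 754 doubles and Python 2.6+, where
sys.float_info.min is the smallest positive *normal* double, so any
non-zero magnitude below it is denormal):

    import sys

    def is_denormal(x):
        # sys.float_info.min is DBL_MIN, the smallest positive normalised
        # double; anything non-zero but smaller than that is denormal.
        return x != 0.0 and abs(x) < sys.float_info.min

    print(is_denormal(1e-310))   # True: well below DBL_MIN (~2.2e-308)
    print(is_denormal(0.8))      # False: an ordinary normal double
    print(is_denormal(0.0))      # False: zero is not denormal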

cheers,

David
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Terrible FPU performance

2011-04-26 Thread Alec Taylor
What's an FPU?

On Tue, Apr 26, 2011 at 11:40 PM, Mihai Badoiu mbad...@gmail.com wrote:
 Hi,
 I have terrible performance for multiplication when one number gets very
 close to zero.  I'm using cython by writing the following code:
     cdef int i
     cdef double x = 1.0
     for 0 <= i < 1000:
         x *= 0.8
         #x += 0.01
     print x
 This code runs much, much slower (20+ times slower) with the line x += 0.01
 uncommented.  I looked at the disassembled code and it looks correct.
  Moreover, it's just a few lines, and by writing the same code in C (without
 Python on top) I get the same code, but it's much faster.  I've also tried
 using SSE, but I get exactly the same behavior.  The best candidate that I
 see so far is that Python sets up the FPU in a different state than C.
 Any advice on how to solve this performance problem?
 thanks!
 --
 http://mail.python.org/mailman/listinfo/python-list


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Terrible FPU performance

2011-04-26 Thread Chris Angelico
On Wed, Apr 27, 2011 at 3:11 PM, Alec Taylor alec.tayl...@gmail.com wrote:
 What's an FPU?

Floating Point Unit, the part of your computer's processor that
handles floating-point mathematics. Integer calculations are done in
the main CPU, but the FPU (which these days is part of the same hunk
of silicon, but used to be a separate purchase) gets all the really
hard work farmed off to it.

Gross oversimplification, but hopefully helpful.

Chris Angelico
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Comparing VCS tools (was Development tools and practices for Pythonistas)

2011-04-26 Thread alex23
rusi rustompm...@gmail.com wrote:
 What's the facts? Anyone with any experiences on this?

No experience, but I'm rather torn over Fossil. On the one hand, it
feels like NIH writ large; on the other hand, it's a DVCS with Trac-
like features in a standalone executable less than 1MB in size...by
the author of sqlite.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Terrible FPU performance

2011-04-26 Thread rusi
On Apr 27, 10:11 am, Alec Taylor alec.tayl...@gmail.com wrote:
 What's an FPU?

http://lmgtfy.com/?q=fpu
-- 
http://mail.python.org/mailman/listinfo/python-list


[issue11682] PEP 380 reference implementation for 3.3

2011-04-26 Thread Yury Selivanov

Changes by Yury Selivanov yseliva...@gmail.com:


--
nosy: +Yury.Selivanov

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11682
___



[issue11918] Drop OS/2 and VMS support in Python 3.3

2011-04-26 Thread Stefan Krah

Stefan Krah stefan-use...@bytereef.org added the comment:

I wrote to the maintainer of vmspython, and he said this:

Python on VMS is actively maintained, for example you can take a look at
http://www.vmspython.org/ and http://www.vmspython.org/History
Our plan is to port 2.7 this year, then 3.x.


So the vmspython project looks active, but that still does not help
us of course. I hope Jean-François will comment here.

--
nosy: +pieronne, skrah

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11918
___



[issue11918] Drop OS/2 and VMS support in Python 3.3

2011-04-26 Thread Piéronne Jean-François

Piéronne Jean-François piero...@users.sourceforge.net added the comment:

How can we help you?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11918
___



[issue10517] test_concurrent_futures crashes with --with-pydebug on RHEL5 with Fatal Python error: Invalid thread state for this thread

2011-04-26 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 Not necessarily. You can have several interpreters (and therefore several 
 thread states) in a single thread, using Py_NewInterpreter(). It's used by 
 mod_wsgi and probably other software. If you overwrite the old value with the 
 new one, it may break such software.


OK, I didn't know. Better not to change that in that case.

 Would it be possible to cleanup the autoTLS mappings in PyOS_AfterFork() 
 instead?


Well, after a fork only the forking thread survives in the child, so
you'll be running on behalf of the child process's main - and only -
thread; by definition you can't access other threads' thread-specific
data, no?
As an alternate solution, I was thinking of calling
PyThread_delete_key_value(autoTLSkey) in the path of thread bootstrap,
i.e. starting in Modules/_threadmodule.c t_bootstrap. Obviously, this
should be done before calling _PyThreadState_Init, since it can also
be called from Py_NewInterpreter.
The problem is that it would require exporting autoTLSkey whose scope
is now limited to pystate.c (we could also create a small wrapper
function in pystate.c to delete the autoTLSkey, since it's already
done in PyThreadState_DeleteCurrent and PyThreadState_Delete).

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10517
___



[issue11918] Drop OS/2 and VMS support in Python 3.3

2011-04-26 Thread Jesús Cea Avión

Jesús Cea Avión j...@jcea.es added the comment:

Would it be possible to publish a notice on the Python Insider blog?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11918
___



[issue10154] locale.normalize strips - from UTF-8, which fails on Mac

2011-04-26 Thread Marc-Andre Lemburg

Marc-Andre Lemburg m...@egenix.com added the comment:

Piotr Sikora wrote:
 
 Piotr Sikora piotr.sik...@frickle.com added the comment:
 
 It's the same on OpenBSD (and I'm pretty sure it's true for other BSDs as 
 well).
 
 >>> locale.resetlocale()
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "/usr/local/lib/python2.6/locale.py", line 523, in resetlocale
     _setlocale(category, _build_localename(getdefaultlocale()))
 locale.Error: unsupported locale setting
 >>> locale._build_localename(locale.getdefaultlocale())
 'en_US.UTF8'
 
 Works fine with Marc-Andre's alias table fix.
 
 Any chances this will be eventually fixed in 2.x?

This can go into Python 2.7, and, of course, into the 3.x
branches.

--
title: locale.normalize strips - from UTF-8, which fails on Mac -> 
locale.normalize strips - from UTF-8, which fails on Mac

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10154
___



[issue9390] Error in sys.excepthook on windows when redirecting output of the script

2011-04-26 Thread Amaury Forgeot d'Arc

Amaury Forgeot d'Arc amaur...@gmail.com added the comment:

This post: 
http://stackoverflow.com/questions/3018848/cannot-run-python-script-on-windows-with-output-redirected
suggests that there is a difference between "python test.py > out.log" and 
"test.py > out.log".
It also suggests a change in the registry that fixed the problem for me some 
months ago. Can you try it?

--
nosy: +amaury.forgeotdarc

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9390
___


