Re: Obtaining user information

2011-12-10 Thread Cameron Simpson
On 10Dec2011 08:43, Hans Mulder  wrote:
| On 10/12/11 02:44:48, Tim Chase wrote:
| >Currently I can get the currently-logged-in-userid via getpass.getuser()
| >which would yield something like "tchase".
| >
| >Is there a cross-platform way to get the full username (such as from the
| >GECOS field of /etc/passwd or via something like NetUserGetInfo on Win32
| >so I'd get "Tim Chase" instead?
| 
| How about:
| pwd.getpwuid(os.getuid()).pw_gecos
| This will give you the GECOS field of /etc/passwd.
| I'd assume it contains "Tim Chase" for your account.

Everything up to the first comma is the convention, e.g. "Cameron Simpson, x2983".
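
For what it's worth, a minimal sketch along those lines (POSIX only; not
from the original post):

import os
import pwd

# Keep only the part of the GECOS field before the first comma; fall back
# to the login name if the field is empty.
entry = pwd.getpwuid(os.getuid())
full_name = entry.pw_gecos.split(',', 1)[0] or entry.pw_name
print(full_name)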
-- 
Cameron Simpson  DoD#743
http://www.cskk.ezoshosting.com/cs/

Microsoft - where "cross platform" means "runs in both Win95 and WinNT".
- Andy Newman 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Buffering of sys.stdout and sys.stderr in python3 (and documentation)

2011-12-10 Thread Geoff Bache
Hi Terry,

> The difference from 2.x should be in What's New in 3.0, except that the
> new i/o module is in 2.6, so it was not exactly new.

The io module existed in 2.6, but it was not used by default for
standard output and standard error. The only mention of this in
"What's New in 3.0" is in the section marked for changes that were
already in 2.6 (which is wrong in this case), and it notes only that
io.TextIOWrapper is now used, but not what implications that has for
its behaviour and backward compatibility.
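
For reference (not from the original post), a quick way to see this in 3.x,
where sys.stdout and sys.stderr are io.TextIOWrapper objects:

import io
import sys

# Inspect the wrapper type and its buffering behaviour.
print(type(sys.stdout))
print(sys.stdout.line_buffering)

# If needed (e.g. when output is piped), stdout can be re-wrapped with
# line buffering forced on.
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, line_buffering=True)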

> You might be able
> to find more in http://python.org/dev/peps/pep-3116/

I skimmed through it but couldn't find anything relevant. It seems
more "advanced" and implementation-focussed.

>
> You *should* be able to find sufficient info in the 3.x docs. If, after
> you get other responses (or not), you think the docs need upgrading,
> open an issue on the tracker at bugs.python.org with suggestions as
> specific as possible, including changed or new lines of text based on
> your experience and experiments.

OK, I'll do that if nobody points me at some existing docs here.

Regards,
Geoff Bache
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python 2 or 3

2011-12-10 Thread Enrico 'Henryx' Bianchi
Steven D'Aprano wrote:

> RHEL supports Python 3, it just doesn't provide Python 3.

True, but as you say later, the only method is to recompile. So, if I want 
to use Python 3 in a production environment like RHEL, I need:

 - A development environment similar to production (e.g. if I use RHEL 5 in
   production, I need at least CentOS 5.x);
 - To compile Python 3 in that development environment;
 - To write the Python app;
 - To release a *huge* package to install.

The only bright side is that you can freeze the versions of Python and the 
libraries, but then every update (e.g. a bug fix in a library) has to be done 
by hand.

> When installing, don't use "make install", as that will replace the
> system Python, instead use "make altinstall".

Good, I didn't know about this option.

> Then the command "python"
> will still refer to the system Python (probably Python 2.4 or 2.5?), and
> "python3" should refer to Python 3.x.

RHEL (and CentOS) 5.x use Python 2.4

> You shouldn't be learning programming on a production server :)

Of course, but if I want to use an application written in Python 3 in a 
production environment which doesn't support it, I have to prepare at least 
a development environment similar to production (OK, OK, with a VM this is 
simple, but I still need to keep track of the exception)

Enrico
P.S. An alternative may be cx_Freeze, but I don't know exactly how it works
P.P.S. Sorry to belabor the point, but I wanted to give my point of view 
because I've found myself in precisely this case
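
(Not from the original post: a rough sketch of a minimal cx_Freeze setup
script, just to give an idea of how it works; the names are illustrative and
cx_Freeze must be installed.)

# setup.py -- minimal cx_Freeze sketch
from cx_Freeze import setup, Executable

setup(
    name="myapp",
    version="0.1",
    description="Example frozen application",
    executables=[Executable("myapp.py")],
)

Running "python setup.py build" then produces a build/ directory containing
the frozen application together with its dependencies.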

-- 
http://mail.python.org/mailman/listinfo/python-list


WeakValueDict and threadsafety

2011-12-10 Thread Darren Dale
I am using a WeakValueDict in a way that is nearly identical to the
example at the end of 
http://docs.python.org/library/weakref.html?highlight=weakref#example
, where "an application can use objects IDs to retrieve objects that
it has seen before. The IDs of the objects can then be used in other
data structures without forcing the objects to remain alive, but the
objects can still be retrieved by ID if they do." My program is
multithreaded, so I added the necessary check for liveness that was
discussed at 
http://docs.python.org/library/weakref.html?highlight=weakref#weak-reference-objects
. Basically, I have:

import threading
import weakref

registry = weakref.WeakValueDictionary()
reglock = threading.Lock()

def get_data(oid):
    with reglock:
        data = registry.get(oid, None)
        if data is None:
            data = make_data()
            registry[id(data)] = data
    return data

I'm concerned that this is not actually thread-safe. When I no longer
hold strong references to an instance of data, at some point the
garbage collector will kick in and remove that entry from my registry.
How can I ensure the garbage collection process does not modify the
registry while I'm holding the lock?
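
(A small, self-contained illustration of the behaviour in question, relying
on CPython's reference counting; not from the original post.)

import weakref

class Data(object):
    pass

registry = weakref.WeakValueDictionary()
d = Data()
registry['x'] = d
print('x' in registry)   # True while a strong reference to d exists
del d                     # last strong reference gone
print('x' in registry)   # False: the callback removed the entry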

Thanks,
Darren
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Obtaining user information

2011-12-10 Thread Tim Chase

On 12/10/11 01:37, Cameron Simpson wrote:

On 09Dec2011 19:44, Tim Chase  wrote:
| Currently I can get the currently-logged-in-userid via
| getpass.getuser() which would yield something like "tchase".

_If_ you're on a terminal. _And_ that's exactly what you want.
Personally I need the name of geteuid() or getuid() more often.


yes, it deals with emailing so the local userid and full-name are 
what I want.



| Is there a cross-platform way to get the full username (such as from
| the GECOS field of /etc/passwd or via something like NetUserGetInfo
| on Win32 so I'd get "Tim Chase" instead?

Hmm. Doesn't windows have a posix layer?

   pwd.getpwuid(os.getuid())[4].split(',')[0]

is the best I've got. And it probably doesn't work on Windows :-(


well, that's a more readable version of my hand-crafted opening 
of /etc/passwd and parsing by hand, so thanks!  As you mention, 
the pwd module isn't available on Win32 so I still have to branch 
my code.  I found Tim Golden's suggestion in a comment on 
ActiveState[1] that gave this one-liner for Win32:


win32net.NetUserGetInfo (win32net.NetGetAnyDCName (), 
win32api.GetUserName (), 1)


By changing the "1" to a "20", one of the returned key/value 
pairs was "full_name" with the user's full name, so my code currently reads:


  import getpass
  import sys

  def get_user_info():
      "Return (userid, username) e.g. ('jsmith', 'John Smith')"
      userid = username = getpass.getuser()
      if sys.platform.lower().startswith("win"):
          try:
              import win32net, win32api
              USER_INFO_2 = 2
              username = win32net.NetUserGetInfo(
                  win32net.NetGetAnyDCName(),
                  win32api.GetUserName(),
                  USER_INFO_2,
                  )["full_name"] or username
          except ImportError:
              pass  # no win32* module, so default to userid
      else:
          import pwd
          username = pwd.getpwnam(userid).pw_gecos.split(',', 1)[0]
      return userid, username

It only addresses Win32 and Posix, but that's what I need for 
now.  Thanks again.


-tkc

[1]
http://code.activestate.com/recipes/66314-get-user-info-on-windows-for-current-user/






--
http://mail.python.org/mailman/listinfo/python-list


Re: I love the decorator in Python!!!

2011-12-10 Thread 88888 Dihedral
Just wrap the  exec() to spawn for fun. 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: I love the decorator in Python!!!

2011-12-10 Thread 88888 Dihedral
Wrapping functions to yield is somewhat like sub-threading in Erlang.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: WeakValueDict and threadsafety

2011-12-10 Thread Duncan Booth
Darren Dale  wrote:

> I'm concerned that this is not actually thread-safe. When I no longer
> hold strong references to an instance of data, at some point the
> garbage collector will kick in and remove that entry from my registry.
> How can I ensure the garbage collection process does not modify the
> registry while I'm holding the lock?

You can't, but it shouldn't matter.

So long as you have a strong reference in 'data' that particular object 
will continue to exist. Other entries in 'registry' might disappear while 
you are holding your lock but that shouldn't matter to you.

What is concerning though is that you are using `id(data)` as the key and 
then presumably storing that separately as your `oid` value. If the 
lifetime of the value stored as `oid` exceeds the lifetime of the strong 
references to `data` then you might get a new data value created with the 
same id as some previous value.

In other words I think there's a problem here, but nothing to do with the 
lock.
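
(A small sketch of that hazard, not from the original post; the result
depends on CPython's allocator, so treat it as illustrative only.)

class Data(object):
    pass

a = Data()
old_id = id(a)
del a                      # last strong reference gone; CPython frees it
b = Data()                 # may be allocated at the recycled address
print(old_id == id(b))     # frequently True in CPython, showing id() reuse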

-- 
Duncan Booth
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to test attribute existence of feedparser objects

2011-12-10 Thread xDog Walker
On Thursday 2011 December 08 01:34, HansPeter wrote:
> Hi,
>
> While using the feedparser library for downloading RSS feeds some of
> the blog entries seem to have no title.
>
>  File "build\bdist.win32\egg\feedparser.py", line 382, in __getattr__
> AttributeError: object has no attribute 'title'
>
> Is there a way to test the existence of an attribute?
>

From the Fine Manual for feedparser 5.1:

Testing for Existence

Feeds in the real world may be missing elements, even elements that are 
required by the specification. You should always test for the existence of an 
element before getting its value. Never assume an element is present.

Use standard Python dictionary functions such as has_key to test whether an 
element exists.
Testing if elements are present

>>> import feedparser
>>> d = feedparser.parse('http://feedparser.org/docs/examples/atom10.xml')
>>> d.feed.has_key('title')
True
>>> d.feed.has_key('ttl')
False
>>> d.feed.get('title', 'No title')
u'Sample feed'
>>> d.feed.get('ttl', 60)
60


-- 
I have seen the future and I am not in it.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to test attribute existence of feedparser objects

2011-12-10 Thread Steven D'Aprano
On Sat, 10 Dec 2011 09:19:31 -0800, xDog Walker wrote:

> Use standard Python dictionary functions such as has_key to test whether
> an element exists.

Idiomatic Python code today no longer uses has_key.

# Was:
d.feed.has_key('title')

# Now preferred
'title' in d.feed



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multiprocessing bug, is information ever omitted from a traceback?

2011-12-10 Thread John Ladasky
On Dec 9, 9:00 pm, Terry Reedy  wrote:
> On 12/9/2011 6:14 PM, John Ladasky wrote:
>
> >http://groups.google.com/group/comp.lang.python/browse_frm/thread/751...
>
> > I'm programming in Python 2.6 on Ubuntu Linux 10.10, if it matters.
>
> It might, as many bugs have been fixed since.
> Can you try the same code with the most recent 2.x release, 2.7.2?
> Do you have working and non-working code that you can publicly release?
> Can you reduce the size and dependencies so the examples are closer to
> 'small' than 'large'? And in any case, self-contained?
>
> In my first response, I said you might have found a bug. A bogus
> exception message qualifies. But to do much, we need minimal good/bad
> examples that run or not on a current release (2.7.2 or 3.2.2).
>
> --
> Terry Jan Reedy

All right, Terry, you've convinced me to look.  This will take some
time.  I'll hack away at the two versions of my 500-line programs,
until I have a minimal example.

Why did you specify Python 2.7.2, instead of the 2.7.6 version that is
being offered to me by Ubuntu Software Center?  Does it matter?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: WeakValueDict and threadsafety

2011-12-10 Thread Darren Dale
On Dec 10, 11:19 am, Duncan Booth 
wrote:
> Darren Dale  wrote:
> > I'm concerned that this is not actually thread-safe. When I no longer
> > hold strong references to an instance of data, at some point the
> > garbage collector will kick in and remove that entry from my registry.
> > How can I ensure the garbage collection process does not modify the
> > registry while I'm holding the lock?
>
> You can't, but it shouldn't matter.
>
> So long as you have a strong reference in 'data' that particular object
> will continue to exist. Other entries in 'registry' might disappear while
> you are holding your lock but that shouldn't matter to you.
>
> What is concerning though is that you are using `id(data)` as the key and
> then presumably storing that separately as your `oid` value. If the
> lifetime of the value stored as `oid` exceeds the lifetime of the strong
> references to `data` then you might get a new data value created with the
> same id as some previous value.
>
> In other words I think there's a problem here, but nothing to do with the
> lock.

Thank you for the considered response. In reality, I am not using
id(data). I took that from the example in the documentation at
python.org in order to illustrate the basic approach, but it looks
like I introduced an error in the code. It should read:

def get_data(oid):
    with reglock:
        data = registry.get(oid, None)
        if data is None:
            data = make_data(oid)
            registry[oid] = data
    return data

Does that look better? I am actually working on the h5py project
(bindings to hdf5), and the oid is an hdf5 object identifier.
make_data(oid) creates a proxy object that stores a strong reference
to oid.

My concern is that the garbage collector is modifying the dictionary
underlying WeakValueDictionary at the same time that my multithreaded
code is trying to access it, producing a race condition. This morning
I wrote a synchronized version of WeakValueDictionary (actually
implemented in cython):

class _Registry:

    def __cinit__(self):
        def remove(wr, selfref=ref(self)):
            self = selfref()
            if self is not None:
                self._delitem(wr.key)
        self._remove = remove
        self._data = {}
        self._lock = FastRLock()

    __hash__ = None

    def __setitem__(self, key, val):
        with self._lock:
            self._data[key] = KeyedRef(val, self._remove, key)

    def _delitem(self, key):
        with self._lock:
            del self._data[key]

    def get(self, key, default=None):
        with self._lock:
            try:
                wr = self._data[key]
            except KeyError:
                return default
            else:
                o = wr()
                if o is None:
                    return default
                else:
                    return o

Now that I am using this _Registry class instead of
WeakValueDictionary, my test scripts and my actual program are no
longer producing segfaults.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multiprocessing bug, is information ever omitted from a traceback?

2011-12-10 Thread Andrew Berg
On 12/10/2011 11:53 AM, John Ladasky wrote:
> Why did you specify Python 2.7.2, instead of the 2.7.6 version that is
> being offered to me by Ubuntu Software Center?  Does it matter?
There is no Python 2.7.6. I think you have it confused with the version
2.7.2-6. If I'm not mistaken, that appended 6 has to do with packaging
and nothing at all to do with the software itself.

-- 
CPython 3.2.2 | Windows NT 6.1.7601.17640 | Thunderbird 7.0
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: WeakValueDict and threadsafety

2011-12-10 Thread 88888 Dihedral
On Sunday, December 11, 2011 1:56:38 AM UTC+8, Darren Dale wrote:
> On Dec 10, 11:19 am, Duncan Booth 
> wrote:
> > Darren Dale  wrote:
> > > I'm concerned that this is not actually thread-safe. When I no longer
> > > hold strong references to an instance of data, at some point the
> > > garbage collector will kick in and remove that entry from my registry.
> > > How can I ensure the garbage collection process does not modify the
> > > registry while I'm holding the lock?
> >
> > You can't, but it shouldn't matter.
> >
> > So long as you have a strong reference in 'data' that particular object
> > will continue to exist. Other entries in 'registry' might disappear while
> > you are holding your lock but that shouldn't matter to you.
> >
> > What is concerning though is that you are using `id(data)` as the key and
> > then presumably storing that separately as your `oid` value. If the
> > lifetime of the value stored as `oid` exceeds the lifetime of the strong
> > references to `data` then you might get a new data value created with the
> > same id as some previous value.
> >
> > In other words I think there's a problem here, but nothing to do with the
> > lock.
> 
> Thank you for the considered response. In reality, I am not using
> id(data). I took that from the example in the documentation at
> python.org in order to illustrate the basic approach, but it looks
> like I introduced an error in the code. It should read:
> 
> def get_data(oid):
>     with reglock:
>         data = registry.get(oid, None)
>         if data is None:
>             data = make_data(oid)
>             registry[oid] = data
>     return data
> 
> Does that look better? I am actually working on the h5py project
> (bindings to hdf5), and the oid is an hdf5 object identifier.
> make_data(oid) creates a proxy object that stores a strong reference
> to oid.
> 
> My concern is that the garbage collector is modifying the dictionary
> underlying WeakValueDictionary at the same time that my multithreaded
> code is trying to access it, producing a race condition. This morning
> I wrote a synchronized version of WeakValueDictionary (actually
> implemented in cython):
> 
> class _Registry:
> 
>     def __cinit__(self):
>         def remove(wr, selfref=ref(self)):
>             self = selfref()
>             if self is not None:
>                 self._delitem(wr.key)
>         self._remove = remove
>         self._data = {}
>         self._lock = FastRLock()
> 
>     __hash__ = None
> 
>     def __setitem__(self, key, val):
>         with self._lock:
>             self._data[key] = KeyedRef(val, self._remove, key)
> 
>     def _delitem(self, key):
>         with self._lock:
>             del self._data[key]
> 
>     def get(self, key, default=None):
>         with self._lock:
>             try:
>                 wr = self._data[key]
>             except KeyError:
>                 return default
>             else:
>                 o = wr()
>                 if o is None:
>                     return default
>                 else:
>                     return o
> 
> Now that I am using this _Registry class instead of
> WeakValueDictionary, my test scripts and my actual program are no
> longer producing segfaults.
I'd prefer to get iterators and iterables that can accept a global signal 
called a clock to replace this CS mess. 

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multiprocessing bug, is information ever omitted from a traceback?

2011-12-10 Thread Chris Angelico
On Sun, Dec 11, 2011 at 5:38 AM, Andrew Berg  wrote:
> On 12/10/2011 11:53 AM, John Ladasky wrote:
>> Why did you specify Python 2.7.2, instead of the 2.7.6 version that is
>> being offered to me by Ubuntu Software Center?  Does it matter?
> There is no Python 2.7.6. I think you have it confused with the version
> 2.7.2-6. If I'm not mistaken, that appended 6 has to do with packaging
> and nothing at all to do with the software itself.

On 12/9/2011 6:14 PM, John Ladasky wrote:
> I'm programming in Python 2.6 on Ubuntu Linux 10.10, if it matters.

As with my Ubuntu 10.10 - it's version 2.6.6. (I also altinstalled 3.3
built straight from Mercurial, which is approximately as new a Python
as one can have, but that doesn't count.)

There's a few differences between 2.6 and 2.7; not usually enough to
be concerned about in daily use, but when dealing with weird issues,
it helps to have the latest release.

ChrisA
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multiprocessing bug, is information ever omitted from a traceback?

2011-12-10 Thread John Ladasky
On Dec 10, 10:38 am, Andrew Berg  wrote:
> On 12/10/2011 11:53 AM, John Ladasky wrote:
> > Why did you specify Python 2.7.2, instead of the 2.7.6 version that is
> > being offered to me by Ubuntu Software Center?  Does it matter?
>
> There is no Python 2.7.6. I think you have it confused with the version
> 2.7.2-6. If I'm not mistaken, that appended 6 has to do with packaging
> and nothing at all to do with the software itself.
>
> --
> CPython 3.2.2 | Windows NT 6.1.7601.17640 | Thunderbird 7.0

How annoying, here's how Ubuntu Software Center described it:

"Version: 2.7-6 (python2.7)"

And here it is in the Synaptic Package Manager:

"package: python2.7-minimal, installed version: 2.7-6"

At first I considered that this truncated name might be the
consequence of a Linux bug.  But looking up and down my list of
installed software in both Ubuntu Software Center and Synaptic, I can
find version names which extend arbitrarily, such as my wxPython
installation, for which the version number reads
"2.8.11.0-0ubuntu4.1".

Now my hypothesis is that someone manually enters the revision numbers
into the Linux database, and they made a typo.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: WeakValueDict and threadsafety

2011-12-10 Thread Duncan Booth
Darren Dale  wrote:

> On Dec 10, 11:19 am, Duncan Booth 
> wrote:
>> Darren Dale  wrote:
> def get_data(oid):
>     with reglock:
>         data = registry.get(oid, None)
>         if data is None:
>             data = make_data(oid)
>             registry[oid] = data
>     return data
> 
> Does that look better? I am actually working on the h5py project
> (bindings to hdf5), and the oid is an hdf5 object identifier.
> make_data(oid) creates a proxy object that stores a strong reference
> to oid.

Yes, that looks better.

> 
> Now that I am using this _Registry class instead of
> WeakValueDictionary, my test scripts and my actual program are no
> longer producing segfaults.
> 
I think that, so far as multi-thread race conditions are concerned, Python 
usually tries to guarantee that you won't get seg faults. So if you were 
getting seg faults my guess would be that either you've found a bug in the 
WeakValueDictionary implementation or you've got a bug in some of your code 
outside Python.

For example if your proxy object has a __del__ method to clean up the 
object it is proxying then you could be creating a new object with the same 
oid as one that is in the process of being destroyed (the object disappears 
from the WeakValueDictionary before the __del__ method is actually called). 

Without knowing anything about HDF5 I don't know if that's a problem but I 
could imagine you could end up creating a new proxy object that references 
something in the HDF5 library which you then destroy as part of cleaning up 
a previous incarnation of the object but continue to access through the new 
proxy.
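
(A rough, self-contained sketch of that failure mode, not Duncan's code; the
"resources" table below is a stand-in for an external library's handle table,
not the real HDF5 API.)

import weakref

resources = {}            # oid -> refcount, standing in for the C library
registry = weakref.WeakValueDictionary()

class Proxy(object):
    def __init__(self, oid):
        self.oid = oid
        resources[oid] = resources.get(oid, 0) + 1

    def __del__(self):
        # By the time this runs, the registry entry is already gone, so a
        # brand-new Proxy for the same oid may already exist and will see
        # the underlying resource released out from under it.
        resources[self.oid] -= 1

def get_proxy(oid):
    proxy = registry.get(oid)
    if proxy is None:
        proxy = Proxy(oid)
        registry[oid] = proxy
    return proxy

p = get_proxy(42)
print(resources)          # {42: 1}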

-- 
Duncan Booth http://kupuguy.blogspot.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: WeakValueDict and threadsafety

2011-12-10 Thread Darren Dale
On Dec 10, 2:09 pm, Duncan Booth  wrote:
> Darren Dale  wrote:
> > On Dec 10, 11:19 am, Duncan Booth 
> > wrote:
> >> Darren Dale  wrote:
> > def get_data(oid):
> >     with reglock:
> >         data = registry.get(oid, None)
> >         if data is None:
> >             data = make_data(oid)
> >             registry[oid] = data
> >     return data
>
> > Does that look better? I am actually working on the h5py project
> > (bindings to hdf5), and the oid is an hdf5 object identifier.
> > make_data(oid) creates a proxy object that stores a strong reference
> > to oid.
>
> Yes, that looks better.
>
>
>
> > Now that I am using this _Registry class instead of
> > WeakValueDictionary, my test scripts and my actual program are no
> > longer producing segfaults.
>
> I think that so far as multi-thread race conditions are concerned Python
> usually tries to guarantee that you won't get seg faults. So if you were
> getting seg faults my guess would be that either you've found a bug in the
> WeakValueDictionary implementation or you've got a bug in some of your code
> outside Python.

Have you seen Alex Martelli's answer at
http://stackoverflow.com/questions/3358770/python-dictionary-is-thread-safe
? The way I read that, it seems pretty clear that deleting items from
a dict can lead to crashes in threaded code. (Well, he says as long as
> you aren't performing an assignment or a deletion in threaded code,
there may be issues, but at least it shouldn't crash.)

> For example if your proxy object has a __del__ method to clean up the
> object it is proxying then you could be creating a new object with the same
> oid as one that is in the process of being destroyed (the object disappears
> from the WeakValueDictionary before the __del__ method is actually called).
>
> Without knowing anything about HDF5 I don't know if that's a problem but I
> could imagine you could end up creating a new proxy object that references
> something in the HDF5 library which you then destroy as part of cleaning up
> a previous incarnation of the object but continue to access through the new
> proxy.

We started having problems when HDF5 began recycling oids as soon as
their reference count went to zero, which was why we began using
IDProxy and the registry. The IDProxy implementation below does have a
__dealloc__ method, which we use to decrease HDF5's internal
reference count for the oid. Adding these proxies and the registry dealt
with the issue of creating a new proxy that references an old oid
(even in non-threaded code), but it created a rare (though common
enough) segfault in multithreaded code. This synchronized registry is
the best I have been able to do, and it seems to address the problem.
Could you suggest another approach?

cdef IDProxy getproxy(hid_t oid):
    # Retrieve an IDProxy object appropriate for the given object identifier
    cdef IDProxy proxy
    proxy = registry.get(oid, None)
    if proxy is None:
        proxy = IDProxy(oid)
        registry[oid] = proxy

    return proxy


cdef class IDProxy:

    property valid:
        def __get__(self):
            return H5Iget_type(self.id) > 0

    def __cinit__(self, id):
        self.id = id
        self.locked = 0

    def __dealloc__(self):
        if self.id > 0 and (not self.locked) and H5Iget_type(self.id) > 0 \
                and H5Iget_type(self.id) != H5I_FILE:
            H5Idec_ref(self.id)
-- 
http://mail.python.org/mailman/listinfo/python-list


Overriding a global

2011-12-10 Thread Roy Smith
I've got a code pattern I use a lot.  In each module, I create a logger 
for the entire module and log to it all over:

import logging

logger = logging.getLogger('my.module.name')

class Foo:
    def function(self):
        logger.debug('stuff')
        logger.debug('other stuff')

and so on.  This works, but every once in a while I decide that a 
particular function needs a more specific logger, so I can adjust the 
logging level for that function independent of the rest of the module.  
What I really want to do is:

    def function(self):
        logger = logger.getChild('function')
        logger.debug('stuff')
        logger.debug('other stuff')

which lets me not have to change any lines of code other than inserting 
the one to redefine logger.  Unfortunately, that's not legal Python (it 
leads to "UnboundLocalError: local variable 'logger' referenced before 
assignment").

Any ideas on the best way to implement this?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Overriding a global

2011-12-10 Thread MRAB

On 10/12/2011 20:47, Roy Smith wrote:

I've got a code pattern I use a lot.  In each module, I create a logger
for the entire module and log to it all over:

logger = logging.getLogger('my.module.name')

class Foo:
    def function(self):
        logger.debug('stuff')
        logger.debug('other stuff')

and so on.  This works, but every once in a while I decide that a
particular function needs a more specific logger, so I can adjust the
logging level for that function independent of the rest of the module.
What I really want to do is:

    def function(self):
        logger = logger.getChild('function')
        logger.debug('stuff')
        logger.debug('other stuff')

which lets me not have to change any lines of code other than inserting
the one to redefine logger.  Unfortunately, that's not legal Python (it
leads to "UnboundLocalError: local variable 'logger' referenced before
assignment").

Any ideas on the best way to implement this?


You could use a different name:

    def function(self):
        logger2 = logger.getChild('function')
        logger2.debug('stuff')
        logger2.debug('other stuff')

or use 'globals':

    def function(self):
        logger = globals()['logger'].getChild('function')
        logger.debug('stuff')
        logger.debug('other stuff')
--
http://mail.python.org/mailman/listinfo/python-list


Re: Overriding a global

2011-12-10 Thread Roy Smith
MRAB  wrote:

> or use 'globals':
> 
>  def function(self):
>  logger = globals()['logger'].getChild('function')
>  logger.debug('stuff')
>  logger.debug('other stuff')

Ah-ha!  That's precisely what I was looking for.  Much appreciated.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Misleading error message of the day

2011-12-10 Thread Lie Ryan

On 12/09/2011 03:57 PM, alex23 wrote:

On Dec 9, 11:46 am, Lie Ryan  wrote:

perhaps the one that talks about `a, a.foo = 1, 2` blowing up?


Are you sure you're not confusing this with the recent thread on 'x =
x.thing = 1'?


Ah, yes I do

--
http://mail.python.org/mailman/listinfo/python-list


Re: Multiprocessing bug, is information ever omitted from a traceback?

2011-12-10 Thread Terry Reedy

On 12/10/2011 2:02 PM, Chris Angelico wrote:


There's a few differences between 2.6 and 2.7; not usually enough to
be concerned about in daily use, but when dealing with weird issues,
it helps to have the latest release.


There are 2 issues. First, 2.7.2 has perhaps a couple hundred bug fixes 
since 2.7.0/2.6.6 were released. So it is possible that this particular 
problem was fixed, or at least changed. Second (leaving security issues 
aside), a Py2 bug fix will only be applied to the post-2.7.2 tip and 
only after the bug has been demonstrated to exist in the post-2.7.2 tip.


About the request for minimal code exhibiting the problem: each bug 
exposes a hole in the test suite. We try to add a new test case with 
each bug fix so fixed bugs stay fixed. So code examples not only show 
that the bug is in the latest version, but also serve as the basis for 
new tests.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: order independent hash?

2011-12-10 Thread Lie Ryan

On 12/09/2011 10:27 PM, Steven D'Aprano wrote:

On Thu, 08 Dec 2011 10:30:01 +0100, Hrvoje Niksic wrote:


In a language like Python, the difference between O(1) and O(log n) is
not the primary reason why programmers use dict; they use it because
it's built-in, efficient compared to alternatives, and convenient to
use.  If Python dict had been originally implemented as a tree, I'm sure
it would be just as popular.


Except for people who needed dicts with tens of millions of items.


who should be using a proper DBMS in any case.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Overriding a global

2011-12-10 Thread Terry Reedy

On 12/10/2011 3:47 PM, Roy Smith wrote:


What I really want to do is:

def function(self):


Add a global statement to rebind a global name:
   global logger


   logger = logger.getChild('function')
   logger.debug('stuff')
   logger.debug('other stuff')

which lets me not have to change any lines of code other than inserting
the one to redefine logger.  Unfortunately, that's not legal Python (it
leads to "UnboundLocalError: local variable 'logger' referenced before
assignment").

Any ideas on the best way to implement this?



--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: order independent hash?

2011-12-10 Thread Chris Angelico
On Sun, Dec 11, 2011 at 10:58 AM, Lie Ryan  wrote:
> On 12/09/2011 10:27 PM, Steven D'Aprano wrote:
>> Except for people who needed dicts with tens of millions of items.
>
> who should be using a proper DBMS in any case.

Not necessarily. "Database" usually implies disk-based and relational,
features that may well be quite superfluous; and a pure-memory
database with no relational facilities... is basically a dict. So why
not use one?
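
(As an aside, not from the original post: the standard library's sqlite3
module gives you a pure-memory database with ':memory:', shown here as a
key/value store next to the dict that does the same job.)

import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)')
db.execute('INSERT INTO kv VALUES (?, ?)', ('spam', 'eggs'))
print(db.execute('SELECT value FROM kv WHERE key = ?', ('spam',)).fetchone()[0])

# The dict equivalent of the above:
kv = {'spam': 'eggs'}
print(kv['spam'])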

ChrisA
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Overriding a global

2011-12-10 Thread Terry Reedy

On 12/10/2011 7:14 PM, Terry Reedy wrote:

On 12/10/2011 3:47 PM, Roy Smith wrote:


What I really want to do is:

def function(self):


Add a global statement to rebind a global name:
global logger


But I see that that is not what you want to do, which is to override the 
global name just within the function while still accessing the global 
name. MRAB's solution does that nicely.



logger = logger.getChild('function')
logger.debug('stuff')
logger.debug('other stuff')

which lets me not have to change any lines of code other than inserting
the one to redefine logger. Unfortunately, that's not legal Python (it
leads to "UnboundLocalError: local variable 'logger' referenced before
assignment").

Any ideas on the best way to implement this?






--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: order independent hash?

2011-12-10 Thread Lie Ryan

On 12/11/2011 11:17 AM, Chris Angelico wrote:

On Sun, Dec 11, 2011 at 10:58 AM, Lie Ryan  wrote:

On 12/09/2011 10:27 PM, Steven D'Aprano wrote:

Except for people who needed dicts with tens of millions of items.


who should be using a proper DBMS in any case.


Not necessarily. "Database" usually implies disk-based and relational,
features that may well be quite superfluous; and a pure-memory
database with no relational facilities... is basically a dict. So why
not use one?


It is very unlikely that you'd have millions of items in a dict without 
planning to do any data processing on them. In any case, there are very 
few use cases requiring a dict with millions of items that wouldn't be 
better served by a proper database.


--
http://mail.python.org/mailman/listinfo/python-list


Re: Dynamic variable creation from string

2011-12-10 Thread Nobody
On Fri, 09 Dec 2011 01:55:28 -0800, Massi wrote:

> Thank you all for your replies, first of all my Sum function was an
> example simplifying what I have to do in my real funciton. In general
> the D dictionary is complex, with a lot of keys, so I was searching
> for a quick method to access all the variables in it without doing the
> explicit creation:
> 
> a, b, c = D['a'], D['b'], D['c']
> 
> and without using directly the D dictionary (boring...).

If you're just trying to avoid getting a repetitive strain injury in your
right-hand little finger from typing all the [''], you could turn
the keys into object attributes, e.g.:

class DictObject:
    def __init__(self, d):
        for key, value in d.iteritems():
            setattr(self, key, value)
    ...

o = DictObject(D)
# use o.a, o.b, etc


-- 
http://mail.python.org/mailman/listinfo/python-list