Re: gedit 'External Tools' plugin hashlib weirdness

2010-10-01 Thread Joel Hedlund
I apparently replied too soon.

Removing /usr/lib/python2.4 from PYTHONPATH did not solve the problem.
I think I may have had a launcher-started gedit running somewhere in
the background while testing. Any subsequent terminal-launches would
then just create new windows for the existing (non-bugged) process,
rather than starting a new process and tripping the bug (and by bug I
mean configuration error on my part, most likely).

However, I went further along the sys.path diff and found a path to a
second python2.6 installation. Apparently, this one shipped with a
standalone middleware client for distributed computing, and was
insinuated into my PYTHONPATH via a call to its startup script in
my .bashrc. Removing the call to the startup script solved the problem
again (!)

I still can't explain the traceback though.

But fwiw, this solution made the problem stay solved past a reboot, so
I have high hopes this time.

/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


gedit 'External Tools' plugin hashlib weirdness

2010-09-30 Thread Joel Hedlund
I'm having a weird problem with the 'External Tools' plugin for gedit,
that seems to get weirder the more I dig into it. When I start gedit
by clicking a launcher (from the Ubuntu menu, panel or desktop)
everything is dandy and the 'External Tools' plugin works as expected.
When gedit is launched from the terminal, the 'External Tools' plugin
is greyed out in the plugin list and I get this traceback on stderr:

$ gedit
Traceback (most recent call last):
  File "/usr/lib/gedit-2/plugins/externaltools/__init__.py", line 24, in <module>
    from manager import Manager
  File "/usr/lib/gedit-2/plugins/externaltools/manager.py", line 27, in <module>
    import hashlib
  File "/usr/lib/python2.6/hashlib.py", line 136, in <module>
    md5 = __get_builtin_constructor('md5')
  File "/usr/lib/python2.6/hashlib.py", line 63, in __get_builtin_constructor
    import _md5
ImportError: No module named _md5

** (gedit:8714): WARNING **: Error loading plugin 'External Tools'

The same thing happens if I try to activate the plugin from a gedit
launched from the terminal (if it's already been deactivated from a
gedit launched from the menu).

My analysis is that gedit tries to import the externaltools package,
which imports hashlib, which tries to import _hashlib but fails and
falls back to _md5 which also fails, which apparently /should not
happen/, or so google tells me. One of _hashlib and _md5 should always
exist.

However, importing _hashlib in a python interpreter works just fine,
i.e:

$ python -c 'import _hashlib'

returns nothing. What also puzzles me is that I don't seem to have
_hashlib* anywhere on my system (am I supposed to?) and getting the
__file__ attribute off the module doesn't work, and help(_hashlib)
says FILE is (built-in).

>>> import _hashlib
>>> _hashlib.__file__
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute '__file__'
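
A quick sanity check (just a sketch, not proof of anything) is to ask whether
_hashlib was compiled into the interpreter itself, which would explain the
missing __file__:

import sys
# Built-in extension modules have no __file__; shared ones live on disk.
print '_hashlib' in sys.builtin_module_names
import _hashlib
print getattr(_hashlib, '__file__', '(built-in)')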

Google drops vague hints that there may be a virtualenv that one
might have to rebuild, but in that case I have no idea where to
begin.

I've tried reinstalling the ubuntu packages gedit, gedit-common and
gedit-plugins, but to no avail. And the machine runs a fully updated
ubuntu karmic koala (10.4) that has survived numerous dist-upgrades,
if that's of any use to anybody.

I'd appreciate any input on this, even if it's just new bushes to
whack for scaring the problem out into the light.

Cheers!
/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: gedit 'External Tools' plugin hashlib weirdness

2010-09-30 Thread Joel Hedlund
bah, I meant to say I'm running a fully updated ubuntu lucid lynx
(10.04).
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: gedit 'External Tools' plugin hashlib weirdness

2010-09-30 Thread Joel Hedlund
How do I catch output to stdout/stderr when launching from a launcher?

I added this to /usr/lib/gedit-2/plugins/externaltools/__init__.py:

import sys
f = open('/tmp/eraseme.txt', 'w')
print >> f, "The executable is %r." % sys.executable
f.close()

In both cases (launcher/terminal) the contents of eraseme.txt are:
The executable is '/usr/bin/python'.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: gedit 'External Tools' plugin hashlib weirdness

2010-09-30 Thread Joel Hedlund
FOUND IT!

I added the line

print >> f, '\n'.join(sorted(sys.path))

and diffed the files produced from terminal/launcher.

When using the launcher, changes to PYTHONPATH done in ~/.bashrc are
not picked up, and I apparently had an old reference to /usr/lib/
python2.4 sitting in there. Removed it, reloaded .bashrc, plugin now
works.

The question still remains why gnome disregards ~/.bashrc, but that's
a whole other topic. Thanks a bunch, you guys are ever so helpful.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: gedit 'External Tools' plugin hashlib weirdness

2010-09-30 Thread Joel Hedlund
I guess the moral of the story is don't always dist-upgrade.

Reformat once in a while to remove old forgotten garbage. Clear the
blood clots from your systems, so to say.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: gedit 'External Tools' plugin hashlib weirdness

2010-09-30 Thread Joel Hedlund
On Sep 30, 3:40 pm, Peter Otten __pete...@web.de wrote:
 I'm surprised that /usr/lib/python2.4 doesn't appear in the traceback.

That certainly would have been useful, wouldn't it?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: safe eval of moderately simple math expressions

2009-04-11 Thread Joel Hedlund

Aaron Brady wrote:

Would you be willing to examine a syntax tree to determine if there
are any class accesses? 


Sure? How do I do that? I've never done that type of thing before so I 
can't really say if it would work or not.
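
Something along these lines, maybe? A rough, untested sketch assuming Python
2.6's ast module and a whitelist of math names (the names below are just
examples):

import ast
import math

ALLOWED_NAMES = set(name for name in dir(math) if not name.startswith('_'))
ALLOWED_NAMES.update(['x', 'divmod', 'round'])

def is_expression_safe(source):
    # Refuse attribute access (blocks (42).__class__ tricks) and unknown names.
    tree = ast.parse(source, mode='eval')
    for node in ast.walk(tree):
        if isinstance(node, ast.Attribute):
            return False
        if isinstance(node, ast.Name) and node.id not in ALLOWED_NAMES:
            return False
    return True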


/Joel
--
http://mail.python.org/mailman/listinfo/python-list


Re: safe eval of moderately simple math expressions

2009-04-11 Thread Joel Hedlund

Matt Nordhoff wrote:

>>> '\x5f'
'_'
>>> getattr(42, '\x5f\x5fclass\x5f\x5f') # __class__
<type 'int'>

Is that enough to show you the error of your ways?


No, because

>>> print '_' in '\x5f\x5fclass\x5f\x5f'
True


:-D Cuz seriously, it's a bad idea.


Yes probably, but that's not why. :-)


(BTW: What if a user tries to do some ridiculously large calculation to
DoS the app? Is that a problem?)


Nope. If the user wants to hang her own app that's fine with me.

/Joel
--
http://mail.python.org/mailman/listinfo/python-list


Re: safe eval of moderately simple math expressions

2009-04-11 Thread Joel Hedlund

Peter Otten wrote:

But what you're planning to do seems more like


>>> def is_it_safe(source):
...     return '_' not in source
...
>>> source = "getattr(42, '\\x5f\\x5fclass\\x5f\\x5f')"
>>> if is_it_safe(source):
...     print eval(source)
...
<type 'int'>


Bah. You are completely right of course.

Just as a thought experiment, would this do the trick?

def is_it_safe(source):
    return '_' not in source and '\\' not in source

I'm not asking because I'm hellbent on having eval in my app, but 
because it's always useful to see what hazards you don't know about.


/Joel
--
http://mail.python.org/mailman/listinfo/python-list


Re: safe eval of moderately simple math expressions

2009-04-11 Thread Joel Hedlund

Peter Otten wrote:

def is_it_safe(source):
    return '_' not in source and '\\' not in source

>>> ''.join(map(chr, [95, 95, 110, 111, 95, 95]))
'__no__'


But you don't have access to either map or chr, do you?

/Joel
--
http://mail.python.org/mailman/listinfo/python-list


Re: safe eval of moderately simple math expressions

2009-04-11 Thread Joel Hedlund

Peter Otten wrote:

Joel Hedlund wrote:


Peter Otten wrote:

def is_it_safe(source):
    return '_' not in source and '\\' not in source

>>> ''.join(map(chr, [95, 95, 110, 111, 95, 95]))
'__no__'

But you don't have access to either map or chr, do you?

/Joel



>>> '5f5f7374696c6c5f6e6f745f736166655f5f'.decode('hex')
'__still_not_safe__'


Now *that's* a thing of beauty. A horrible, horrible kind of beauty.

Thanks for blowing holes in my inflated sense of security!
/Joel
--
http://mail.python.org/mailman/listinfo/python-list


safe eval of moderately simple math expressions

2009-04-09 Thread Joel Hedlund

Hi all!

I'm writing a program that presents a lot of numbers to the user, and I 
want to let the user apply moderately simple arithmetic to these 
numbers. One possibility that comes to mind is to use the eval function, 
but since that sends up all kinds of warning flags in my head, I thought 
I'd put my idea out here first so you guys can tell me if I'm insane. :-)


This is the gist of it:
--
import math

globals = dict((s, getattr(math, s)) for s in dir(math) if '_' not in s)
globals.update(__builtins__=None, divmod=divmod, round=round)

def calc(expr, x):
    if '_' in expr:
        raise ValueError("expr must not contain '_' characters")
    try:
        return eval(expr, globals, dict(x=x))
    except:
        raise ValueError("bad expr or x")

print calc('cos(x*pi)', 1.33)
--

This lets the user do stuff like exp(-0.01*x) or round(100*x) but 
prevents malevolent stuff like __import__('os').system('del *.*') or
(t for t in (42).__class__.__base__.__subclasses__() if t.__name__ == 
'file').next() from messing things up.


I assume there's lots of nasty and absolutely lethal stuff that I've 
missed, and I kindly request you show me the error of my ways.


Thank you for your time!
/Joel Hedlund
--
http://mail.python.org/mailman/listinfo/python-list


Re: weird dict problem, how can this even happen?

2008-12-19 Thread Joel Hedlund

Joel Hedlund wrote:
First off, please note that I consider my problem to be solved, many 
thanks to c.l.p and especially Duncan Booth. But of course continued 
discussion on this topic can be both enlightening and entertaining as 
long as people are interested. So here goes:


heh, nothing like a wall of text to kill off interest I guess. :-)

But thank you all for your time and helpful advice! And happy holidays!

/Joel
--
http://mail.python.org/mailman/listinfo/python-list


Re: weird dict problem, how can this even happen?

2008-12-17 Thread Joel Hedlund

Steven D'Aprano wrote:

On Tue, 16 Dec 2008 14:32:39 +0100, Joel Hedlund wrote:

Duncan Booth wrote:

Alternatively give up on defining hash and __eq__ for FragmentInfo and
rely on object identity instead.

Object identity wouldn't work so well for caching. Objects would always
be drawn as they appeared for the first time. No updates would be shown
until the objects were flushed from the cache.


Perhaps I don't understand your program structure, but I don't see how 
that follows.


First off, please note that I consider my problem to be solved, many 
thanks to c.l.p and especially Duncan Booth. But of course continued 
discussion on this topic can be both enlightening and entertaining as 
long as people are interested. So here goes:


I'm making a scientific program that visualizes data. It can be thought 
of as a freely zoomable map viewer with a configurable stack of data 
feature renderers, themselves also configurable to enhance different 
aspects of the individual features. As you zoom in, it becomes possible 
to render more types of features in a visually recognizable manner, so 
consequently more feature renderers become enabled. The determining 
properties for the final image are then the coordinates and zoom level 
of the view, and the current renderer stack configuration. Rendering may 
be time consuming, so I cache the resulting bitmap fragments using a 
key object called FragmentInfo that holds this information.


Some renderers may take a very long time to do their work, so in order 
to keep the gui nice and responsive, I interrupt the rendering chain at 
that point, put the remainder of the rendering chain in a job pool, and 
postpone finishing up the rendering until there are cpu cycles to spare. 
At that time, I go through the job pool and ask the cache which fragment 
was most recently accessed. This fragment is the most likely to actually 
be in the current view, and thus the most interesting for the user to 
have finallized.


Now, when the user navigates the view to any given point, the gui asks 
the cache if the bitmap fragments necessary to tile up the current view 
have already been rendered, and if so, retrieves them straight from the 
cache and paints them to the screen. And that's why object identity 
wouldn't work. If the user changes something in the config for the 
current renderer stack (=mutates the objects), the renderers still 
retain the same object identities and therefore the old versions would 
be retrieved from the cache, and no updates would be shown until the 
fragments are flushed from the cache, and the fragment subsequently 
redrawn.


I guess you could delve deep into the data members and pull out all 
their object identities and hash wrt that if you'd really want to, but I 
don't really see the point.


The stupid cache implementation that I originally employed used a dict 
to store the FragmentInfo:BitmapFragment items plus a use tracker (list) 
on the side. This obviously doesn't work, because as soon as the 
renderer stack mutates, list and dict go out of sync and I can no longer 
selectively flush old items from the dict, because it's not readily 
apparent how to reconstruct the old keys. Purely using lists here is 
vastly superior because I can just .pop() the least recently used items 
from the tail and be done with them. Also for lists as small as this, 
the cost in performance is at most negligible.



I've been experimenting with a list cache now and I can't say I'm
noticing any change in performance for a cache of 100 items. 


100 items isn't very big though. If you have 50,000 items you may notice 
significant slow down :)


If having many items in the cache is possible, you should consider using 
a binary search instead of a linear search through the cache. See the 
bisect module.


Thanks for the tip, but I don't foresee this cache ever needing to be 
that big. ~100 is quite enough for keeping the gui reasonably responsive 
in this case.


/Joel
--
http://mail.python.org/mailman/listinfo/python-list


Re: weird dict problem, how can this even happen?

2008-12-16 Thread Joel Hedlund

Duncan Booth wrote:
It could happen quite easily if the hash value of the object has changed 
since it was put in the dictionary. what does the definition of your 
core.gui.FragmentInfo object look like?


Dunno if it'll help much, but:

class FragmentInfo(object):
    def __init__(self, renderer, render_area):
        self.renderer = renderer
        self.render_area = render_area

    def __hash__(self):
        return hash((FragmentInfo, self.renderer, self.render_area))

    def __eq__(self, other):
        return (isinstance(other, self.__class__) and
                other.renderer == self.renderer and
                other.render_area == self.render_area)


Is the hash definitely immutable?


No. It's a hash of a tuple of its key attributes, themselves similar 
objects.


The program can be thought of as a map viewer. In the gui part, image 
fragments are cached for speed, and fragments are only half rendered if 
there's a lot of complex features to draw. The pool consists of semi 
rendered fragments. The reason I did it this way is the cache. The cache 
is a dict with a use tracker so when the hash changes, the older 
fragments eventually drop from the cache. Hmm... I'm starting to realise 
now why my implementation of this isn't so hot. I'm going to hazard a 
guess here, and then you can correct me ok?


I create a dict and populate it with a key-value pair, and then the 
key's hash changes. When the key is returned by k = d.keys(), then k not 
in d, even though k in d.keys().


Simple example:

class moo(object):
    def __init__(self, a):
        self.a = a
    def __hash__(self):
        return hash(self.a)

d = {moo(1): 1}

k = d.keys()[0]
k.a = 2

k = d.keys()[0]
print k in d, k in d.keys()

d.pop(k)

output:

False True
Traceback (most recent call last):
  File "/bioinfo/yohell/eclipse/Test/src/test.py", line 14, in <module>
    d.pop(k)
KeyError: <__main__.moo object at 0x7f1c64120590>


I'd say that's pretty similar to what I observed.

I guess the logical outcome is that the cache dict will fill up with old 
junk that I can't access and can't selectively flush since the hashes 
have changed, unless I actually somehow save copies of the keys, which 
can get pretty complex and probably won't do wonders for execution 
speed. Yeah, this was probably a bad solution.


I should probably do this with lists instead because I can't really 
think of a way of salvaging this. Am I right?


Thanks for your help!
/Joel
--
http://mail.python.org/mailman/listinfo/python-list


Re: weird dict problem, how can this even happen?

2008-12-16 Thread Joel Hedlund

Duncan Booth wrote:
I think you probably are correct. The only thing I can think that might 
help is if you can catch all the situations where changes to the dependent 
values might change the hash and wrap them up: before changing the hash pop 
the item out of the dict, then reinsert it after the change.


That would probably require a lot of uncomfortable signal handling, 
especially for a piece of functionality I'd like to be as unobtrusive as 
possible in the application.


Alternatively give up on defining hash and __eq__ for FragmentInfo and rely 
on object identity instead.


Object identity wouldn't work so well for caching. Objects would always 
be drawn as they appeared for the first time. No updates would be shown 
until the objects were flushed from the cache.


I've been experimenting with a list cache now and I can't say I'm 
noticing any change in performance for a cache of 100 items. I'm still 
using the hash to freeze a sort of object tag in order to detect 
changes, and I require both hash and object equality for cache hits, 
like so:



def index(self, key):
    h = hash(key)
    for i, item in enumerate(self.items):
        if item.hash == h and item.key == key:
            return i
    raise KeyError(key)


This seems to do what I want and does OK performance wise.
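
For completeness, a stripped-down sketch of how such a list-based cache fits
together around that index() method (hypothetical names, not the real thing):

class _Item(object):
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.hash = hash(key)   # frozen at insertion time to detect mutation

class ListCache(object):
    def __init__(self, max_items=100):
        self.items = []
        self.max_items = max_items

    def index(self, key):
        h = hash(key)
        for i, item in enumerate(self.items):
            if item.hash == h and item.key == key:
                return i
        raise KeyError(key)

    def __getitem__(self, key):
        i = self.index(key)
        item = self.items.pop(i)
        self.items.insert(0, item)          # move hit to the front
        return item.value

    def __setitem__(self, key, value):
        self.items.insert(0, _Item(key, value))
        del self.items[self.max_items:]     # drop the least recently used tail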

Thanks again!

/Joel
--
http://mail.python.org/mailman/listinfo/python-list


Re: weird dict problem, how can this even happen?

2008-12-16 Thread Joel Hedlund

Scott David Daniels wrote:

Perhaps your hash function could be something like:


I'm not sure I understand what you're suggesting.

/Joel
--
http://mail.python.org/mailman/listinfo/python-list


weird dict problem, how can this even happen?

2008-12-15 Thread Joel Hedlund
I'm having a very hard time explaining why this snippet *sometimes* 
raises KeyError:


snippet:

print type(self.pool)
for frag in self.pool.keys():
    if frag is fragment_info:
        print "the fragment_info *is* in the pool", hash(frag), \
            hash(fragment_info), hash(frag) == hash(fragment_info), \
            frag == fragment_info, frag in self.pool, frag in self.pool.keys()
try:
    renderer_index = self.pool.pop(fragment_info)
except KeyError:
    print "Glorious KeyError!"
    for frag in self.pool.keys():
        if frag is fragment_info:
            print "the fragment_info *is* in the pool", hash(frag), \
                hash(fragment_info), hash(frag) == hash(fragment_info), \
                frag == fragment_info, frag in self.pool, frag in self.pool.keys()
    raise



output:

<type 'dict'>
the fragment_info *is* in the pool 987212075 987212075 True True False True
Glorious KeyError!
the fragment_info *is* in the pool 987212075 987212075 True True False True
Traceback (most recent call last):
  File "/home/yohell/workspace/missy/core/gui.py", line 92, in process_job
    renderer_index = self.pool.pop(fragment_info)
KeyError: <core.gui.FragmentInfo object at 0x8fc906c>


This snippet is part of a much larger gtk program, and the problem only 
occurs from time to time, predominantly when the cpu is under heavy load and 
this method gets called a lot. If I didn't know better I'd say it's a 
bug in python's dict implementation, but I do know better, so I know 
it's far more likely that I've made a mistake somewhere. I'll be damned 
if I can figure out what and where though. I've reproduced this bug (?) 
with python-2.5.2 on Ubuntu 8.10 and python-2.5.1 on WinXP.


I would very much like an explanation to this that does not involve 
threads, because I haven't made any that I'm aware of. I can't even 
understand how this could happen. How do I even debug this?


Please help, I feel like I've taken crazy pills here!
/Joel Hedlund
--
http://mail.python.org/mailman/listinfo/python-list


Re: How to do_size_allocate properly in a gtk.Viewport subclass

2008-10-23 Thread Joel Hedlund

Joel Hedlund wrote:
And another relevant question: am I overcomplicating this? 


Yes. :-)

The proper way of doing this is to pack the widget in a container, and 
then add the container (with viewport) to a scrolledwindow.


For example, for a centered widget choose a 1x1 gtk.Table and attach the 
widget using xoptions = yoptions = gtk.EXPAND (and not gtk.FILL). For a 
widget glued to the upper left corner choose a gtk.Alignment().
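
In code, the packing looks something like this (a minimal sketch with made-up
sizes, PyGTK 2.x):

import gtk

window = gtk.Window()
window.set_default_size(300, 200)

child = gtk.DrawingArea()
child.set_size_request(800, 600)          # a widget larger than the window

table = gtk.Table(rows=1, columns=1)
table.attach(child, 0, 1, 0, 1,
             xoptions=gtk.EXPAND, yoptions=gtk.EXPAND)   # centered, no FILL

scrolled = gtk.ScrolledWindow()
scrolled.add_with_viewport(table)         # the Viewport is created implicitly

window.add(scrolled)
window.connect('destroy', gtk.main_quit)
window.show_all()
gtk.main()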


Thanks John Finlay at [EMAIL PROTECTED]

/Joel
--
http://mail.python.org/mailman/listinfo/python-list


How to do_size_allocate properly in a gtk.Viewport subclass

2008-10-22 Thread Joel Hedlund

Hi!

I've raised this issue on #pygtk and #gtk+ but with no luck. I haven't 
been able to solve this even with aid of google, the pygtk reference and 
the gtk C source, so pretty please help?


I'm making an application that you can think of as an image viewer. I 
want to display a widget in a gtk.Viewport. The widget can have any size 
from tiny to humungous. I don't want the viewport to ever give the 
widget a larger size allocation than requested, and I don't want the 
viewport to ever resize to accomodate a large widget. It should rather 
leave grey areas around the widget/show only a portion of the widget.


To do this I have subclassed gtk.Viewport (MyViewport) and overridden the 
do_size_allocate method:



def do_size_allocate(self, allocation):
    self.allocation = allocation
    child_req = self.child.get_child_requisition()
    child_alloc = gtk.gdk.Rectangle(0, 0, *child_req)
    self.child.size_allocate(child_alloc)
    self.props.hadjustment.update(allocation.width, child_alloc.width)
    self.props.vadjustment.update(allocation.height, child_alloc.height)
    if self.flags() & gtk.REALIZED:
        self.window.move_resize(*self.allocation)
        self.child.window()


When I add a very large widget (a gtk.DrawingArea) to MyViewport only 
the originally visible portion of the widget is redrawn when I resize 
the window using the mouse, and the grey area around the widget gets 
littered with grey lines that are not redrawn if you minimize and 
restore the window. I assume this comes from that the proper gdk windows 
haven't been updated, and that the grey lines are remnants of old 
Viewport borders.


In gtk_viewport_size_allocate in gtkviewport.c, gdk_window_move_resize 
is called on three gdk windows: viewport->window, viewport->view_window 
and viewport->bin_window, but in pygtk I only have gtk.Viewport.window. 
I assume that this is the problem? If so, how can I fix this? Or is 
there something else that I have overlooked?


And another relevant question: am I overcomplicating this? Is there some 
kind of flag that I could set on a vanilla viewport to accomplish this?


Thanks in advance,
Joel
--
http://mail.python.org/mailman/listinfo/python-list


Re: How to do_size_allocate properly in a gtk.Viewport subclass

2008-10-22 Thread Joel Hedlund

Hrvoje Niksic wrote:

Note that there's a mailing list dedicated to PyGTK,
[EMAIL PROTECTED], so you might also want to ask your question there.


Thanks. I'll try that and hope people won't take offense from 
cross-posting. I'll be watching this thread for answers too though. In 
my experience, c.l.p usually delivers.


Cheers!
/Joel
--
http://mail.python.org/mailman/listinfo/python-list


gtk.gdk.Pixbuf.scale() unexpected behavior when offset != 0

2008-06-17 Thread Joel Hedlund

Hi!

I'm developing a pygtk application where I need to show images zoomed in 
so that the user can see individual pixels. gtk.gdk.Pixbuf.scale() 
seemed ideal for this, but if I set offset_x and offset_y to anything 
other than 0, the resulting image is heavily distorted and the offset is 
wrong. I've searched the internet for any snippet of code that uses this 
function with nonzero offset, but even after several hours of searching 
I've still come up blank. I think this may be a bug, but since it seems 
so fundamental I think it's way more likely that I've misunderstood 
something, so I thought I'd pass this by c.l.p first. Any help is 
greatly appreciated.


I wrote a test program to show off this behavior. Please find attached 
an image of David Hasselhoff with some puppies to help facilitate this 
demonstration. (feel free to use any image, but who doesn't like 
Hasselhoff and puppies?)


show_hasselhoff.py
#---
import gtk

original = gtk.gdk.pixbuf_new_from_file('hasselhoff.jpeg')
w = original.get_width()
h = original.get_height()
interp = gtk.gdk.INTERP_NEAREST

nice = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB, False, 8, w, h)
ugly = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB, False, 8, w, h)
original.scale(nice, 0, 0, w, h, 0, 0, 2, 2, interp)
original.scale(ugly, 0, 0, w, h, w/2, h/2, 2, 2, interp)

outtake = original.subpixbuf(w/4, h/4, w/2, w/2)
expected = outtake.scale_simple(w, h, interp)

w = gtk.Window()
hbox = gtk.HBox()
hbox.add(gtk.image_new_from_pixbuf(original))
hbox.add(gtk.image_new_from_pixbuf(nice))
hbox.add(gtk.image_new_from_pixbuf(ugly))
hbox.add(gtk.image_new_from_pixbuf(expected))
w.add(hbox)
w.show_all()
w.connect('destroy', gtk.main_quit)
gtk.main()
#---

When you run this, you should see 4 images in a window. From left to 
right: original, nice, ugly and expected. nice, ugly and expected are 
scaled/cropped copies of original, but ugly and expected are offset to 
show less mullet and more face. expected is what I expected ugly to turn 
out like judging from the pygtk docs.


Things to note about ugly:
* The topleft pixel of original has been stretched to the area of 
offset_x * offset_y.
* The first offset_x - 1 top pixels of original have been scaled by a 
factor 2 horizontally and then stretched vertically to the height of 
offset_y.

* Vice versa for the first offset_y - 1 leftmost pixels of original.
* The remaining area of ugly is a scaled version of 
original(1,1,width/2-1,height/2-1).


Things to note about the methods:
* This behavior is constant for all interpolation methods.
* This behavior is identical in gtk.gdk.Pixbuf.compose().

This can't possibly be how this is supposed to work! Have I 
misunderstood something, or is this a bug?
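
Rereading the C docs for gdk_pixbuf_scale (the source is scaled, then
translated by offset_x/offset_y, and only then is the destination rectangle
rendered), maybe the offsets simply have to be negative to pan into the
picture? An untested guess, with w, h and interp as the original width,
height and interpolation from the script above:

guess = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB, False, 8, w, h)
# Translate the 2x-scaled source up and left so the face ends up in view.
original.scale(guess, 0, 0, w, h, -(w/2), -(h/2), 2, 2, interp)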


Cheers!
/Joel Hedlund
inline: hasselhoff.jpeg
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Test-driven development and code size

2007-09-26 Thread Joel Hedlund
 test-driven development merely means that you take that test case and
 *keep it* in your unit test. Then, once you're assured that you will
 find the bug again any time it reappears, go ahead and fix it.

My presumption has been that in order to do proper test-driven development I 
would have to make enormous test suites covering all bases for my small hacks 
before I could get down and dirty with coding (as for example in 
http://www.diveintopython.org/unit_testing). This of course isn't very 
appealing when you need something done now. But if I understand you 
correctly, if I would formalize what little testing I do, so that I can add to 
a growing test suite for each program as bugs are discovered and needs arise, 
would you consider that proper test-driven development? (or rather, is that how 
you do it?)
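
So, concretely, something like this for every bug I stumble over? (A toy
sketch; the module and function names are made up.)

import unittest
from mymodule import parse_record   # hypothetical code under test

class RegressionTests(unittest.TestCase):
    def test_empty_middle_field(self):
        # The exact input that once triggered the bug stays here forever.
        self.assertEqual(parse_record('a;;c'), ['a', '', 'c'])

if __name__ == '__main__':
    unittest.main()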

Thanks for taking the time!
/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


What is a good way of having several versions of a python module installed in parallel?

2007-09-25 Thread Joel Hedlund
Hi!

I write, use and reuse a lot of small python programs for various purposes in 
my work. These use a growing number of utility modules that I'm continuously 
developing and adding to as new functionality is needed. Sometimes I discover 
earlier design mistakes in these modules, and rather than keeping old garbage I 
often rewrite the parts that are unsatisfactory. This often breaks backwards 
compatibility, and since I don't feel like updating all the code that relies on 
the old (functional but flawed) modules, I'm left with a hack library that 
depends on halting versions of my utility modules. The way I do it now is that 
I update the programs as needed when I need them, but this approach makes me 
feel a bit queasy. It seems to me like I'm thinking about this in the wrong way.

Does anyone else recognize this situation in general? How do you handle it? 

I have a feeling it should be possible to have multiple versions of the modules 
installed simultaneously, and maybe do something like this: 

mymodule/
+ mymodule-1.1.3/
+ mymodule-1.1.0/
+ mymodule-0.9.5/
- __init__.py

and having some kind of magic in __init__.py that lets the programmer choose 
version after import:

import mymodule
mymodule.require_version("1.1.3")

Is this a good way of thinking about it? What would be an efficient way of 
implementing it?
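
To make the question concrete, here is a rough sketch of what that __init__.py
magic might look like (directory names adjusted to importable identifiers like
v1_1_3; everything here is hypothetical, and setuptools' pkg_resources.require()
does something similar for properly installed distributions):

# mymodule/__init__.py
import sys

_VERSIONS = {'1.1.3': 'v1_1_3', '1.1.0': 'v1_1_0', '0.9.5': 'v0_9_5'}

def require_version(version):
    """Load the requested sub-version and re-export its public names."""
    subpackage = _VERSIONS[version]
    module = __import__('mymodule.' + subpackage, fromlist=[subpackage])
    package = sys.modules['mymodule']
    for name in dir(module):
        if not name.startswith('_'):
            setattr(package, name, getattr(module, name))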

Cheers!
/Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is a good way of having several versions of a python module installed in parallel?

2007-09-25 Thread Joel Hedlund
First of all, thanks for all the input - it's appreciated.

 Otherwise, three words:
 
   test driven development

Do you also do this for all the little stuff, the small hacks you just 
whip together to get a particular task done? My impression is that doing 
proper unittests adds a lot of time to development, and I'm thinking 
that this may be a low return investment for the small programs.

I try to aim for reusability and generalizability also in my smaller 
hacks mainly as a safeguard. My reasoning here is that if I mess up 
somehow, sooner or later I'll notice, and then I have a chance of making 
realistic damage assessments. But even so, I must admit that I tend to 
do quite little testing for these small projects... Maybe I should be 
rethinking this?

Cheers!
/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Slightly OT: Why all the spam?

2007-05-23 Thread Joel Hedlund
 Thus you may want to consider reading c.l.p via nntp when at work.

I'm doing that using Thunderbird 1.5.0, and I still get the spam. 
Googling for a bit shows me that people have been having issues with 
Thunderbird not removing expired articles all the way since 2003.

Does anyone have a suggestion on how I can make thunderbird remove 
expired articles? Or should I perhaps use another news reader?

/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Slightly OT: Why all the spam?

2007-05-23 Thread Joel Hedlund
  Expired articles are removed on the server by the server.
  ...
  maybe Thunderbird is doing something weird (caching headers?).

I can see the spam headers and also read the actual articles, and there 
are lots of them for the last 5 days. Nothing much before that, though.

/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Slightly OT: Why all the spam?

2007-05-23 Thread Joel Hedlund
 Then they aren't expired.  If they were expired, you wouldn't
 see them.

Alright, so the solution is not to browse c.l.p articles newer than a 
week while the boss is behind your back then. :-)

Thanks for educating a usenet white belt though!

/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Slightly OT: Why all the spam?

2007-05-22 Thread Joel Hedlund
Does anyone know why we get so much spam to this group? It's starting to 
get embarrassing to read at work and that's just not how it should be.

Cheers!
/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Need startup suggestions for writing a MSA viewer GUI in python

2007-01-11 Thread Joel Hedlund
 This will probably be a major, but not humongous project. wxPython,
 pyGTk, and pyQt all have the architecture and basics you'll need, it
 will probably be about the same amount of work to create in all of
 them. Pick the one that best suites your licensing and platform needs.

Thanks for the suggestions. Now I'll start my reading up.

Cheers!
/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Need startup suggestions for writing a MSA viewer GUI in python

2007-01-11 Thread Joel Hedlund
 UI design requires a different skillset than programming. It can be a
 very frustrating and thankless task as well. It is incomparably easier
 to see the flaws in existing interfaces than correcting them (or even
 creating the said interface). Make sure to start with something simple,
 and learn that way.
 
 I would also recommend that you implement something novel, there are
 many existing MSA viewers and it won't be easy to improve on them. Do
 something that adds new value and people will be willing to expend
 effort to use it.

That's true for any software. And for any aspect of life, come to think 
about it.

/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Need startup suggestions for writing a MSA viewer GUI in python

2007-01-10 Thread Joel Hedlund
Hi!

I've been thinking about writing a good multiple sequence alignment 
(MSA) viewer in python. Sort of like ClustalX, only with better zoom and 
pan tools. I've been using python in my work for a couple of years, but 
this is my first shot at making a GUI so I'd very much appreciate some 
ideas from you people to get me going in the right direction. Despite my 
GUI n00b-ness I need to get it good and usable with an intuitive look 
and feel.

What do you think I should do? What packages should I use?

For you non-bioinformatic guys out there, an MSA is basically a big 
matrix (~1000 cols x ~100 rows) of letters, where each row represents a 
biological sequence (gene, protein, etc...). Each sequence has an ID 
that is usually shorter than 40 characters (typically, 8-12). Usually, 
msa visualizers color the letters and their backgrouds according to 
chemical properties.

I want the look and feel to be pretty much like a modern midi sequencer 
(like cubase, nuendo, reason etc...). This means the GUI should have 
least three panes; one to the left to hold the IDs, one in the bottom to 
hold graphs and plots (e.g: user configurable tracks), and the main one 
that holds the actual MSA and occupies most of the space. The left and 
bottom panes should be resizable and foldable.

I would like to be able to zoom and pan x and y axes independently to 
view different portions of the MSA, and the left and bottom panes should 
follow the main pane. I would also like to be able to use drag'n'drop on 
IDs for reordering sequences, and possibly also on the MSA itself to 
shift sequences left and right. Furthermore, I would like to be able to 
select sequences and positions (individually, in ranges or sparsely). I 
would like to have a context sensitive menu on the right mouse button, 
possibly with submenus. Finally, I'd like to be able to export printable 
figures (eps?) of regions and whole MSAs.

I'm thinking all three panes may have to be rendered using some sort of 
scalable graphics because of the coloring and since I'd like to be able 
to zoom freely. I'll also need to draw graphs and plots for the tracks. 
Is pygame good for this, or is there a better way of doing it?

I want my viewer to behave and look like any other program, so I'm 
thinking maybe I should use some standard GUI toolkit instead, say PyQT 
or PyGTK? Would they still allow me to render the MSA nicely?

Does this seem like a humongous project?

Thanks for taking the time!
/Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


import parser does not import parser.py in same dir on win

2006-11-11 Thread Joel Hedlund
Hi!

I have a possibly dumb question about imports. I've written two python 
modules:

parser.py

class Parser(object):
    """my parser"""


app.py

from parser import Parser
print "import successful"


Running app.py on linux gives:

import successful


However, running it on windows gives:

Traceback (most recent call last):
  File "test.py", line 1, in ?
    from parser import Parser
ImportError: cannot import name Parser


It turns out that on Windows, the builtin parser module is imported 
instead. Why? Why is there a difference? What other names are taken?

In both cases the script dir is first on sys.path, and I'm using the 
plain old terminal/cmd window.

Thanks for your time.

Cheers!
/Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: import parser does not import parser.py in same dir on win

2006-11-11 Thread Joel Hedlund
 the table of built-in modules are checked before searching the path.

I figured as much. But why is the behavior different on linux/win? Is 
this documented somewhere?
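
The difference is apparently just in which modules each build compiles into
the interpreter, which a quick check shows directly (a diagnostic sketch,
nothing more):

import sys
print 'parser' in sys.builtin_module_names   # built-ins always win over sys.path
import parser
print getattr(parser, '__file__', '(built-in)')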

/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Slightly OT: Is pyhelp.cgi documentation search broken?

2006-10-27 Thread Joel Hedlund
 It works now again.  

You are now officially my hero.

 Note that you can also download the module and use it locally.

Cool. I'll do that!

Thanks!
/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Slightly OT: Is pyhelp.cgi documentation search broken?

2006-10-26 Thread Joel Hedlund
Hi!

For a number of days I haven't been able to search the online python 
docs at:

http://starship.python.net/crew/theller/pyhelp.cgi

that is the search the docs with link at this page:

http://www.python.org/doc/

Instead of the search engine, I get Error 404. This is a real bother for 
me, since I rely heavily on it for my work.

Is it broken? If so, is anybody trying to get it back up again, and 
what's the time scale in that case? Is there an alternative available 
somewhere?

Cheers!
/Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python style: to check or not to check args and data members

2006-09-01 Thread Joel Hedlund
  Short answer: Use Traits. Don't invent your own mini-Traits.

Thanks for a quick and informative answer! I'll be sure to read up on the 
subject. (And also: thanks Bruno for your contributions!)

  Types are very frequently exactly the wrong thing you want to check for.

I see what you mean. Allowing several data types may generate unwanted side 
effects (integer division when expecting real division, for example).

I understand that Traits can do value checking which is superior to what I 
presented, and that they can help me move validation away from functional 
code, which is always desirable. But there is still the problem of setting 
an appropriate level of validation.

Should I validate data members only? This is quite easily done using Traits 
or some other technique and keeps validation bloat localized in the code. 
This is in line with the DRY principle and makes for smooth extensibility, 
but the tracebacks will be less useful.

Or should I go the whole way and validate at every turn (all data members, 
every arg in every method, ...)? This makes for very secure code and very 
useful tracebacks, but does not feel very DRY to me... Are the benefits 
worth the costs? Do I build myself a fortress of unmaintainability this way? 
Will people laugh at my modules?

Or taken to the other extreme: Should I simply duck-type everything, and 
only focus my validation efforts to external data (from users, external 
applications and other forces of evil). This solution makes for extremely 
clean code, but the thought of potential silent data corruption makes me 
more than a little queasy.

What level do you go for?

Thanks!
/Joel

Robert Kern wrote:
 Joel Hedlund wrote:
 Hi!

 The question of type checking/enforcing has bothered me for a while, and 
 since this newsgroup has a wealth of competence subscribed to it, I 
 figured this would be a great way of learning from the experts. I feel 
 there's a tradeoff between clear, easily readable and extensible code 
 on one side, and safe code providing early errors and useful tracebacks 
 on the other. I want both! How do you guys do it? What's the pythonic 
 way? Are there any docs that I should read? All pointers and opinions 
 are appreciated!
 
 Short answer: Use Traits. Don't invent your own mini-Traits.
 
 (Disclosure: I work for Enthought.)
 
http://code.enthought.com/traits/
 
 Unfortunately, I think the standalone tarball on that page, uh, doesn't stand 
 alone right now. We're cleaning up the interdependencies over the next two 
 weeks. Right now, your best bet is to get the whole enthought package:
 
http://code.enthought.com/ets/
 
 Talk to us on enthought-dev if you need any help.
 
https://mail.enthought.com/mailman/listinfo/enthought-dev
 
 
 Now back to Traits itself:
 
 Traits does quite a bit more than type-checking, and I think that is its 
 least-useful feature that it provides for Python users. Types are very 
 frequently exactly the wrong thing you want to check for. They allow inputs 
 that 
 you would like to be invalid and disallow inputs that would have worked just 
 fine if you had relied on duck-typing. In general terms, Traits does 
 value-checking; it's just that some of the traits definitions check values by 
 validating their types.
 
 You have to be careful with type-checking, because it can introduce fragility 
 without enhancing safety. But sometimes you are working with other code that 
 necessarily has type requirements (like extension code), and moving the 
 requirements forward a bit helps build usable interfaces.
 
 Your examples would look like this with Traits:
 
 
 from enthought.traits.api import HasTraits, Int, method
 
 class MyClass(HasTraits):
     """My example class."""
 
     int_member = Int(0, desc="I am an integer")
 
     method(None, Int)
     def process_data(self, data):
         """Do some data processing."""
         self.int_member += 1
 
 
 a = MyClass(int_member=9)
 a = MyClass(int_member='moo')
 
 Traceback (most recent call last):
   File "<stdin>", line 1, in ?
   File "/Users/kern/svn/enthought-lib/enthought/traits/trait_handlers.py",
 line 172, in error
     raise TraitError, ( object, name, self.info(), value )
 enthought.traits.trait_errors.TraitError: The 'int_member' trait of a MyClass
 instance must be a value of type 'int', but a value of 'moo' was specified.
 
 
 # and similar errors for
 # a.int_member = 'moo'
 # a.process_data('moo')
 
 
 The method() function predates 2.4 and has not yet been converted to a 
 decorator. We don't actually use it much.
 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python style: to check or not to check args and data members

2006-09-01 Thread Joel Hedlund
Bruno: Your email address seems to be wrong. I tried to reply to you 
directly in order to avoid thread bloat, but my mail bounced.

Thanks for the quick reply though. I've skimmed through some docs on your 
suggestions and I'll be sure to read up on them properly later. But as I 
said to Robert Kern in this thread, this does not really seem to resolve the
problem of setting an appropriate level of validation.

How do you do it? Please reply to the group if you can find the time.

Cheers!
/Joel Hedlund

Bruno Desthuilliers wrote:
 Joel Hedlund wrote:
 Hi!

 The question of type checking/enforcing has bothered me for a while, 
 (snip)
 I've also whipped up some examples in order to put the above questions 
 in context and for your amusement. :-)
 (snip)
 These are the attached modules:

 * nocheck_module.py:
   As the above example, but with docs. No type checking.

 * property_module.py
   Type checking of data members using properties.

 * methodcheck_module.py
   Type checking of args within methods.

 * decorator_module.py
   Type checking of args using method decorators.

 * maximum_security_module.py
   Decorator and property type checking.
 
 You forgot two other possible solutions (that can be mixed):
 - using custom descriptors
 - using FormEncode
 
 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python style: to check or not to check args and data members

2006-09-01 Thread Joel Hedlund
 And while we're at it : please avoid top-posting.

Yes, that was sloppy. Sorry.

/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python style: to check or not to check args and data members

2006-09-01 Thread Joel Hedlund
 I'm not sure that trying to fight against the language is a sound
 approach, whatever the language. 

That's the very reason I posted in the first place. I feel like I'm fighting 
the language, and since python at least to me seems to be so well thought 
out in all other aspects, the most obvious conclusion must be that I'm 
thinking about this the wrong way. And that's why I need your input!

  Or taken to the other extreme: Should I simply duck-type everything, and
  only focus my validation efforts to external data (from users, external
  applications and other forces of evil). 
 
 IMHO and according to my experience : 99% yes (there are few corner
 cases where it makes sens to ensure args correctness - which may or not
 imply type-checking). Packages like FormEncode are great for data
 conversion/validation. Once you have trusted data, the only possible
 problem is within your code.

That approach is quite in line with the blame yourself methodology, which 
seems to work in most other circumstances. Sort of like, developers who feed 
bad data into my code have only themselves to blame! I can dig that. :-)

Hmmm... So. I should build grimly paranoid parsers for external data, use 
duck-typed interfaces everywhere on the inside, and simply callously 
disregard developers who are disinclined to read documentation? I could do that.

  if you're really serious, unit tests is the way to go - they can check
  for much more than just types.

Yes, I'm very much serious indeed. But I haven't done any unit testing. I'll 
have to check into that. Thanks!

 My 2 cents.

Thankfully recieved and collecting interest as we speak.

Cheers!
/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python style: to check or not to check args and data members

2006-09-01 Thread Joel Hedlund
 I still wait for a
 proof that it leads to more robust programs - FWIW, MVHO is that it
 usually leads to more complex - hence potentially less robust - code.

MVHO? I assume you are not talking about Miami Valley Housing Opportunities 
here, but bloat probably leads to bugs, yes.

 Talking about interfaces, you may want to have a look at PyProtocols
 (PEAK) and Zope3 Interfaces.

Ooh. Neat.

 As long as you provide a usable documentation, misuse of your code is
 not your problem anymore (unless of course you're the one misusing it !-).

But hey, then I'm still just letting idiots suffer from their idiocy, and 
since that's part of our greater plan anyway I guess that's ok :-D

 Then you probably want to read the relevant chapter in DiveIntoPython.

You are completely correct. Thanks for the tip.

Thanks for your help! It's been real useful. Now I'll sleep better at night.

Cheers!
/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python style: to check or not to check args and data members

2006-09-01 Thread Joel Hedlund
 You might try doctests, they can be easier to write and fit into the
 unit test framework if needed.

While I firmly believe in keeping docs up to date, I don't think that 
doctests alone can solve the problem of maintaining data integrity in 
projects with more complex interfaces (which is what I really meant to 
talk about. Sorry if my simplified examples led you to believe 
otherwise). For simple, deterministic functions like math.pow I think 
it's great, but for something like BaseHTTPServer... probably not. The 
__doc__'s required would be truly fascinating to behold. And probably 
voluminous and mostly unreadable for humans. Or is there something that 
I've misunderstood?

/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python style: to check or not to check args and data members

2006-09-01 Thread Joel Hedlund
 Oh, I was just addressing your bit about not knowing unit tests.
 Doctests can be quicker to put together and have only a small learning
 curve.

OK, I see what you mean. And you're right. I'm struggling mightily right 
now with trying to come up with sane unit tests for a bunch of 
generalized parser classes that I'm about to implement, and which are 
supposed to play nice with each other... Gah! But I'll get there 
eventually... :-)

 On the larger scale, I too advocate extensive checking of 'tainted'
 data from 'external' sources, then assuming 'clean' data is as expected
 and doing no explicit further data checks, after all, you've got to
 trust your development team/yourself.

Right.

Thanks for helpful tips and insights, and for taking the time!

Cheers!
/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: sys.argv[0] doesn't always contain the full path of running script.

2006-08-31 Thread Joel Hedlund
  How can I find where exactly the current python script is running?
 
 Doesnt __file__ attribute of each module contain the full filepath of
 the module?
 

Yes indeed! But the path to the module will not be the same as the path to 
the script if you are currently in an imported module. Consider this:

my_script.py:
---
import my_module

---

my_module.py:
---
print __file__

---

Running python test.py now prints /path/to/my_module.py, not 
/path/to/my_script.py.

Cheers!
/Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: sys.argv[0] doesn't always contain the full path of running script.

2006-08-31 Thread Joel Hedlund
  Running python test.py now prints /path/to/my_module.py, not
  /path/to/my_script.py.

That should have been python my_script.py. Sorry for the slip-up.

Cheers!
/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Python style: to check or not to check args and data members

2006-08-31 Thread Joel Hedlund

Hi!

The question of type checking/enforcing has bothered me for a while, and 
since this newsgroup has a wealth of competence subscribed to it, I 
figured this would be a great way of learning from the experts. I feel 
there's a tradeoff between clear, easily readdable and extensible code 
on one side, and safe code providing early errors and useful tracebacks 
on the other. I want both! How do you guys do it? What's the pythonic 
way? Are there any docs that I should read? All pointers and opinions 
are appreciated!


I've also whipped up some examples in order to put the above questions 
in context and for your amusement. :-)


Briefly:

class MyClass(object):
    def __init__(self, int_member = 0):
        self.int_member = int_member
    def process_data(self, data):
        self.int_member += data

The attached files are elaborations on this theme, with increasing 
security and, alas, rigidity and bloat. Even though 
maximum_security_module.py probably will be the safest to use, the 
coding style will bloat the code something awful and will probably make 
maintenance harder (please prove me wrong!). Where should I draw the line?


These are the attached modules:

* nocheck_module.py:
  As the above example, but with docs. No type checking.

* property_module.py
  Type checking of data members using properties.

* methodcheck_module.py
  Type checking of args within methods.

* decorator_module.py
  Type checking of args using method decorators.

* maximum_security_module.py
  Decorator and property type checking.

Let's pretend I'm writing a script, I import one of the above modules 
and then execute the following code


...
my_object = MyClass(data1)
my_object.process_data(data2)

and then let's pretend dataX is of a bad type, say for example str.

nocheck_module.py
=
Now, if data2 is bad, we get a suboptimal traceback (possibly to 
somewhere deep within the code, and probably with an unrelated error 
message). However, the first point of failure will in fact be included 
in the traceback, so this error should be possible to find with little 
effort. On the other hand, if data1 is bad, the exception will be raised 
somewhere past the point of first failure. The traceback will be 
completely off, and the error message will still be bad. Even worse: if 
both are bad, we won't even get an exception. We will trundle on with 
corrupted data and take no notice. Very clear code, though. Easily 
extensible.


property_module.py
==
Here we catch that data1 failure. Tracebacks may still be inconcise with 
uninformative error messages, however they will not be as bad as in 
nocheck_module.py. Bloat. +7 or more lines of boilerplate code for each 
additional data member. Quite clear code. Readily extensible.


methodcheck_module.py
=
Good, concise tracebacks with exact error messages. Lots of bloat and 
obscured code. Misses errors where data members are changed directly. 
Very hard to read and extend.


decorator_module.py
===
Good, concise tracebacks with good error messages. Some bloat. Misses 
errors where data members are changed directly. Clear, but somewhat hard 
to extend. Decorators for *all* methods?! This cannot be the purpose of 
python!?
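
(In case the attachments don't survive the list, here is a bare-bones sketch
of the decorator idea; it is not the actual decorator_module.py.)

def typechecked(*types):
    def decorate(method):
        def wrapper(self, *args):
            # Check each positional argument against the declared type.
            for arg, expected in zip(args, types):
                if not isinstance(arg, expected):
                    raise TypeError('%s() expected %s, got %r' % (
                        method.__name__, expected.__name__, arg))
            return method(self, *args)
        wrapper.__name__ = method.__name__
        wrapper.__doc__ = method.__doc__
        return wrapper
    return decorate

class MyClass(object):
    """My example class."""
    def __init__(self, int_member=0):
        self.int_member = int_member

    @typechecked(int)
    def process_data(self, data):
        """Do some data processing."""
        self.int_member += data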


maximum_security_method.py
==
Good, concise tracebacks with good error messages. No errors missed (I 
think? :-) . Bloat. Lots of decorators and boilerplate property code all 
over the place (thankfully not within functional code, though). Is this 
how it's supposed to be done?



And if you've read all the way down here I thank you so very much for 
your patience and perseverance. Now I'd like to hear your thoughts on 
this! Where should the line be drawn? Should I just typecheck data from 
unreliable sources (users/other applications) and stick with the 
barebone strategy, or should I go all the way? Did I miss something 
obvious? Should I read some docs? (Which?) Are there performance issues 
to consider?


Thanks again for taking the time.

Cheers!
/Joel Hedlund
Example module with method argument type checking.

Pros:
Pinpointed tracebacks with very exact error messages.

Cons:
Lots of boilerplate typechecking code littered all over the place, 
obscuring functionality at the start of every function. 
Bloat will accumulate rapidly. +2 lines of boilerplate code per method and
argument.
If I at some point decide that floats are also ok, I'll need to crawl all 
over the code with a magnifying glass and a pair of tweezers. 
We don't catch errors of the type 
a = MyClass()
a.int_member = 'moo!'
a.process_data(1)



class MyClass(object):
    """My example class."""
    def __init__(self, int_member = 0):
        """Instantiate a new MyClass object.

        IN:
        int_member = 0: int
            Set the value for the data member. Must be int.

        """
        # Boilerplate typechecking code.
        if not isinstance(int_member, int):
            raise

Re: Has anyone used davlib by Greg Stein?

2006-07-31 Thread Joel Hedlund
 Has anyone worked with this? Is it any good? 

I'll take the palpable silence as a no then. :-)

Thanks anyway!
/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Has anyone used davlib by Greg Stein?

2006-07-18 Thread Joel Hedlund
Hi!

I want to PUT files to an authenticated https WebDAV server from within a python 
script. Therefore I put "python dav" into google, and the davlib module by Greg 
Stein (and Guido?) came up. It seems to be solid but has very little docs. Has 
anyone worked with this? Is it any good? Where can I find some good docs for 
it? 
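
For reference while I wait, this is the bare-bones way I know of doing the PUT 
with just the standard library (this is not davlib; host, path and credentials 
below are made-up placeholders):

---
import base64
import httplib

def dav_put(host, remote_path, local_path, username, password):
    f = open(local_path, 'rb')
    body = f.read()
    f.close()
    # HTTP basic auth header; the server may of course require something stronger.
    auth = base64.b64encode('%s:%s' % (username, password))
    conn = httplib.HTTPSConnection(host)
    conn.request('PUT', remote_path, body,
                 {'Authorization': 'Basic ' + auth,
                  'Content-Type': 'application/octet-stream'})
    response = conn.getresponse()
    # WebDAV servers typically answer 201 (Created) or 204 (No Content).
    return response.status

print dav_put('dav.example.com', '/files/report.txt', 'report.txt',
              'joel', 'secret')
---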

Thanks for your time! 

Cheers!
/Joel 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: logging module: add client_addr to all log records

2006-05-11 Thread Joel Hedlund
 See a very similar example which uses the new 'extra' keyword argument:

Now that's brilliant! Exactly what I need.

But unfortunately, it's also unavailable until 2.5 comes out. Until then I'm 
afraid I'm stuck with my shoddy hack... but it's always nice to know the time 
will come when I can finally throw it out!
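
For the archives, this is roughly what the 2.5 'extra' mechanism will let me 
write (the logger name and client_addr value are just placeholders):

---
import logging

logging.basicConfig(format="%(asctime)s %(client_addr)s %(message)s")
log = logging.getLogger("myserver")

# Every record rendered with %(client_addr)s must supply the key via 'extra':
log.warning("login failed", extra={"client_addr": "10.0.0.1"})
---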

Thanks for taking the time!

Cheers!
/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is this a good use of __metaclass__?

2006-05-09 Thread Joel Hedlund
Hi!

Thanks for taking the time to answer. I will definitely have a look at writing 
dispatchers.

  The problem you have with your metaclass version, is the infamous
  metaclass conflict.

I think I solved the problem of conflicting metaclasses in this case and I 
posted it as a reply to Bruno Desthuilliers in this thread. Do you also think 
that's a bad use of __metaclass__? It's not that I'm hellbent on using 
metaclasses - I'm just curious how people think they should be used.

Cheers!
/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Is this a good use of __metaclass__?

2006-05-05 Thread Joel Hedlund
Hi!

I need some input on my use of metaclasses since I'm not sure I'm using them in 
a pythonic and graceful manner. I'm very grateful for any tips, pointers and 
RTFMs I can get from you guys.

Below, you'll find some background info and an executable code example.

In the code example I have two ways of doing the same thing. The problem is 
that the "Neat" version doesn't work, and the "Ugly" version that works gives 
me the creeps.

The "Neat" version raises a TypeError when I try the multiple inheritance 
(marked with comment in the code):

Traceback (most recent call last):
   File "/bioinfo/yohell/pybox/gridjs/gridjs-2.0/test.py", line 132, in ?
     class FullAPI(JobAPI, UserAPI, AdminAPI):
   File "/bioinfo/yohell/pybox/gridjs/gridjs-2.0/test.py", line 43, in __new__
     return type.__new__(cls,classname,bases,classdict)
TypeError: Error when calling the metaclass bases
 metaclass conflict: the metaclass of a derived class must be a 
(non-strict) subclass of the metaclasses of all its bases

In the "Ugly" version, I'm changing the metaclass in the global scope between 
class definitions, and that gives me bad vibes.

What should I do? Is there a way to fix my "Neat" solution? Is my "Ugly" 
solution in fact not so horrid as I think it is? Or should I rethink the whole 
idea? Or maybe stick with decorating manually (or in BaseAPI.__init__)?

Sincere thanks for your time.

Cheers!
/Joel Hedlund




Background
##
(feel free to skip this if you are in a hurry)
I'm writing an XMLRPC server that serves three types of clients (jobs, users 
and admins). To do this I'm subclassing SimpleXMLRPCServer for all the 
connection work, and I was planning on putting the entire XMLRPC API as public 
methods of a class, and expose it to clients using .register_instance(). Each 
session is initiated by a handshake where a challenge is presented to the 
client, each method call must then be authenticated using certificates and 
incremental digest response. Each client type must be authenticated 
differently, and each type of client will also use a discrete set of methods.

At first, I was planning to use method decorators to do the validation, and 
have a different decorator for each type of client validation, like so:

class FullAPI:
 @valid_job_required
 def do_routine_stuff(self, clientid, response, ...):
 pass
 @valid_user_required
 def do_mundane_stuff(self, clientid, response, ...):
 pass
 @valid_admin_required
 def do_scary_stuff(self, clientid, response, ...):
 pass
 ...

There will be a lot of methods for each client type, so this class would become 
monstrous. Therefore I started wondering if it weren't a better idea to put the 
different client APIs in different classes and decorate them separately using 
metaclasses, and finally bring the APIs together using multiple inheritance. 
This is what I had in mind:

class BaseAPI(object):
    pass

class JobAPI(BaseAPI):
    pass

class UserAPI(BaseAPI):
    pass

class AdminAPI(BaseAPI):
    pass

class FullAPI(JobAPI, UserAPI, AdminAPI):
    pass

Now, I'm having trouble implementing the metaclass bit in a nice and pythonic 
way.
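
For completeness, the textbook way around that TypeError is to give the combined 
class a metaclass derived from all the bases' metaclasses, as in this 
self-contained toy (names made up). Whether that is the *nice* way to do it here 
is exactly what I'm unsure about:

---
class MetaA(type):
    pass

class MetaB(type):
    pass

class A(object):
    __metaclass__ = MetaA

class B(object):
    __metaclass__ = MetaB

# Without an explicit metaclass, "class AB(A, B)" dies with the
# "metaclass conflict" TypeError. A combined metaclass fixes it:
class MetaAB(MetaA, MetaB):
    pass

class AB(A, B):
    __metaclass__ = MetaAB

print type(AB)      # <class '__main__.MetaAB'>
---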



Code example


test.py
===
# Base metaclass for decorating public methods:
from decorator import decorator

@decorator
def no_change(func, *pargs, **kwargs):
 return func(*pargs, **kwargs)

class DecoratePublicMethods(type):
    """Equip all public methods with a given decorator.

    Class data members:
    decorator = no_change: decorator
        The decorator that you wish to apply to public methods of the class
        instances. The default does not change program behavior.
    do_not_decorate = []: iterable str
        Names of public methods that should not be decorated.
    multiple_decoration = False: bool
        If set to False, methods will not be decorated if they already
        have been decorated by a prior metaclass.
    decoration_tag = '__public_method_decorated__': str
        Decorated public methods will be equipped with an attribute
        with this name and a value of True.

    """

    decorator = no_change
    do_not_decorate = []
    multiple_decoration = True
    decoration_tag = '__public_method_decorated__'

    def __new__(cls,classname,bases,classdict):
        for attr,item in classdict.items():
            if not callable(item):
                continue
            if attr in cls.do_not_decorate or attr.startswith('_'):
                continue
            if (not cls.multiple_decoration
                    and hasattr(classdict[attr], cls.decoration_tag)):
                continue
            classdict[attr] = cls.decorator(item)
            setattr(classdict[attr], cls.decoration_tag, True)
        return type.__new__(cls,classname,bases,classdict)


## Authentication stuff:
class AuthenticationError(Exception):
 pass

import random

Re: Possibly dumb question about dicts and __hash__()

2006-05-04 Thread Joel Hedlund
Hi!

  Just the hash is not enough. You need to define equality, too:

Thanks a million for clearing that up.
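
For the archives, the working version then looks something like this (my own 
quick sketch):

---
class NamedThing(object):
    def __init__(self, name):
        self.name = name
    def __hash__(self):
        return hash(self.name)
    def __eq__(self, other):
        # Compare by name, and let a plain name string compare equal too.
        if isinstance(other, NamedThing):
            return self.name == other.name
        return self.name == other

d = {}
o = NamedThing('moo')
d[o] = o
print d['moo'] is o       # True: equal hash, and compares equal to 'moo'
---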

Cheers!
/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Possibly dumb question about dicts and __hash__()

2006-05-03 Thread Joel Hedlund
Hi!

There's one thing about dictionaries and __hash__() methods that puzzle me. I 
have a class with several data members, one of which is 'name' (a str). I would 
like to store several of these objects in a dict for quick access 
({name:object} style). Now, I was thinking that given a list of objects I might 
do something like

d = {}
for o in objects:
 d[o] = o

and still be able to retrieve the data like so:

d[name]

if I just defined a __hash__ method like so:

def __hash__(self):
 return self.name.__hash__()

but this fails miserably. Feel free to laugh if you feel like it. I cooked up a 
little example with sample output below if you care to take the time.

Code:
---
class NamedThing(object):
    def __init__(self, name):
        self.name = name
    def __hash__(self):
        return self.name.__hash__()
    def __repr__(self):
        return 'foo'

name = 'moo'
o = NamedThing(name)
print "This output puzzles me:"
d = {}
d[o] = o
d[name] = o
print d
print
print "If I wrap all keys in hash() calls I'm fine:"
d = {}
d[hash(o)] = o
d[hash(name)] = o
print d
print
print "But how come the first method didn't work?"
---

Output:
---
This output puzzles me:
{'moo': foo, foo: foo}

If I wrap all keys in hash() calls I'm fine:
{2038943316: foo}

But how come the first method didn't work?
---

I'd be grateful if anyone can shed a litte light on this, or point me to some 
docs I might have missed.

Also:
Am I in fact abusing the __hash__() method? If so - what's the intended use of 
the __hash__() method?

Is there a better way of implementing this?

I realise I could just write

d[o.name] = o

but this problem seems to pop up every now and then and I'm curious if there's 
some neat syntactic trick that I could legally apply here.

Thanks for your time!
/Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Possibly dumb question about dicts and __hash__()

2006-05-03 Thread Joel Hedlund
Hi!

Thanks for the quick response!

  Although this is a bit illegal, because repr is not supposed to be used
  this way.

How illegal is it? If I document it and put it in an opensource project, will 
people throw tomatoes?

/Joel

[EMAIL PROTECTED] wrote:
 Use __repr__.  Behold:
 
 
>>> class NamedThing(object):
...     def __init__(self, name):
...         self.name = name
...     def __repr__(self):
...         return self.name
... 
>>> a = NamedThing("Delaware")
>>> b = NamedThing("Hawaii")
>>> d = {}
>>> d[a] = 1
>>> d[b] = 50
>>> print d
{Delaware: 1, Hawaii: 50}
>>> d[a]
1
>>> d[b]
50
 
 Although this is a bit illegal, because repr is not supposed to be used
 this way.
 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Possibly dumb question about dicts and __hash__()

2006-05-03 Thread Joel Hedlund
Beautiful!

But how come my attempt didn't work? I've seen docs that explain how __hash__() 
methods are used to put objects in dict buckets:

http://docs.python.org/ref/customization.html#l2h-195

But if it's really hash(str(o)) that's used for dict keys, what good are 
__hash__() methods? Or am I reading the docs wrong?

/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Free Python IDE ?

2006-03-30 Thread Joel Hedlund
Ernesto wrote:
 I'm looking for a tool that I can use to step through python software
 (debugging environment).  Is there a good FREE one (or one which is
 excellent and moderately priced ) ?
 

Try searching this newsgroup for python ide, editor and such and you'll get 
plenty of good advice. This topic is discussed about once every week or so.

Cheers!
/joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Difference between 'is' and '=='

2006-03-29 Thread Joel Hedlund
 There's no requirement that the socket module or
 anything else return values using the same object that the
 socket.AF_UNIX constant uses.

Ouch. That's certainly an eyeopener.

For me, this means several things, and I'd really like to hear people's 
thoughts about them.

It basically boils down to don't ever use 'is' unless pushed into a corner, 
and nevermind what PEP8 says about it.

So here we go... *takes deep breath*

Identity checks can only be done safely to compare a variable to a defined 
builtin singleton such as None. Since this is only marginally faster than a 
value equality comparison, there is little practical reason for doing so. 
(Except for the sake of following PEP8, more of that below).
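
A tiny illustration of why anything but a true singleton is unsafe to compare 
with 'is' (CPython only caches small ints):

---
a = int('257')     # two separately constructed, equal ints
b = int('257')
print a == b       # True:  equal values
print a is b       # False: distinct objects, despite the equal values
print a is None    # False either way; None is the kind of singleton 'is' is for
---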

You cannot expect to ever have identity between a value returned by a 
function/method and a CONSTANT defined in the same package/module, if you do 
not have comlete control over that module. Therefore, such identity checks 
should always be given a value equality fallback. In most cases the identity 
check will not be significantly faster than a value equality check, so for the 
sake of readability it's generally a good idea to skip the identity check and 
just do a value equality check directly. (Personally, I don't think it's good 
style to define constants and not be strict about how you use them, but that's 
on a side note and not very relevant to this discussion)

It may be a good idea to use identity checks for variables vs CONSTANTs defined 
in the same module/package, if it's Your module/package and you have complete 
control over it. Felipe Almeida Lessa provided a good argument for this earlier 
in this thread:

 Here I knew 128 == MSG_EOR, but what if that was a
 coincidence of some other function I created? I would *never* catch that
 bug as the function that tests for MSG_EOR expects any integer. By
 testing with is you test for *that* integer, the one defined on your
 module and that shouldn't go out of it anyway.

However it may be a bad idea to do so, since it may lure you into a false sense 
of security, so you may start to unintentionally misuse 'is' in an unsafe 
manner.

So the only motivated use of 'is' would then be the one shown in my first 
example with the massive_computations() function: as a shortcut past costly 
value equality computations where the result is known, and with an added value 
equality fallback for safety. Preferably, the use of identity should then also 
be motivated in a nearby comment.

My conclusion is then that using 'is' is a bad habit and leads to less readable 
code. You should never use it, unless it leads to a *measurable* gain in 
performance, in which case it should also be given a value equality fallback and 
a comment. And lastly, PEP8 should be changed to reflect this.

Wow... that got a bit long and I applaud you for getting this far! :-) Thanks 
for taking the time to read it.

So what are your thoughts about this, then?

Cheers!
/Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Difference between 'is' and '=='

2006-03-29 Thread Joel Hedlund
sorry

  You compare a module.CONSTANT to the result of an expression

s/an expression/a binary operation/

/joel

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Difference between 'is' and '=='

2006-03-29 Thread Joel Hedlund
 For me, this means several things, and I'd really like to hear people's
 thoughts about them.

 you need to spend more time relaxing, and less time making up arbitrary
 rules for others to follow.

I'm very relaxed, thank you. I do not make up rules for others to follow. I ask 
for other people's opinions so that I can reevaluate my views.

I do respect your views, as I clearly can see you have been helpful and 
constructive in earlier discussions in this newsgroup. So therefore if you 
think my statements are nonsense, there's a good chance you're right. And 
that's why I posted. To hear what other people think.

Sorry if I came off stiff and belligerent because that certainly wasn't my 
intent.

  read the PEP and the documentation.

Always do.

  use is when you want object identity,
  and you're sure it's the right thing to do.  don't use it when you're not 
  sure.
  any other approach would be unpythonic.

Right.

Chill!
/Joel Hedlund




 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Difference between 'is' and '=='

2006-03-29 Thread Joel Hedlund
 [*] I discovered a neat feature I didn't know my editor had: grepping for 
 [c:python-keywordis 

Neat indeed. Which editor is that?

Thanks for a quick and comprehensive answer, btw.

Cheers!
/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Difference between 'is' and '=='

2006-03-28 Thread Joel Hedlund
 This does *not* also mean constants and such:
<snip>
>>> a = 123456789
>>> a == 123456789
True
>>> a is 123456789
False

I didn't mean that kind of constant. I meant named constants with defined 
meaning, as in the example that I cooked up in my post. More examples: os.R_OK, 
or more complex ones like mymodule.DEFAULT_CONNECTION_CLASS.

Sorry for causing unneccessary confusion.

Cheers!
/Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Difference between 'is' and '=='

2006-03-28 Thread Joel Hedlund
You should _never_ use 'is' to check for equivalence of value. Yes, due
to the implementation of CPython the behaviour you quote above does
occur, but it doesn't mean quite what you seem to think it does.
 
 
 /me not checking for value. I'm checking for identity. Suppose a is a
 constant. I want to check if b is the same constant.

/me too. That's what my example was all about. I was using identity to a known 
CONSTANT (in caps as per python naming conventions :-) to sidestep costly value 
equality computations.

 By doing an is instead of a == you *can* catch some errors.
 snip
 By
 testing with is you test for *that* integer, the one defined on your
 module and that shouldn't go out of it anyway.

I totally agree with you on this point. Anything that helps guarding against 
stealthed errors is a good thing by my standards.

Cheers!
/Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Difference between 'is' and '=='

2006-03-28 Thread Joel Hedlund
Not those kind of constants, but this one:
 
 
Python 2.4.2 (#2, Nov 20 2005, 17:04:48)
[GCC 4.0.3 2005 (prerelease) (Debian 4.0.2-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> CONST = 123456789
>>> a = CONST
>>> a == CONST
True
>>> a is CONST
True

 
 That's a little misleading, and goes back to the questions of what is
 assignment in Python? and What does it mean for an object to be
 mutable?
 
 The line a = CONST simply gives CONST a new name.  After that, a is
 CONST will be True no matter what CONST was.  Under some circumstances,
 I can even change CONST, and a is CONST will *still* be True.

Anyone who thinks it's a good idea to change a CONST that's not in a module 
that they have full control over must really know what they're doing or suffer 
the consequences. Most often, the consequences will be nasty bugs.

Cheers!
/Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Difference between 'is' and '=='

2006-03-28 Thread Joel Hedlund
 a is None
 
 is quicker than
 
 a == None

I think it's not such a good idea to focus on speed gains here, since they 
really are marginal (max 2 seconds total after 10 million comparisons):

>>> import timeit
>>> print timeit.Timer("a == None", "a = 1").timeit(int(1e7))
4.19580316544
>>> print timeit.Timer("a == None", "a = None").timeit(int(1e7))
3.20231699944
>>> print timeit.Timer("a is None", "a = 1").timeit(int(1e7))
2.37486410141
>>> print timeit.Timer("a is None", "a = None").timeit(int(1e7))
2.48372101784

Your observation is certainly correct, but I think it's better applied to more 
complex comparisons (say for example comparisons between gigantic objects or 
objects where value equality determination require a lot of nontrivial 
computations). That's where any real speed gains can be found. PEP8 tells me 
it's better style to write a is None and that's good enough for me. Otherwise 
I try to stay away from speed microoptimisations as much as possible since it 
generally results in less readable code, which in turn often results in an 
overall speed loss because code maintenance will be harder.

Cheers!
/Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Difference between 'is' and '=='

2006-03-28 Thread Joel Hedlund
 If it weren't for the current CPython optimization (caching small 
 integers) 

This has already been covered elsewhere in this thread. Read up on it.

 this code which it appears you would support writing
 
if (flags & os.R_OK) is os.R_OK:

I do not.

You compare a module.CONSTANT to the result of an expression (flags & os.R_OK). 
Expressions are not names bound to objects, the identity of which is what I'm 
talking about. This example does not apply. Also, the identity check in my 
example has a value equality fallback. Yours doesn't, so it really does not 
apply.

  (I think you should give it up... you're trying to push a rope.)

I'm not pushing anything. I just don't like being misquoted.

Cheers,
Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Difference between 'is' and '=='

2006-03-27 Thread Joel Hedlund
 is is like id(obj1) == id(obj2)
snip
 (Think of id as memory adresses.)

Which means that is comparisons in general will be faster than == 
comparisons. According to PEP8 (python programming style guidelines) you should 
use 'is' when comparing to singletons like None. I take this to also include 
constants and such. That allows us to take short cuts through known terrain, 
such as in the massive_computations function below:

--
import time

class LotsOfData(object):
    def __init__(self, *data):
        self.data = data
    def __eq__(self, o):
        time.sleep(2) # time consuming computations...
        return self.data == o.data

KNOWN_DATA = LotsOfData(1,2)
same_data = KNOWN_DATA
equal_data = LotsOfData(1,2)
other_data = LotsOfData(2,3)

def massive_computations(data = KNOWN_DATA):
    if data is KNOWN_DATA:
        return "very quick answer"
    elif data == KNOWN_DATA:
        return "quick answer"
    else:
        time.sleep(10) # time consuming computations...
        return "slow answer"

print "Here we go!"
print massive_computations()
print massive_computations(same_data)
print massive_computations(equal_data)
print massive_computations(other_data)
print "Done."
--

Cheers,
Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: OT: unix newbie questions

2006-03-26 Thread Joel Hedlund
  * I'm using the tcsh shell and have no problems with it, but bash seems
  more popular - any reason to change? (I don't intend writing many shell
  scripts)

You can do this in bash:
$ python myprog > stdout.txt 2> stderr.txt

and have output to sys.stdout and sys.stderr go in separate files. Quite handy 
for separating output and debugging comments. I believe that this is impossible 
in tcsh (where you only can do $ python myprog >& stdout_and_stderr.txt to 
catch stdout and stderr at the same time).

Also: bash_completions. It keeps track of arguments and options for commonly 
used programs and commands. If you type "cd " and hit tab for completions you 
will only see directories, since bash_completions knows that this is all cd 
accepts. Don't know if tcsh has anything similar.

Cheers,
Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python has a new Logo

2006-03-24 Thread Joel Hedlund
That's not constructive.

I'd like to quote Rich Teer on this subject:

   ___
   /|  /|  |  |
   ||__||  |  Please do   |
  /   O O\__ NOT  |
 /  \ feed the|
/  \ \ trolls |
   /   _\ \ __|
  /|\\ \ ||
 / | | | |\/ ||
/   \|_|_|/   \__||
   /  /  \|| ||
  /   |   | /||  --|
  |   |   |// |  --|
   * _|  |_|_|_|  | \-/
*-- _--\ _ \ //   |
  /  _ \\ _ //   |/
*  /   \_ /- | - |   |
  *  ___ c_c_c_C/ \C_c_c_c

/Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: New-style Python icons

2006-03-21 Thread Joel Hedlund
   http://www.doxdesk.com/img/software/py/icons.png

Neat!
/Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Become another user

2006-03-21 Thread Joel Hedlund
 Look at the os module: http://docs.python.org/lib/os-file-dir.html
 
 This has various functions you may find useful depending on how you
 want to go about it, such as chmod and chown.

Correct me if I'm wrong, but doesn't this also require having a little chat 
with the admin to set things up so the server has permission to give away files 
to Joakim? Say, putting them in a common group or something?

Just making the files world readable may not be the best option (which I 
believe is the only option otherwise).

Cheers!
/Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Counting nested loop iterations

2006-03-17 Thread Joel Hedlund
 for index, color in enumerate(color
   for animal in zoo
   for color in animal):
 # the something more goes here.
 pass

I've been thinking about these nested generator expressions and list 
comprehensions. How come we write:

a for b in c for a in b

instead of

a for a in b for b in c

More detailed example follows below.

I feel the latter variant is more intuitive. Could anyone please explain the 
fault of my logic or explain how I should be thinking about this? Or point me 
to somewhere where I can read up on this?

Cheers,
Joel Hedlund



More detailed example:
>>> c = [[1,4,8],[2,5,7]]
>>> [a for b in c for a in b]
[1, 4, 8, 2, 5, 7]
>>> del a,b,c
>>> c = [[1,4,8],[2,5,7]]
>>> [a for a in b for b in c]

Traceback (most recent call last):
  File "<pyshell#30>", line 1, in -toplevel-
    [a for a in b for b in c]
NameError: name 'b' is not defined



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Tried Ruby (or, what Python *really* needs or perldoc!)

2006-03-17 Thread Joel Hedlund
This release is as alpha as alpha gets. It's so alpha it
actually loops back around to zeta -- but it's a start, and I
think it's exactly what the Python community needs.
 
 
 Not to pick nits, but that should actually be ... so alpha that it actually
 loops back around to *OMEGA*.
 

I think he's using extended Greek++. That got seriously big around the late 
B.C's.

/Joel

Sorry for this garbage post, btw... Couldn't help myself... :-)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Counting nested loop iterations

2006-03-17 Thread Joel Hedlund
 a list comprehension works exactly like an ordinary for
 loop, except that the important thing (the expression) is moved to the
 beginning of the statement.

Right. Thanks!
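
For the record, here is that equivalence spelled out (a quick sketch of my own):

---
c = [[1, 4, 8], [2, 5, 7]]

# [a for b in c for a in b] is just this loop with the expression 'a' hoisted
# to the front; the for clauses keep their left-to-right (outer-to-inner) order.
result = []
for b in c:
    for a in b:
        result.append(a)

print result == [a for b in c for a in b]     # True
---
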
/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: andmap and ormap

2006-03-14 Thread Joel Hedlund
 footnote: if you have a recent Python 2.5 build,

Who would have that? Is it a good idea to use a pre-alpha python version? Or 
any unreleased python version for that matter? I was under the impression that 
the recommended way to go for meager developers in python like myself is to 
stick with the latest stable production release (2.4.2 at the time of writing I 
believe). Or should I start grabbing the Subversion trunk on a nightly basis?

Cheers!
/Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python IDE: great headache....

2006-03-13 Thread Joel Hedlund
 Anyone knows if this platform is a good one?

It's very good. It's comfortable, helpful and stable. Also looks good.

 Eclipse + Pydev does most, if not all, of your list - I am not sure what 
 you mean by conditional pause -  plus a whole lot more.  

Maybe he means conditional breakpoints? PyDev certainly has that.

 I like Eclipse, but lots of folks on the Python groups seem to hate it 
 with a passion.

Any ideas why?

 If you install Eclipse and try to use it without reading the Workbench 
 User Guide then you are not going to get anywhere.

Woah, easy now! I never read any Workbench User Guide and I'm doing just fine 
with PyDev. Fabio Zadrozny (PyDev developer) wrote an excellent startup guide 
for python programmers that includes installing and basic editing:

http://www.fabioz.com/pydev/manual_101_root.html

It's all I ever read and it was enough for me to get going with Eclipse + PyDev 
within 15 minutes on a WinXP machine. 

On a side note: with Ubuntulinux 5.10 it was more of a hassle, but that was 
just to get Eclipse running smoothly. I.e: an Eclipse/apt/Java problem. Once 
that was neatly in place, that guide above worked flawlessly.

Cheers!
/Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python IDE: great headache....

2006-03-13 Thread Joel Hedlund
 Sorry to offend, I was just extrapoloating from personal experience.

No worries, man. No offense taken :-)

 but I could not get going with Eclipse
 ...
 Even installing it the first time seemed to be a mystery.

Yeah I felt the same too when I first installed it. I had in fact given up 
using Eclipse, but then I found that starter guide I linked to in my last post. 
It really is excellent. It's thorough and to the point, and I really recommend 
it to people who are interested in PyDev.

Cheers,
Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: A bit OT: Python prompts display as nested mail quotes in Thunderbird

2006-03-10 Thread Joel Hedlund
 They already ARE plain text (I don't know of anyone submitting
 MIME/HTML enhanced content on this group).

I know.

 it would mean all other quoted text would not look quoted in your reader.

I.e. they would have '>' chars at line start. That is *exactly* what I want and 
what I asked in my post. I don't need colored lines in the margin to understand 
quotes.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: A bit OT: Python prompts display as nested mail quotes in Thunderbird

2006-03-10 Thread Joel Hedlund
 Do you have the Quote Colors extension? 

I do now. :-)

 You can also disable the use of colors in the options, but that will 
 remove the colors for all messages.

Or I can tell it to display colored '>' chars. Marvellous!

Thanks for the advice! You're a real help.

/Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: why use special config formats?

2006-03-10 Thread Joel Hedlund
I agree with Steve and I agree with Sybren. 

Also:
This is a Bad Idea, since you should never add more complexity than needed. 
Imports, computation, IO and so on are generally not needed for program 
configuration, so standard configfile syntax should therefore not allow it. 
Otherwise you may easily end up with hard-to-debug errors, or even worse - 
weird program behavior. 

/Joel

-- 
http://mail.python.org/mailman/listinfo/python-list


A bit OT: Python prompts display as nested mail quotes in Thunderbird

2006-03-09 Thread Joel Hedlund
Hi

Sorry to bother you with my OT problems, but my newsgroup reader (Thunderbird) 
displays explicitly written python prompts as triple nested mail quotes (with 
lines in alternating colors in the margins). That's pretty tiresome to look at 
in a python newsgroup. Therefore, I'm looking for a way to display messages 
from this newsgroup *In Plain Text*.

I tried View > Message Body As > Plain Text but that did nothing. 

I'm not sure what to search for, so I didn't get far with either Google or the 
Mozilla Thunderbird FAQ or newsgroup.

Has anyone else encountered/solved this problem? Should I get another newsgroup 
reader? In that case, which? I'm running Thunderbird 1.0.7 on Ubuntulinux 5.10.

Thank you for your time,
Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Inter-module globals

2006-03-09 Thread Joel Hedlund
 Use a module and a class variables for that.

I think we could manage a little example too ;-) This one is a fictitious 
little game project where the user can define a custom graphics directory on 
the command line. 

Three files: game.py, graphics.py and common.py. 

The common.py file contains that class with class variables that jorge was 
talking about. Other modules import this as needed.

The contents of the files follow below.

So:
$ python game.py
/usr/local/mygame/data/gfx
$ python game.py --gfx-dir moo/cow
/usr/local/mygame/moo/cow
$ python game.py --gfx-dir /moo/cow
/moo/cow

Hope it helps!
/Joel Hedlund



game.py:

#!/usr/bin/python

import sys

import graphics
from common import Settings

try:
    i = sys.argv.index('--gfx-dir')
except ValueError:
    pass
else:
    Settings.graphics_dir = sys.argv[i + 1]

print graphics.graphics_dir()




common.py:

class Settings(object):
    game_dir = '/usr/local/mygame'
    graphics_dir = 'data/gfx'




graphics.py:

import os

from common import Settings

def graphics_dir():
    return os.path.join(Settings.game_dir, Settings.graphics_dir)








Jorge Godoy wrote:
 Anton81 [EMAIL PROTECTED] writes:
 
 
I want to use globals that are immediately visible in all modules. My
attempts to use global haven't worked. Suggestions?
 
 
 Use a module and a class variables for that.  Import your module and
 read/update class variables as you need them.
 
 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Checking function calls

2006-03-08 Thread Joel Hedlund
If you have access to the source of the function you want to call (and know 
what kind of data types it wants in its args) you can raise something else for 
bad parameters, eg:


class ArgTypeError(TypeError):
    def __init__(self, arg):
        self.arg = arg

    def __str__(self):
        return "bad type for argument %r" % self.arg

def moo(cow):
    if not isinstance(cow, str):
        raise ArgTypeError('cow')
    print "%s says moo!" % cow
    # this error is not caused by wrong arg type:
    var = 1 + 's'

function = moo

for arg in [1, 'rose']:
    try:
        function(arg)
    except ArgTypeError, e:
        print e


Output:

bad type for argument 'cow'
rose says moo!

Traceback (most recent call last):
  File "/merlot1/yohell/eraseme/test.py", line 23, in -toplevel-
    make_noise('rose')
  File "/merlot1/yohell/eraseme/test.py", line 13, in moo
raise TypeError
TypeError


Hope it helps
/Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: help in converting perl re to python re

2006-03-03 Thread Joel Hedlund
Hi


 the perl code finds a line that matches something like
 tag1sometext\tag1 in the line and then assign $variable the value
 of  sometext

No, but if you use a closing /tag1 instead of \tag1 it does. You had me 
scratching my head for a while there. :-)

This should do it in python:


#!/usr/bin/python

import re

regexp = re.compile(r"(tag1)(.*)/\1")
line = "tag1sometext/tag1"
match = regexp.search(line) 

if match:
    variable = match.group(2)



Good luck!
/Joel Hedlund


[EMAIL PROTECTED] wrote:
 hi
 
 i have some regular exp code in perl that i want to convert to python.
 
 
 if $line =~ m#(tag1)(.*)/\1#
{
  $variable = $2;
 }
 
 the perl code finds a line that matches something like
 
 tag1sometext\tag1 in the line and then assign $variable the value
 of  sometext
 
 how can i do an equivalent of that using re module?
 thanks
 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: help in converting perl re to python re

2006-03-03 Thread Joel Hedlund
 I'd go for
 regexp = re.compile(r"(tag1)(.*?)/\1")

Indeed. I second that.
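
For anyone reading along later, here is why the non-greedy version matters once 
a line can hold more than one tag pair (the sample line is made up):

---
import re

line = "tag1one/tag1 tag1two/tag1"
greedy = re.compile(r"(tag1)(.*)/\1")
lazy   = re.compile(r"(tag1)(.*?)/\1")

print greedy.search(line).group(2)   # 'one/tag1 tag1two' -- swallows too much
print lazy.search(line).group(2)     # 'one'
---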

/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: editor for Python on Linux

2006-02-21 Thread Joel Hedlund
I really think that IDLE is one of the best around in Python source editing.
 
 For me, I find that IDLE is about the worse for editing Python sources.

Worse? Now that's harsh. I'm with billie on this one. I usually spend a day 
or so every 3 months trying to find a free python editor that surpasses IDLE. 
I've been doing it for 3 years now and for me, IDLE is still king of the hill. 

What I like about IDLE is that it is configurable, intuitive and rock stable, 
and I've come to realize that combination is rare indeed in the world of free 
editors. 

Another pro for IDLE is that you probably already have it installed, since it 
comes included in the standard python releases. If you decide to give IDLE a go 
you might also want to check out the latest subversion version of IDLE, since 
it has a bunch of really useful syntax helper updates.

Cheers!
/Joel Hedlund
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Best way of finding terminal width/height?

2006-02-15 Thread Joel Hedlund
 Sure.  I was going to do that yesterday, but I realized that I
 didn't know how/where to do it.  I assume there's a link
 somewhere at www.python.org, but I haven't had a chance to look
 yet.

It's already reported to the bug tracker:

http://www.python.org/sf/210599

Apparently, this has been around since 1.5.2. It's in fact also in the feature 
request PEP right now: 

http://www.python.org/peps/pep-0042.html

and it's listed as a Big project under the Standard library heading. I 
assume we're going to have to live with this for a while yet...

Take care everyone!
/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What editor shall I use?

2006-02-13 Thread Joel Hedlund
Lad wrote:
 What editor shall I use if my Python script must contain utf-8
 characters?

Also, don't overlook IDLE, the IDE that ships with python. I use it in my work 
every day. Once every three months or so I invest a day in looking for a better 
free python IDE/editor, and still after 3 years or so, I still haven't found 
anyhing that beats it. 

Good luck!
/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Best way of finding terminal width/height?

2006-02-09 Thread Joel Hedlund
 It didn't insert an EOF, it just caused read() to return
 prematurely.  You should call read() again until it receives
 a _real_ EOF and returns ''.

Copy that. Point taken.

 There appear to be a couple problems with this description:

  1) It says that read() in blocking mode without a size
 parameter it will read until EOF.  This is not what happens
 when reading a terminal that receives SIGWINCH, so you're
 right: read() it isn't working as described.

  2) It also says that it makes sense to continue to read a tty
 after you get an EOF.  That's not true.  Once you get an
 EOF on a tty, there's no point in reading it any more:
 you'll continue to get an EOF forever.

Should I post a bug about this?

/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Best way of finding terminal width/height?

2006-02-09 Thread Joel Hedlund
I realised there was funny indentation mishap in my previus email. My mail prog 
indented the following line 

signal.signal(signal.SIGWINCH, report_terminal_size_change)

while in fact in IDLE it's unindented, which makes a lot more sense:

signal.signal(signal.SIGWINCH, report_terminal_size_change)

Sorry about that.

/Joel

Joel Hedlund wrote:
 You might want to try just setting a flag in the signal handler
 to see if that prevents the I/O operations on stdin/stdout from
 being interrupted.
 
 
 Tried this:
 
 source
 
 import signal, os, sys
 from terminal_info import get_terminal_size
 
 terminal_size = get_terminal_size()
 
 _bTerminalSizeChanged = False
 
 def report_terminal_size_change(signum, frame):
global _bTerminalSizeChanged
_bTerminalSizeChanged = True
 
 def update_terminal_size():
global _bTerminalSizeChanged, terminal_size
terminal_size = get_terminal_size()
_bTerminalSizeChanged = False
signal.signal(signal.SIGWINCH, report_terminal_size_change)
 
 while True:
# Do lots of IO (I'm trying to provoke exceptions with signal)
open('/a/large/file').read()
#raw_input()
#sys.stdin.read()
#print open('/a/large/file').read()
   if _bTerminalSizeChanged:
update_terminal_size()
print terminal_size
 
 /source
 
 As before, the only IO case above that doesn't throw exceptions is the 
 uncommented one.
 
 Yup, that's the exception.  Standard practice is to catch it and
 retry the I/O operation.
 
 
 Hmm... I guess it's not that easy to retry IO operations on pipes and 
 streams (stdin/stdout in this case)... And I tend to lean pretty heavily 
 on those since I usually write UNIX style text filters.
 
 So in case I haven't missed something fundamental I guess my best option 
 is to accept defeat (of sorts :-) and be happy with picking a terminal 
 width at program startup.
 
 But anyway, it's been really interesting trying this out.
 Thank you Grant (and Jorgen) for all help and tips!
 /Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Replacing curses (Was: Re: Problem with curses and UTF-8)

2006-02-09 Thread Joel Hedlund
 The code for handling window resizing isn't jumping out at 
 me but I'll keep looking.

(...jumping out, rather unexpectedly!)

You might be interested in an ongoing discussion that I and Grant Edwards are 
holding in this newsgroup on the subject Best way of finding terminal 
width/height?.

Thread summary:

There's a function that Chuck Blake wrote for detecting the current terminal 
size, available here:

http://pdos.csail.mit.edu/~cblake/cls/cls.py

You'll only need the first two functions for this task (ioctl_GWINSZ and 
terminal_size).

To monitor changes in window size, have a look at the signal module (in 
standard library). You can attach a monitor function to the signal 
signal.SIGWINCH, like so:

signal.signal(signal.SIGWINCH, report_terminal_size_change)

or, in context:

---
#!/usr/bin/python
from cls import terminal_size

current_terminal_size = terminal_size()

_bTerminalSizeChanged = False

def report_terminal_size_change(signum, frame):
global _bTerminalSizeChanged
_bTerminalSizeChanged = True

def update_terminal_size():
global _bTerminalSizeChanged, current_terminal_size
current_terminal_size = terminal_size()
_bTerminalSizeChanged = False

signal.signal(signal.SIGWINCH, report_terminal_size_change) 

while True:
if _bTerminalSizeChanged:
update_terminal_size()
print current_terminal_size
--

Take care!
/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Best way of finding terminal width/height?

2006-02-08 Thread Joel Hedlund

 sys.stdin.read() will return when ... the
 underlying read() call is aborted by a signal.

Not return, really? Won't it just pass an exception? I thought that was what 
I was catching with the except IOError part there? I assumed that 
sys.stdin.read() would only return a value properly at EOF?

It looks to me as if sys.stdin.read() really gets an EOF at the final 
linebreak fed into the terminal prior to window size change, because the final 
unterminated line shows up on my shell prompt. Like so:

$ python winch.py
moo moo
cow cowmoo moo
$ cow cow

In this example I type moo moo[ENTER]cow cow on my keyboard and then resize the 
window.

Now that EOF has to come from somewhere (since there's no IOError or other 
exception, or the program wouldn't terminate nicely with nothing on stderr) and 
I'd like to point the blame at the terminal. Or is there something really fishy 
inside sys.stdin.read() or signal that puts EOFs into streams?

But anyway, as long as this behavior only shows up on interactive operation, 
the user will likely spot it anyway and can react to it. So I think this code 
would be pretty safe to use. What do you think?

 Resizing the terminal will abort pending I/O operations on
 that terminal.  It won't terminate I/O operations pending on
 other devices/files.

What do you mean by abort? I can accept that aborting may lead to raising 
of IOError (which we can catch and retry), but not to arbitrary insertion of 
EOFs into streams (which we cannot distinguish from the real deal coming from 
the user).

Also: try your example and enter moo moo[ENTER]cow cow[RESIZE][ENTER][RESIZE]

Thanks again for your help.
/Joel

 
 
but it does return as soon as I enter more than one
line of text and then resize the window (one unterminated line
is ok). 

Example text to type in:
moo moo
cow cow

As soon as I have typed in something that includes a newline
character through the keyboard and try to resize the terminal,
sys.stdin.read() will return whatever I put in so far, and no
exception is raised.
 
 
 Yup.  That does indeed appear to be the way it works. :)
 
 
Weird. Could it in fact be my terminal that's screwing things up
for me?
 
 
 No.
 
 Try this out:
 
 --
 #!/usr/bin/python
 import signal, os, sys
 
 _bTerminalSizeChanged = False
 
 def report_terminal_size_change(signum, frame):
 global _bTerminalSizeChanged
 _bTerminalSizeChanged = True
 
 signal.signal(signal.SIGWINCH, report_terminal_size_change)
 
 while True:
 try:
 s = sys.stdin.read()
 if not s:
 break
 sys.stdout.write(s)
 except IOError:
 sys.stderr.write(IOError\n)
 if _bTerminalSizeChanged:
 sys.stderr.write(SIGWINCH recevied\n)
 _bTerminalSizeChanged = False
 --
 
 In that example, I handle IOError on write with the same
 exception handler as the one for read.  That may not be exactly
 what you want to do, but it does demonstrate what
 window-resizing does.
 
 When the window is resized, the SIGWINCH handler will be
 called.  A pending read() may abort with an IOError, or it may
 just return some buffered data.
 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Best way of finding terminal width/height?

2006-02-07 Thread Joel Hedlund
 You might want to try just setting a flag in the signal handler
 to see if that prevents the I/O operations on stdin/stdout from
 being interrupted.

Tried this:

source

import signal, os, sys
from terminal_info import get_terminal_size

terminal_size = get_terminal_size()

_bTerminalSizeChanged = False

def report_terminal_size_change(signum, frame):
global _bTerminalSizeChanged
_bTerminalSizeChanged = True

def update_terminal_size():
global _bTerminalSizeChanged, terminal_size
terminal_size = get_terminal_size()
_bTerminalSizeChanged = False

signal.signal(signal.SIGWINCH, report_terminal_size_change)

while True:
# Do lots of IO (I'm trying to provoke exceptions with signal)
open('/a/large/file').read()
#raw_input()
#sys.stdin.read()
#print open('/a/large/file').read()

if _bTerminalSizeChanged:
update_terminal_size()
print terminal_size

/source

As before, the only IO case above that doesn't throw exceptions is the 
uncommented one. 

 Yup, that's the exception.  Standard practice is to catch it and
 retry the I/O operation.

Hmm... I guess it's not that easy to retry IO operations on pipes and streams 
(stdin/stdout in this case)... And I tend to lean pretty heavily on those since 
I usually write UNIX style text filters.

So in case I haven't missed something fundamental I guess my best option is to 
accept defeat (of sorts :-) and be happy with picking a terminal width at 
program startup.

But anyway, it's been really interesting trying this out. 

Thank you Grant (and Jorgen) for all help and tips!
/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Best way of finding terminal width/height?

2006-02-07 Thread Joel Hedlund
 You just call the failed read() or write() again.  Unless
 there's some way that the read/write partially succeeded and
 you don't have any way to know how many bytes were
 read/written, If that's the case then Python's file object
 read and write would appear to be broken by design.

Wow... I tried to set up an example that would fail, and it didn't. It seems 
the test only fails if I use the keyboard to cram stuff into stdin, and not if 
stdin is a regular pipe. 

Could this perhaps be some kind of misbehavior on behalf of my terminal 
emulator (GNOME Terminal 2.12.0 in Ubuntulinux 5.10)?

Example follows:

winch.py

import signal, os, sys
from terminal_info import get_terminal_size

terminal_size = get_terminal_size()

_bTerminalSizeChanged = False

def report_terminal_size_change(signum, frame):
global _bTerminalSizeChanged
_bTerminalSizeChanged = True

def update_terminal_size():
global _bTerminalSizeChanged, terminal_size
terminal_size = get_terminal_size()
_bTerminalSizeChanged = False

signal.signal(signal.SIGWINCH, report_terminal_size_change)

# Retry IO operations until successful.
io_successful = False
while not io_successful:
try:
s = sys.stdin.read()
io_successful = True
except IOError:
pass

io_successful = False
while not io_successful:
try:
sys.stdout.write(s)
io_successful = True
except IOError:
pass

/winch.py

Then run the prog and pipe a large chunk of text into stdin, and redirect 
stdout to a file:

$ cat /a/large/text/file | python winch.py  copy.of.the.large.file

Now, what happens for me is exactly what I wanted. I can resize the window as 
much as I like, and a diff 

$ diff /a/large/text/file copy.of.the.large.file

comes up empty. A perfectly good copy.

However, if I do this instead (try to use keyboard to push stuff into stdin):

$ python winch.py  copy.of.the.large.file

I expect python not to return until I press Ctrl-D on my keyboard, but it does 
return as soon as I enter more than one line of text and then resize the window 
(one unterminated line is ok). 

Example text to type in:
moo moo
cow cow

As soon as I have typed in something that includes a newline character through 
the keyboard and try to resize the terminal, sys.stdin.read() will return 
whatever I put in so far, and no exception is raised.

Weird. Could it in fact be my terminal that's screwing things up for me?

/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Best way of finding terminal width/height?

2006-02-06 Thread Joel Hedlund
Thank you for a very quick, informative and concise response.

 BTW: don't forget to attach a handler to the window-size-change
 signal (SIGWINCH) so that you know when your terminal changes sizes

Do you mean something like this?

import signal, os
# terminal_info contains the example from my first post
from terminal_info import get_terminal_size

TERMINAL_SIZE = get_terminal_size()

def update_terminal_size(signum, frame):
global TERMINAL_SIZE
TERMINAL_SIZE = get_terminal_size()

signal.signal(signal.SIGWINCH, update_terminal_size)

while True:
# Do lots of IO (fishing for exceptions...)
open('/a/large/file').read()
print TERMINAL_SIZE

The docs for the signal module (http://docs.python.org/lib/module-signal.html) 
say that 

When a signal arrives during an I/O operation, it is possible that the I/O 
operation raises an exception after the signal handler returns. This is 
dependent on the underlying Unix system's semantics regarding interrupted 
system calls.


So this is what I'm trying to provoke in the final while loop. In this case I 
get no exceptions (hooray!). However, if I replace open('/a/large/file').read() 
with raw_input() I get EOFError (no errmsg), and even worse, if I replace it 
with sys.stdin.read() or even print open('/a/large/file').read() I get IOError: 
[Errno 4] Interrupted system call.

I do lots of IO in my work, and primarily with gigantic text files (welcome to 
bioinformatics :-). Protecting my code from this sort of error (i believe) will 
be quite hard, and probably won't look pretty. Or am I missing something?

Note though that these realtime updates aren't essential to me at the moment. 
Basically all I need is to find out for each run how much space I have so I can 
text wrap command line help text (as in myprog --help) as user friendly as 
possible. 
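
Concretely, something like this (the help text is just a placeholder, and the 
width would come from the get_terminal_size() call above):

---
import textwrap

def print_help(text, cols = 80):
    # cols would come from get_terminal_size(); 80 is just a fallback here.
    print textwrap.fill(text, width = cols - 2)

print_help("myprog reads records on stdin and writes a tab separated "
           "summary on stdout. Use --columns to select which fields "
           "to include in the output.", cols = 72)
---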

I run Ubuntu 5.10 btw, but I try to stay as cross-platform as I can. My 
priorities are Ubuntu, other linuxes, other unixes, Win, Mac.

 Homey don't do Windows, so you'll have to ask somebody else that.

:-)

 I don't know what you mean by leverage the termios module.

I have a hunch that we have 100% overlap there,
 
 No comprende.

I mean: I believe that for those environments where $COLS and $ROWS are set
then python will probably have access to the termios module as well, and for 
those environments that don't have $COLS and $ROWS set then python probably 
will not have access to the termios module either. So, in the latter case I'm 
back to square one, which is arbitrary guesswork.

  1) Just write a normal Unix-like text processing filter. 

Yes that's what I normally do. But I also like to hook them up with a --help 
option that shows usage and options and such, and I like that warm fuzzy 
feeling you get from seeing really readable user friendly help... :-)

 I would guess that the real answer is that so few people have
 ever wanted it that nobody has ever created a module and
 submitted it.  Feel free...  ;) 

I just might. I've got some stuff that people may benefit from (or possibly 
hate, I don't know ;-). If I ever sum up the courage to publish it, would it be 
a good idea to post the modules to python-dev@python.org, or is there some 
better route?

Thanks again for your time!
/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list


Best way of finding terminal width/height?

2006-02-05 Thread Joel Hedlund
Hi all!

I use python for writing terminal applications and I have been bothered 
by how hard it seems to be to determine the terminal size. What is the 
best way of doing this?

At the end I've included a code snippet from Chuck Blake 'ls' app in 
python. It seems to do the job just fine on my comp, but regrettably, 
I'm not sassy enough to wrap my head around the fine grain details on 
this one. How cross-platform is this? Is there a more pythonic way of 
doing this? Say something like:

from ingenious_module import terminal_info
cols, rows = terminal_info.size()

Thanks for your time (and thanks Chuck for sharing your code!)
/Joel Hedlund
IFM Bioinformatics
Linköping University

Chuck Blake's terminal_size code snippet:
(from http://pdos.csail.mit.edu/~cblake/cls/cls.py).

import os   # needed below; added so this excerpt runs on its own

def ioctl_GWINSZ(fd):                   # TABULATION FUNCTIONS
    try:                                ### Discover terminal width
        import fcntl, termios, struct, os
        cr = struct.unpack('hh',
                           fcntl.ioctl(fd, termios.TIOCGWINSZ, '1234'))
    except:
        return None
    return cr

def terminal_size():
    ### decide on *some* terminal size
    # try open fds
    cr = ioctl_GWINSZ(0) or ioctl_GWINSZ(1) or ioctl_GWINSZ(2)
    if not cr:
        # ...then ctty
        try:
            fd = os.open(os.ctermid(), os.O_RDONLY)
            cr = ioctl_GWINSZ(fd)
            os.close(fd)
        except:
            pass
    if not cr:
        # env vars or finally defaults (os.environ used here so the
        # excerpt is self-contained)
        try:
            cr = (os.environ['LINES'], os.environ['COLUMNS'])
        except:
            cr = (25, 80)
    # reverse rows, cols
    return int(cr[1]), int(cr[0])
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Best way of finding terminal width/height?

2006-02-05 Thread Joel Hedlund
  Which details?  We'd be happy to explain the code. Not that
  you need to understand the details to use the code.

OK, why '1234' in here, and what's termios.TIOCGWINSZ, and how should I 
have known this was the way too do it?
fcntl.ioctl(fd, termios.TIOCGWINSZ, '1234')

Am I interpreting C structs here, and if so - why is python giving me C 
structs? And what's 'hh' anyway?
struct.unpack('hh', ... )

Why 0, 1 and 2?
cr = ioctl_GWINSZ(0) or ioctl_GWINSZ(1) or ioctl_GWINSZ(2)

  I don't know if it will work on MS Windows or not.

Linux and unix are my main concerns, but it would be neat to know if it 
would work on Win/Mac.

What OS:es set the COLS/ROWS env vars? What OS:es can leverage the 
termios module? I have a hunch that we have 100% overlap there, and then 
this solution leaves me at square one (arbitrary choice of 80*25) for 
the others OS:es (or am I wrong?). How do Win/Mac people do this?

  What's unpythonic about the example you found?

Maybe I did a bit of poor wording there, but in my experience python 
generally has a high level of abstraction, which provides linguistically 
appealing (as in "in english") solutions to almost any problem. Like for 
example how os.path.isfile(s) tells me if my string s corresponds to a 
file. I guess that's what I mean really. I sort of expected to find 
something like my terminal_size() example in the built-in modules. I 
didn't expect to have to do that struct fcntl ioctl boogey to solve this 
relatively simple (?) problem.

Thanks for your help!
/Joel
-- 
http://mail.python.org/mailman/listinfo/python-list