Re: fast kdtree tree implementation for python 3?

2010-09-11 Thread _wolf
> Do you know about the kdtree implementation in biopython? I don't know
> if it is already available for Python 3, but for me it worked fine in
> Python 2.X.

i heard they use a brute-force approach and it's slow. that's just
rumors alright. also, judging from the classes list on
http://www.biopython.org/DIST/docs/api/module-tree.html, you will see
you can probably tune in to the latest radio moscow news using it. way
too much for my needs, i just want to find the nearest neighbor on a
2D-plane. but thanks for the suggestion.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: fast kdtree tree implementation for python 3?

2010-09-11 Thread _wolf
> Since you're looking for an implementation, I guess you won't be the one
> volunteering to maintain such code in the stdlib, would you?

this is indeed a problem. i am probably not the right one for this
kind of task.

however, i do sometimes feel like the standard library carries too
much cruft from yesteryear. things like decent image and sound
manipulation, fuzzy string comparison, fast asynchronous HTTP serving
and requesting are definitely things i believe a 2010 programming
language with batteries included should strive to provide.

one avenue to realize this goal could be to prioritize the packages on
pypi. pypi is basically a very good idea and has made things like
finding and installing packages much easier. however, it is also
organized like a dump pile: there are centuries-old packages there
that few people ever use.

i suggest adding aging, popularity, and community prioritization
(where people vote for essential and relevant solutions) to the
standard library as well as to pypi; in other words, to unify the two.
aging cuts both ways: many old packages are good ones, but they often
display a crude form of inner organization; conversely, a library that
has not been updated for a long time is unlikely to be a good answer
to your problem. batteries included is a very good idea, but there are
definitely some old and leaky batteries in there. sadly, since the
standard library modules are always included in each installation,
there are no figures on how much they are actually needed. one would
guess that, were such figures available, the aifc library would come
near the end of a ranked listing.

if the community manages, by download figures and voting, to classify
packages, a much clearer picture could emerge about their relative
importance.

one could put python packages into:

* Class A: all those packages without which python would not run (such
as sys and site);

* Class B ('basics'): officially maintained packages;

* Class C ('community'): packages that are deemed important or
desirable and which are open for community contributions (to make it
likely they get updated soon enough whenever needed);

* Class D ('debut'): all packages submitted to pypi and favorably
tested, reviewed, and found relevant by a certain number of people;

* Class E ('entry'): all packages submitted or found elsewhere on the
web, but not yet approved by the community;

* Class F ('failure'): all packages that were proposed but never
produced code, and all packages known to be a bad idea to use (see the
discussion going on at http://pypi.python.org/pypi/python-cjson).
Class F can help people avoid going down the wrong path when choosing
software.

well this goes far beyond the kdtree question. maybe i'll make it a
proposal for a PEP.


-- 
http://mail.python.org/mailman/listinfo/python-list


fast kdtree tree implementation for python 3?

2010-09-11 Thread _wolf
does anyone have a suggestion for a ready-to-go, fast kdtree
implementation for python 3.1 and up, for nearest-neighbor searches? i
used to use the one from numpy/scipy, but find it a pain to install
for python 3. also, i'm trying to wrap the code from 
http://code.google.com/p/kdtree/
using cython, but i'm still getting errors.

i wish stuff like kdtree, levenshtein edit distance and similar things
were available in the standard library.
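
for reference, here is a minimal pure-python sketch of the kind of
2d nearest-neighbor search i mean (plain nested tuples, no claim to
scipy-level speed):

def build( points, depth=0 ):
    # nested-tuple kd-tree node: ( point, left subtree, right subtree )
    if not points:
        return None
    axis   = depth % 2
    points = sorted( points, key=lambda p: p[ axis ] )
    mid    = len( points ) // 2
    return ( points[ mid ],
             build( points[ :mid ],     depth + 1 ),
             build( points[ mid + 1: ], depth + 1 ) )

def dist( a, b ):
    return ( ( a[ 0 ] - b[ 0 ] ) ** 2 + ( a[ 1 ] - b[ 1 ] ) ** 2 ) ** 0.5

def nearest( node, target, depth=0, best=None ):
    if node is None:
        return best
    point, left, right = node
    if best is None or dist( point, target ) < dist( best, target ):
        best = point
    axis      = depth % 2
    near, far = ( left, right ) if target[ axis ] < point[ axis ] else ( right, left )
    best      = nearest( near, target, depth + 1, best )
    # only search the far side if the splitting plane is closer than the best hit
    if abs( target[ axis ] - point[ axis ] ) < dist( best, target ):
        best = nearest( far, target, depth + 1, best )
    return best

tree = build( [ ( 2, 3 ), ( 5, 4 ), ( 9, 6 ), ( 4, 7 ), ( 8, 1 ), ( 7, 2 ) ] )
print( nearest( tree, ( 9, 2 ) ) )   # -> (8, 1)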
-- 
http://mail.python.org/mailman/listinfo/python-list


strange syntax error

2010-06-04 Thread _wolf
this may not be an earth-shattering deficiency of python, but i still
wonder about the rationale behind the following behavior: when i
run ::

  source = """
  print( 'helo' )
  if __name__ == '__main__':
      print( 'yeah!' )

  #"""

  print( compile( source, '', 'exec' ) )

i get ::

File "", line 6
  #
  ^
  SyntaxError: invalid syntax

i can avoid this exception by (1) deleting the trailing ``#``; (2)
deleting or commenting out the ``if __name__ == '__main__':\n
print( 'yeah!' )`` lines; (3) adding a newline to the very end of the
source.

moreover, if i have the source end without a trailing newline right
behind the ``print( 'yeah!' )``, the source will also compile without
error.
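
for what it's worth, workaround (3) is easy to automate: just make
sure the string handed to ``compile()`` ends with a newline. a small
sketch ::

source = """
print( 'helo' )
if __name__ == '__main__':
    print( 'yeah!' )

#"""

code = compile( source + '\n', '<string>', 'exec' )  # the extra newline avoids the SyntaxError
exec( code )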

i could also reproduce this behavior with python 2.6, so it’s not new
to the 3k series.

i find this error to be highly irritating, all the more since when i
put the above source inside a file and execute it directly, or have it
imported, no error will occur—which is the expected behavior.

a ``#`` (hash) outside a string literal should always represent the
start of a (possibly empty) comment in a python source; moreover, the
presence or absence of an ``if __name__ == '__main__'`` clause should
not change the interpretation of a source on a syntactical level.

can anyone reproduce the above problem, and/or comment on the
phenomenon?

cheers
-- 
http://mail.python.org/mailman/listinfo/python-list


how to do asynchronous http requests with epoll and python 3.1

2010-03-24 Thread _wolf

i asked this question before on
http://stackoverflow.com/questions/2489780/how-to-do-asynchronous-http-requests-with-epoll-and-python-3-1
but without a definitive answer as yet.

can someone help me out? i want to do several simple http GET and POST
requests in the same process using Python 3.1 without using threading.


the original post:

there is an interesting page
[http://scotdoyle.com/python-epoll-howto.html][1] about how to do
asynchronous / non-blocking / AIO http serving in python 3.

there is the [tornado web server][2] which does include a non-blocking
http client. i have managed to port parts of the server to python 3.1,
but the implementation of the client requires [pyCurl][3] and [seems
to have problems][4] (with one participant stating how ‘Libcurl is
such a pain in the neck’, and looking at the incredibly ugly pyCurl
page i doubt pyCurl will arrive in py3+ any time soon).

now that epoll is available in the standard library, it should be
possible to do asynchronous http requests out of the box with python.
i really do not want to use asyncore or whatnot; epoll has a
reputation for being the ideal tool for the task, and it is part of
the python distribution, so using anything but epoll for non-blocking
http is highly counterintuitive (prove me wrong if you feel like it).
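
to make that concrete, here is roughly the shape i have in mind: a
single GET request driven by epoll alone (a bare sketch using only
the standard library; example.com stands in for a real host, and a
real client would need proper error handling and request framing):

import socket, select

request = b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n"

sock = socket.socket( socket.AF_INET, socket.SOCK_STREAM )
sock.setblocking( False )
sock.connect_ex( ( 'example.com', 80 ) )  # returns immediately on a non-blocking socket

epoll = select.epoll()
epoll.register( sock.fileno(), select.EPOLLOUT )

response = b''
done     = False
while not done:
    for fileno, event in epoll.poll( 1 ):
        if event & select.EPOLLIN:
            chunk = sock.recv( 4096 )
            if chunk:
                response += chunk
            else:                                  # server closed the connection: done
                epoll.unregister( fileno )
                sock.close()
                done = True
        elif event & select.EPOLLOUT:
            sock.send( request )                   # assumes the request fits into one send
            epoll.modify( fileno, select.EPOLLIN )
        elif event & ( select.EPOLLHUP | select.EPOLLERR ):
            epoll.unregister( fileno )
            sock.close()
            done = True

print( response.split( b'\r\n' )[ 0 ] )            # the status line, as bytes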

oh, and i feel threading is horrible. no threading. i use
[stackless][5].

*people further interested in the topic of asynchronous http should
not miss out on this [talk by peter portante at PyCon2010][6]; also of
interest is [the keynote][7], where speaker antonio rodriguez at one
point emphasizes the importance of having up-to-date web technology
libraries right in the standard library.*


  [1]: http://scotdoyle.com/python-epoll-howto.html
  [2]: http://www.tornadoweb.org/
  [3]: http://pycurl.sourceforge.net/
  [4]:
http://groups.google.com/group/python-tornado/browse_thread/thread/276059a076593266/c49e8f834497271e?lnk=gst&q=httpclient+trouble+with+epoll#c49e8f834497271e
  [5]: http://stackless.com/
  [6]: http://python.mirocommunity.org/video/1501/pycon-2010-demystifying-non-bl
  [7]: http://python.mirocommunity.org/video/1605/pycon-2010-keynote-relentlessl
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is it possible to use re2 from Python?

2010-03-24 Thread _wolf

yes we can! http://github.com/facebook/pyre2

as pointed out by http://stackoverflow.com/users/219162/daniel-stutzbach

now gotta go and try it out.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is it possible to use re2 from Python?

2010-03-14 Thread _wolf

> There's a recent thread about this on the python-dev list,

pointers? i searched but didn’t find anything.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is it possible to use re2 from Python?

2010-03-14 Thread _wolf

i am afraid that thread goes straight perpendicular to what re2 is
supposed to be, or do. my suggestion for these folks would be to
create a new, clean interface to stop the violence that comes with the
Python ``re`` interface, and open the thing up so one can plug in
``re`` implementations as needed. when i absolutely need a feature, i
can always fall back to the slower engine; simpler regular
expressions could be dealt with more efficiently.
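
to make that concrete, the pluggable idea could look roughly like
this (a sketch only; it assumes a drop-in engine module named
``re2``, like the pyre2 wrapper mentioned elsewhere in this thread,
and falls back on any compile error):

import re

try:
    import re2                              # assumed drop-in wrapper around the re2 engine
except ImportError:
    re2 = None

def compile_pattern( pattern ):
    if re2 is not None:
        try:
            return re2.compile( pattern )   # fast Thompson-NFA engine
        except Exception:
            pass                            # back-references etc.: not supported by re2
    return re.compile( pattern )            # slower, but feature-complete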

-- 
http://mail.python.org/mailman/listinfo/python-list


Is it possible to use re2 from Python?

2010-03-14 Thread _wolf

i just discovered http://code.google.com/p/re2, a promising library
that uses a long-neglected way (Thompson NFA) to implement a regular
expression engine that can be orders of magnitude faster than the
available engines of awk, Perl, or Python.

so i downloaded the code and did the usual sudo make install thing.
however, that action seemingly did little more than add
/usr/local/include/re2/re2.h to my system. there seemed to be some
``*.a`` file in addition, but then what is it with this ``*.a``
extension?

i would like to use re2 from Python (preferably Python 3.1) and was
excited to see files like make_unicode_groups.py in the distro (maybe
just used during the build process?). those however were not deployed
on my machine.

how can i use re2 from Python?

(this message appeared before under
http://stackoverflow.com/questions/2439345/is-it-possible-to-use-re2-from-python
and, even earlier, http://groups.google.com/group/re2-dev/t/59b78327ec3cca0a)
-- 
http://mail.python.org/mailman/listinfo/python-list


whassup? builtins? python3000? Naah can't be right?

2010-01-31 Thread _wolf
dear pythoneers,

i would very gladly accept any commentaries about what this sentence,
gleaned from
http://celabs.com/python-3.1/reference/executionmodel.html, is meant
to mean, or why the gods have decided this is the way to go. i
anticipate this guy named Kay Schluehr will have a say on that, or
maybe even the BDFL will care to pronounce ``__builtins__`` the
correct way to his fallovers, followers, and fellownerds::

  The built-in namespace associated with the execution of
  a code block is actually found by looking up the name
  __builtins__ in its global namespace; this should be a
  dictionary or a module (in the latter case the module’s
  dictionary is used). By default, when in the __main__
  module, __builtins__ is the built-in module builtins;
  when in any other module, __builtins__ is an alias for
  the dictionary of the builtins module itself.
  __builtins__ can be set to a user-created dictionary to
  create a weak form of restricted execution.

it used to be the case that there were at least two distinct terms,
‘builtin’ (in the singular) and ‘builtins’ (in the plural), some of
which existed both in module and in dict form (?just guessing?). now
there is only ‘builtins’, so fortunately the ambivalence between
singular and plural has gone—good riddance.

but why does ``__builtins__`` change its meaning depending on whether
this is the scope of the ‘script’ (i.e. the module whose name was
present, when calling ``python foobar.py``) or whether this is the
scope of a secondary module (imported or executed, directly or
indirectly, by ``foobar.py``)? i cannot understand the reasoning
behind this and find it highly confusing.
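
a quick check makes the quoted behaviour visible (a cpython detail,
so take this as a sketch only; ``probe.py`` is a throwaway helper
written on the fly) ::

import sys
sys.path.insert( 0, '.' )   # make sure the throwaway module is importable

with open( 'probe.py', 'w' ) as f:
    f.write( "print( 'in probe:    ', type( __builtins__ ) )\n" )

print( 'in __main__: ', type( __builtins__ ) )   # <class 'module'>, the builtins module itself
import probe                                     # prints <class 'dict'>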

rationale: why do i care? i want to be able to ‘export names to the
global namespace that were not marked private (by an underscore
prefix) in a python module that i execute via ``exec( compile( get(
locator ), locator, 'exec' ), R )``, where ``R`` is supposed to hold
the private names of said module’. it *is* a little arcane, but the
basic exercise is to by-pass python’s import system and get similar
results... it is all about injecting names into the all-global and
the module-global namespaces.
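
for concreteness, this is roughly what that loading code looks like
(a sketch; ``get`` and ``locator`` are the placeholder names from
above, and ``some_module.py`` is made up) ::

import builtins

def get( locator ):
    with open( locator, encoding='utf-8' ) as f:
        return f.read()

locator = 'some_module.py'                                   # hypothetical file
R = { '__builtins__': builtins, '__name__': 'some_module' }  # pin __builtins__ explicitly
exec( compile( get( locator ), locator, 'exec' ), R )

exported = { name: value for name, value in R.items()
             if not name.startswith( '_' ) }                 # the non-private names
globals().update( exported )                                 # inject them here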

still i get trapped by the above wordings in the docs, and i have a
weird case of vanishing names, so maybe some people will care to
share their thoughts.


love & ~flow



thanks
-- 
http://mail.python.org/mailman/listinfo/python-list


preferred way to set encoding for print

2009-09-15 Thread _wolf
hi folks,

i am doing my first steps in the wonderful world of python 3.

some things are good.
some things have to be relearned.
some things drive me crazy.

sadly, i'm working on a windows box. which, in germany, entails that
python thinks it to be a good idea to take cp1252 as the default
encoding.

so just coz i got my box in germany means i can never print out a
chinese character? say what?

i have no troubles with people configuring their python installation
to use any encoding in the world, but wouldn't it have been less of a
surprise to just assume utf-8 for any file in/output? after all, it is
already the default for python source files as far as i understand.
someone might think they're clever to sniff into the system and make
the somewhat educated guess that this dude's using cp1252 for his
files. but they would be wrong.

so: how can i tell python, in a configuration or using a setting in
sitecustomize.py, or similar, to use utf-8 as a default encoding?
there used to be a trick to say `reload(sys);
sys.setdefaultencoding('utf-8')`, but that has no effect in py3.0.1.
also, i cannot set `sys.stdout.encoding`; is there a way to re-open
that stream with a different encoding?
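
for the record, a sketch of what i am thinking of: wrap the binary
buffer underneath stdout in a new TextIOWrapper (whether the windows
console font can actually display the characters is a separate
question):

import io, sys

sys.stdout = io.TextIOWrapper( sys.stdout.buffer, encoding='utf-8', line_buffering=True )
print( '\u4e2d\u6587: chinese characters survive the encoding step now' )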

in all, it is quite unsettling to me to see that, on my py3
installation,

sys.getdefaultencoding() == 'utf-8'
sys.stdout.encoding == 'cp1252'
locale.getlocale() == (None, None)
locale.getdefaultlocale() == ('de_DE', 'cp1252')

which to me makes as much sense as a blackcurrant tart thrown into
space. worse,

locale.setlocale( locale.LC_ALL, locale.getdefaultlocale() )

results in

locale.Error: unsupported locale setting

this bloody thing doesn't accept its *own* output. attempts to feed
that locale beast with anything but the empty string or 'C' were all
doomed. it would take a very patient and eloquent person to explain
that in a credible fashion to me. my word for this is, 'broken'.

i would very much like to rid myself of these considerations. just say
it's all utf-8, wash'n'go.

my attempts of changing python's mind using the locale module have
failed so far. otherwise, i for one don't want to touch that locale
thing with a very long pole. as far as i can see, it does not work as
documented. the platform dependencies are also a clear OFF LIMITS sign
to me.

any suggestions?

cheers,

~flow

-- 
http://mail.python.org/mailman/listinfo/python-list


question about xrange performance

2009-04-17 Thread _wolf
lately i noticed a slow-running portion of my application, and a
quick profiling run nourished the suspicion that, of all things, calls
to `xrange().__contains__` (`x in b` where `b = xrange(L,H)`) are the
culprit. to avoid any other influences, i wrote this test script, with
class `xxrange` being a poor man’s `xrange` replacement:



class xxrange( object ):
  def __init__( self, start, stop ):
    self.start  = start
    self.stop   = stop
  def __contains__( self, x ):
    return ( x == int( x ) ) and self.start <= x < self.stop

import cProfile
from random import randint
test_integers = [ randint( 0, 5000 ) for i in xrange( 8000 ) ]
test_range_a  = xxrange( 1, 2 )
test_range_b  = xrange(  1, 2 )

def a():
  print test_range_a.__class__.__name__
  for x in test_integers:
    x in test_range_a

def b():
  print test_range_b.__class__.__name__
  for x in test_integers:
    x in test_range_b

cProfile.run('a()')
cProfile.run('b()')


now this is the output, surprise:


xxrange
         8003 function calls in 0.026 CPU seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.026    0.026 <string>:1(<module>)
        1    0.012    0.012    0.026    0.026 xrange-profiler.py:18(a)
     8000    0.014    0.000    0.014    0.000 xrange-profiler.py:9(__contains__)
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}


xrange
         3 function calls in 4.675 CPU seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    4.675    4.675 <string>:1(<module>)
        1    4.675    4.675    4.675    4.675 xrange-profiler.py:23(b)
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}


can it be that a simple diy-class outperforms a python built-in by a
factor of 180? is there something i have done the wrong way?
omissions, oversights? do other people get similar figures?

cheers
--
http://mail.python.org/mailman/listinfo/python-list


Re: forcing future re-import from with an imported module

2008-12-12 Thread _wolf
On Dec 11, 12:43 am, rdmur...@bitdance.com wrote:
> "Why can't you have the code that is doing the import [...]
> call a function [...] to produce [the] side effect [...]?
> Explicit is better than implicit.  A python programmer is
> going to expect that importing a module is idempotent"

you’re completely right that `import foo` with side effects may break
some expectations, but so do all `from __future__` imports. you’re
also right that another solution would be to come in from the other
side and explicitly call a function from the importing module. all of
this does not answer one question tho: why does deleting a module work
most of the time, but not in the case outlined in my first post,
above? why do we get to see this slightly strange error message there
complaining about ‘not finding’ a module ‘in sys.modules’—well, most
of the time, when a module is not in that cache, it will be imported,
but not in this case. why?

cheers & ~flow
--
http://mail.python.org/mailman/listinfo/python-list


Re: forcing future re-import from with an imported module

2008-12-10 Thread _wolf
On Dec 10, 1:46 pm, "Gabriel Genellina" <[EMAIL PROTECTED]>
wrote:
> En Tue, 09 Dec 2008 23:27:10 -0200, _wolf <[EMAIL PROTECTED]>
> escribió:
>
> > how can i say, approximately, "re-import the present module when it is
> > imported the next time, don’t use the cache" in a simple way? i do not
> > want to "reload" the module, that doesn’t help.
>
> I'd say you're using modules the wrong way then. The code inside a module
> is executed *once*, and that's by design. If you want to execute something
> more than once, put that code inside a function, and call it as many times
> as you want.
>
> --
> Gabriel Genellina

thanks for your answer. i am aware that imports are not designed to
have side-effects, but this is exactly what i want: to trigger an
action with `import foo`. you get foo, and doing this can have a
side-effect for the module, in roughly the way that a `from
__future__ import with_statement` changes the interpretation of the
current module (of course, i do not intend to effect syntactic
changes---my idea is to look into the module namespace and modify
it). think of it as ‘metamodule programming’ (à la metaclass
programming).

maybe import hooks are the way to go? sometimes it would be good if
there was a signalling system that broadcasts all kinds of system
state changes.

cheers & ~flow

ok so the question is: how to make it so each import of a given module
has a side-effect, even repeated imports?
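
one way i can think of (a sketch, python 2.x, with made-up names):
wrap the builtin ``__import__`` so that a given module is dropped
from the cache before every import, which forces its module-level
code to run again:

import sys
import __builtin__

_original_import = __builtin__.__import__

def _eager_import( name, *args, **kwargs ):
    if name == 'importee':                 # the module whose import should have the side-effect
        sys.modules.pop( name, None )      # forget the cached copy
    return _original_import( name, *args, **kwargs )

__builtin__.__import__ = _eager_import

import importee    # module-level code runs
import importee    # ...and runs again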



--
http://mail.python.org/mailman/listinfo/python-list


forcing future re-import from with an imported module

2008-12-09 Thread _wolf
following problem: i have a module importer_1 that first imports
importer_2, then importee. importer_2 also imports importee. as we all
know, follow-up imports are dealt out from the cache by python’s
import mechanism, meaning the importee file gets cached only once. i
can force module-level code in importee to be re-executed e.g. by
deleting importee from sys.modules. but in this case, code of which
below, that does not work: if you delete importee from sys.modules
*from within importee*, you get a nasty "ImportError: Loaded module
importee not found in sys.modules" instead. but the same line `import
sys; del sys.modules[ 'importee' ]` does what it says on the tin (see
the commented-out line in importer_1 below).

how can i say, approximately, "re-import the present module when it is
imported the next time, don’t use the cache" in a simple way? i do not
want to "reload" the module, that doesn’t help.

greets

_wolf



#--
# importer_1.py
#--

print 'this is importer 1'
import importer_2
# import sys; del sys.modules[ 'importee' ]
import importee
print 'importer 1 finished'

#--
# importer_2.py
#--

print 'this is importer 2'
import importee
print 'importer 2 finished'

#--
# importee.py
#--

print 'this is importee'
import sys; del sys.modules[ 'importee' ]
print 'importee finished'

#--
# Output
#--
this is importer 1
this is importer 2
this is importee
importee finished
Traceback (most recent call last):
  File "C:\temp\active_imports\importer_1.py", line 2, in 
import importer_2
  File "C:\temp\active_imports\importer_2.py", line 2, in 
import importee
ImportError: Loaded module importee not found in sys.modules
--
http://mail.python.org/mailman/listinfo/python-list


image scaling in cairo, python

2008-05-24 Thread _wolf
[also posted to: [EMAIL PROTECTED]]

hi all,

i've heard cairo has become the image scaling library for firefox3. is
that true? wonderful, i want to do that in python. there's a python
interface for cairo, right? i've used it before to do simple vector
stuff. seems to work. however, i haven't been able to find relevant
pointers via google.

so do you have any pointers on how to resize a raster image with
python using cairo? i've been jumping through hoops for a while now,
and i believe it should be easier.
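
what i am hoping for is something along these lines (a sketch from my
reading of the pycairo api, so the exact calls may need checking;
input.png and the target size are made up):

import cairo

def scale_png( src_path, dst_path, width, height ):
    src = cairo.ImageSurface.create_from_png( src_path )
    dst = cairo.ImageSurface( cairo.FORMAT_ARGB32, width, height )
    ctx = cairo.Context( dst )
    ctx.scale( float( width ) / src.get_width(), float( height ) / src.get_height() )
    ctx.set_source_surface( src, 0, 0 )
    ctx.get_source().set_filter( cairo.FILTER_BILINEAR )   # smooth interpolation
    ctx.paint()
    dst.write_to_png( dst_path )

scale_png( 'input.png', 'output.png', 150, 100 )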

cheers and ~flow

ps. related (long) posts:

http://groups.google.com/group/pyglet-users/browse_frm/thread/44253ad01d809da5/cd051e6bced271e1#cd051e6bced271e1

http://groups.google.com/group/comp.lang.python/browse_frm/thread/5df65d99cff0d7bb#
--
http://mail.python.org/mailman/listinfo/python-list


matploblib communication problem, graphics question

2008-05-23 Thread _wolf

i've had a hard time today drilling down on some info
about matplotlib (http://matplotlib.sourceforge.net/).
this is an sf.net-managed project; its mailing lists are
managed by gnu mailman in a pre-1994 version.

still there is a standard sf.net forum interface, which
however does not let me post. i've set up an account on
sourceforge---not an experience i need; boarding an
intercontinental flight is swift in comparison.

but having an account on sf.net is not enough to post to
a mailing list there, so i filled out a second longish
form to subscribe to the list thru the 
mailman interface. i had to re-enter my email address.

now what is left to me is to try writing directly to
[EMAIL PROTECTED] and then scoop
it up on 
http://sourceforge.net/mailarchive/forum.php?forum_name=matplotlib-users

as it is so incredibly hard to make oneself heard on
the matplotlib list, i realized that questions from
outsiders are maybe not welcome, so i am cross-posting
to reach a wider audience.

i need to do some raster-image scaling and i've been
hunting hi and lo for a python library that can do that.
so far choices are (in order of perceived aptness)::

  imagemagick of old,

  pythonware.com/products/pil,

  antigrain.com,

  matplotlib,

  cairo of cairographics.org.

Cairo is definitely my favorite here. i know with
certainty that cairo is good at scaling images, as
firefox3 is using it to achieve a smoothness and
readability in scaled images that rivals the quality
of safari’s.

but i have been unable to uncover any information
about raster-image scaling in cairo---can’t be, right?
an open source project that becomes part of firefox3
and i can’t find out how to use their flagship
functionality?

so i went to matplotlib. i now have these methods to
open image files with matplotlib::

  def get_image_jpg( image_locator ):
    import Image
    from pylab import *
    import numpy
    print dir( numpy )
    from numpy import int8, uint8
    # these lines are incredible -- just open that damn jpg. can be as
    # simple as `load(route)` -- ALL the pertinent information will in
    # time be derived from the route and the routed resource structure
    # (the router, and the routee). pls someone give me a
    # `MATPLOBLIB.read()`, a `MATPLOBLIB.load()`, or a
    # `MATPLOBLIB.get()` already.
    image = Image.open( image_locator )
    rgb   = fromstring( image.tostring(), uint8 ).astype( float ) / 255.0
    # rgb = resize( rgb, ( image.size[ 1 ], image.size[ 0 ], 3 ) )
    rgb   = resize( rgb, ( 100, 150, 3 ) )
    imshow( rgb, interpolation = 'nearest' )
    axis( 'off' )                     # don't display the image axis
    show()

  def get_image_png( image_locator ):
    from pylab import imread
    from pylab import imshow
    from pylab import gray
    from pylab import mean
    a = imread( image_locator )
    # imread generates an RGB array, so average over the color axis
    aa = mean( a, 2 )                 # to get a 2-D array
    imshow( aa )
    gray()

quite incredible, right? it can somehow be done, but
chances are you drown in an avalanche of boilerplate.
and sorry for the shoddy code, i copied it from their
website.

so they use pil to open an image file. pil’s image
scaling is 1994, and the package is hardly maintained
and not open. yuck. whenever you have a question
about imaging in python people say ‘pill’ like they
have swallowed one.

let’s face it, pil is a bad choice to do graphics.
here i did install pil, because matplotlib seemed to
be basically handling raster-images and image
transformations.

the matplotlib people have the nerve to put only a short
doc on their root namespace items, such as `axhspan`,
`cla`, `gcf`, and more. this interface is
hardly usable. it shouldn’t be that hard to open an
image file in an image manipulation library. nobody
wants to maintain that kind of spaghetti.

i haven’t been successful so far in finding out how to
scale an image in cairo or matplotlib, or any other
alternative. please don’t suggest doing it with
pil or imagemagick, i won’t answer.

is there any coherent python imaging interest group
out there? can i do it with pyglet maybe?

cheers & ~flow




--
http://mail.python.org/mailman/listinfo/python-list


Re: anti-spam policy for c.l.py?

2008-01-16 Thread _wolf
On Jan 16, 3:11 pm, Bruno Desthuilliers  wrote:
> Jeroen Ruigrok van der Werven a écrit :
>
> > -On [20080116 12:51], Bruno Desthuilliers ([EMAIL PROTECTED]) wrote:
> >> Apart from checking posts headers and complaining about the relevant
> >> ISPs, there's not much you can do AFAIK. This is usenet, not a 
> >> mailing-list.
>
> > It is both actually. [EMAIL PROTECTED] is linked to comp.lang.python due
> > to a news gateway.
>
> Yes, I know - but the OP explicitely mentionned c.l.py (re-read the
> title), not the ML.

technically correct, but the idea is of course to keep all those
archives relatively clean and informative. the new fad i've observed
seems to be spam that initiates whole threads, where previously spam
very often stopped short of any second post.
-- 
http://mail.python.org/mailman/listinfo/python-list


anti-spam policy for c.l.py?

2008-01-16 Thread _wolf
this list has been receiving increasing amounts of nasty OT spam
messages for some time. are there any plans to prevent such messages
from appearing on the list or to purge them retrospectively?

_wolf
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: re-posting: web.py, incomplete

2006-03-04 Thread _wolf
ok, that does it! [EMAIL PROTECTED] a lot!

sorry first of all for adding to the confusion when i jumped in to
comment on that ``-u`` option thing---of course, **no -u option** means
**buffered**, positively, so there is buffering and there are
buffering problems; **with the -u option** there is **no buffer**, so
also no buffering problems at that point. these
add-an-option-to-switch-it-off things do get confusing sometimes.

ok, so the ``#!/usr/local/bin/python -u`` does work. as noted on
http://www.imladris.com/Scripts/PythonForWindows.html: "Trying to run
python cgi scripts in the (default) buffered mode will either result in
a complete lack of return value from your cgi script (manifesting as a
blank html page) or a "premature end of script headers" error." funny
you don't get to see a lot of ``-u`` shebang lines these days tho. as
the document goes on to explain, there is a way to put ::

SetEnv PYTHONUNBUFFERED 1

into your apache ``httpd.conf``. this works for me! great!

now, thinking about this problem, it strikes me that it has never
bitten me before, where it should have. is there something special
about the way web.py cgi apps work that makes this thing pop up? also,
in theory, since apache itself (as of v1.3) does no buffering of cgi
output, when apache calls a cgi process, and that process happily
buffers at its own discretion and then finalizes and terminates,
should the remaining buffer not be output and sent to apache---as
happens with a script run from the command line? you do get to see all
the output in that case, at long last, upon process termination,
right? i'm wondering.

_wolf

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: re-posting: web.py, incomplete

2006-03-03 Thread _wolf
it does look like it, no? but i don't---at least i think i don't. in my
httpd conf it says
``AddHandler cgi-script .py``, and at the top of my script,
``#!/usr/local/bin/python``. standard, no ``-u`` here.

-- 
http://mail.python.org/mailman/listinfo/python-list


re-posting: web.py, incomplete

2006-03-02 Thread _wolf
hi all,

this is a re-posting of the question i asked a month or so ago. i
installed web.py, flup, and cheetah. when i copy'n'paste the sample
app from the http://webpy.org homepage ::

import web

urls = (
    '/(.*)', 'hello'
)

class hello:
    def GET(self, name):
        i = web.input(times=1)
        if not name: name = 'world'
        for c in xrange(int(i.times)): print 'Hello,', name+'!'
        # point (1)

if __name__ == "__main__": web.run(urls)

it does seem to work, *but* when i call it with something like

http://localhost/cgi-bin/webpyexample.py/oops?times=25

then the output is ::

Hello, oops!
Hello, oops!
<20 lines omitted/>
Hello, oops!
Hel

it is only after i insert, at point (1) as shown in the listing, ::

print ( ( ' ' * 100 ) + '\n' ) * 10

---print ten lines with a hundred space characters each---that
i do get to see all twenty-five helloes. so, this is likely a
character buffering problem. as far as i can see, it is a consistent
amount of the last 32 characters that is missing from the page
source. ok, i thought, so let's just output those 32 characters ::

print ' ' * 32

but *this* will result in a last output line that is 8 characters
long, so someone swallows 24 characters. it does look like a buffer
of fixed length.

i'd like to bust that ghost.
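
(the follow-up posts above settle this via ``-u`` /
``PYTHONUNBUFFERED``; another angle---a sketch, untested---would be
to flush stdout explicitly at the end of the handler) ::

import sys
import web

urls = (
    '/(.*)', 'hello'
)

class hello:
    def GET(self, name):
        i = web.input(times=1)
        if not name: name = 'world'
        for c in xrange(int(i.times)): print 'Hello,', name+'!'
        sys.stdout.flush()   # push out whatever the interpreter is still buffering

if __name__ == "__main__": web.run(urls)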

_wolf

-- 
http://mail.python.org/mailman/listinfo/python-list