Re: using python interpreters per thread in C++ program

2009-09-08 Thread Graham Dumpleton
On Sep 8, 9:28 am, Mark Hammond  wrote:
> I was referring to the
> 'multiple interpreters in one process' feature of Python which is
> largely deprecated, ...

Can you please point to where in the documentation for Python it says
that support for multiple interpreters in one process is 'largely
deprecated'.

I know that various people would like the feature to go away, but I
don't believe I have ever seen an official statement from Guido, or
another person in a position to make one, stating that the official
view is that the API is deprecated.

Even in Python 3.1 the documentation for the APIs seems merely to
state some of the limitations and that it is a hard problem, while
still saying that the problem will be addressed in future versions.

Graham
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: using python interpreters per thread in C++ program

2009-09-07 Thread Graham Dumpleton
On Sep 7, 6:47 pm, ganesh  wrote:
> On Sep 7, 3:41 pm, Graham Dumpleton 
> wrote:
>
> > On Sep 7, 3:42 pm, sturlamolden  wrote:
> > interpreters. The simplified GIL state API you mentioned only works
> > for threads operating in the main (first) interpreter created within
> > the process.
>
> I modified my program to have Py_Initialize and compilation of one
> Python function done in main() thread. Then I am calling only that
> function in callPyFunction() thread. But, this thread does not come
> out of PyGILState_Ensure() function.
>
> > The OP can do what they want, but they need to user lower level
> > routines for creating their own thread state objects and acquiring the
> > GIL against them.
>
> > Graham
>
> What are the "lower level routines" for creating own thread state
> objects & acquiring GILs.
> Also, where can I find more information about those routines?

Documentation is at:

  http://docs.python.org/c-api/init.html

Are you really using sub interpreters though? There is no evidence of
that in the code you posted earlier.

Are you sure you just don't understand enough about it, and so are
using the wrong terminology, when all you really want to do is run
externally created threads through the one interpreter?

Using sub interpreters is not for the faint of heart and not something
you want to do unless you are prepared to understand the Python C API
internals very well.

Graham



Re: using python interpreters per thread in C++ program

2009-09-07 Thread Graham Dumpleton
On Sep 7, 3:42 pm, sturlamolden  wrote:
> On 7 Sep, 07:17, grbgooglefan  wrote:
>
> > What is best way to embed python in multi-threaded C++ application?
>
> Did you remember to acquire the GIL? The GIL is global to the process
> (hence the name).
>
> void foobar(void)
> {
>     PyGILState_STATE state = PyGILState_Ensure();
>
>     /* Safe to use Python C API here */
>
>     PyGILState_Release(state);

You can't use that in this case as they specifically are talking about
an interpreter per thread, which implies creating additional sub
interpreters. The simplified GIL state API you mentioned only works
for threads operating in the main (first) interpreter created within
the process.

The OP can do what they want, but they need to use lower-level
routines for creating their own thread state objects and acquiring the
GIL against them.

Graham


Re: Python on the Web

2009-08-27 Thread Graham Dumpleton
On Aug 27, 1:02 pm, Phil  wrote:
> Thanks a lot for another response. I've never posted in groups like
> this before but the results are amazing.
>
> I will definitely consider trying mod_wsgi when I get a chance. I like
> the approach taken with it. It is unfortunate that I completely missed
> all Apache related material because I was using lighttpd. Is there no
> mod_wsgi for lighttpd? I guess I could always just Google that myself.

There is no mod_wsgi for lighttpd and I suggest there never will be.
WSGI doesn't lend itself to working on top of an event driven system.
Someone did get a mod_wsgi going on nginx, which is also event driven,
but it has limitations due to the possibility of blocking other
traffic to the web server. See:

  http://blog.dscpl.com.au/2009/05/blocking-requests-and-nginx-version-of.html

Graham


Re: Python on the Web

2009-08-26 Thread Graham Dumpleton
On Aug 27, 2:54 am, Phil  wrote:
> Thanks to everybody. I believe I am understanding things better.
>
> I have looked at the links that have been provided, although I have
> seen most of them in the past month or so that I've been looking into
> this stuff. I do agree with most of the things Armin stated in that
> NIH post. I agree with everybody in this thread so far. If I wanted to
> write an application, I would use an existing framework and wait for
> it to be ported to 3.x. However, I do not have the need to write a web
> application at this time, and creating blogs or other applications I
> do not need for fun is getting old.
>
> My reasoning for working on my own instead of following the 'NIH'
> concept or contributing to an existing framework is because I have
> experimented with many existing frameworks and I have figured out what
> I like/dislike, and have envisioned my own design that I feel would
> work potentially better for others, or at least newcomers. Things like
> this are fun for me, and I do not mind the challenge. I don't want to
> pollute the web with (sigh) 'another framework', but it would be fun
> for me to write it and get some feedback. I would love for people like
> you, Armin, and others who take a look at the various frameworks that
> pop up seemingly every day, to look at my (hypothetical) framework and
> just rip it apart with (constructive) criticism. That is just the way
> I do things, whether the community agrees with it or not. The reason I
> was asking about Python 3 on the web was just because I like some of
> the changes that have been made, and would like to use it for my
> framework. That is when I realized that I was absolutely clueless
> about the details of how Python, or any language, works on the web.
>
> Graham, regarding number 3 in your list of ways to host WSGI: I
> haven't really looked into mod_wsgi at all, but basically it sounds
> like the web server would be running this embedded module. That module
> would then serve the function of both FCGI and the 'WSGI Server' in my
> diagram? That actually sounds really neat. Unfortunately I missed this
> because I've been hooked on lighttpd, as the minimalist I am.
>
> Here are the things I am still confused with:
>
> 1) Why do I not want to consider running Python on the web with FCGI,
> without WSGI? You said 'no' straight up, no questions asked. I would
> imagine that there is a good reason of course, as you've been in this
> field for a long time.

Because FASTCGI is a wire protocol for socket communications, not a
programming interface. As such, you would only be creating much more
work for yourself, as you would need to implement a whole lot of code
to handle the protocol and then still put a usable interface on top of
it. You would also have to come up with what that usable interface
should be. WSGI already provides that low level interface.
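To make that concrete, this is roughly all the "usable interface" amounts to once WSGI provides it for you; the body and headers here are only illustrative:

```python
# A minimal WSGI application callable. With raw FASTCGI you would have to
# invent an interface like this yourself on top of the wire protocol.
def application(environ, start_response):
    body = b"Hello from WSGI"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]  # an iterable of byte strings
```

Any WSGI-capable server, whether flup over FASTCGI or mod_wsgi, can host this same callable unchanged.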

> I just feel more comfortable understanding why.
> From my understanding, the only real purpose of WSGI is to remain
> independent of FCGI/SCGI/CGI/AJP (whatever that is) and the web server
> it is run on. However, 99.9% of the discussion I see with Python on
> the web is around FCGI.

99.9% of the discussion about Python on the web is not around FASTCGI.
Even if there is quite a bit of discussion, it is because
documentation on hosting Python on FASTCGI via flup is virtually
non-existent, and so many people have a lot of trouble getting it to
work due to peculiarities of different FASTCGI implementations. The
discussion is therefore because people have problems with it, or feel
the need to blog about how they finally got it to work. So, FASTCGI
may be the only way for commodity web hosting, but it certainly isn't
for self managed servers, where mod_wsgi, mod_python and mod_proxy
type solutions are going to be preferred. The latter are better
documented and easier to set up, which is possibly why you don't see
as much discussion. In other words, people who get things working
easily don't need to ask questions.

> So for example, let's say everybody used FCGI.
> All that would be left to deal with is web server independence. Now
> this is what I don't get; I thought that FCGI itself was supposed to
> be the protocol that deals with web server independence. Maybe I just
> need to re-read the entire WSGI specification to understand, along
> with all the details of FCGI. There are just so many details regarding
> web servers, FCGI, and WSGI that it is hard to absorb it all and see
> how it works together. That is why I tried to create the diagram, but
> it doesn't provide enough details. And those are the details I am
> missing. I've been trying to find a simple diagram or explanation of
> the process a request takes to make a response, from HTTP all the way
> up to the application, to the user.

FASTCGI fills a role, but is not essential. Personally I feel the
whole concept of FASTCGI/SCGI/AJP needs a refresh, modernised with
better hosting support for it.

> 2

Re: Python on the Web

2009-08-25 Thread Graham Dumpleton
A few additional comments on top of what others have said.

On Aug 26, 11:09 am, Phil  wrote:
> I've seen lots of web sites explaining everything, but for whatever
> reason I seem to not be picking something up.
> I am a graphical person, which is probably the reason I haven't found
> my answer.
> May somebody please confirm if my diagram accurately represents the
> stack, generally speaking.
>
> http://i26.tinypic.com/1fe82x.png
>
> Even if that is the case, I'm having a hard time understanding the
> differences. I guess wsgiref has absolutely nothing to do with FCGI/
> SCGI/CGI and simply receives and responds to HTTP requests following
> the WSGI specification?

Technically it receives and responds to requests based on the HTTP
specification, not the WSGI specification. The underlying HTTP server
translates to and communicates with a Python web application using the
WSGI interface.
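As a rough sketch of that split, using the wsgiref server from the standard library (the handler application here is only a placeholder):

```python
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # WSGI side: the server hands us a request via environ/start_response.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

# HTTP side: make_server speaks HTTP to clients and translates each
# request into a call on the WSGI application above.
server = make_server("127.0.0.1", 0, application)  # port 0 picks a free port
print("serving on port", server.server_port)
# server.serve_forever()  # left commented so the sketch doesn't block
```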

> Does it just spawn a new thread for each
> request? If so, then how is this any different than a production
> server with FCGI?

I would describe there as being four major ways that WSGI can be
hosted. These are:

1. Custom-built HTTP/WSGI server written in Python. Production quality
examples are the CherryPy WSGI server and the Paste HTTP server. You
shouldn't use wsgiref for anything but very simple stuff.

2. Per request process execution by way of CGI and a CGI/WSGI adapter.
This could be under Apache or any other web server which supports CGI.

3. Module that embeds a Python interpreter into a C based web server.
Examples are mod_wsgi and mod_python for Apache. Note that mod_python
would infrequently be used to host WSGI and doesn't include its own
WSGI adapter. These days mod_wsgi for Apache would be used. Processes
in this arrangement are persistent.

4. Module in a web server that allows one to communicate using a
custom protocol with a separate persistent process hosting the web
application through a WSGI interface. This covers FASTCGI, SCGI and
AJP. The mod_wsgi module for Apache has a hybrid mode which works in a
similar way but uses an internal protocol.

Amongst these, there are many variations as far as the number of
processes and threads goes. For a bit of discussion about this in
relation to mod_wsgi read:

  http://code.google.com/p/modwsgi/wiki/ProcessesAndThreading

> The way I am understanding the 'Production' side of that picture is
> that the web server (eg. lighttpd) creates a single FCGI process.

FASTCGI isn't restricted to a single process, nor single threading.
Whether a particular implementation allows for the variations depends
on the implementation.

> The
> FCGI process is actually the entry point of the Framework/Application
> which sets up Flup's WSGIServer, being the interface between FCGI and
> the Framework/Application? What I mean is, it is just the code that
> the web server loads to start with, example...
>     from flup.server.fcgi import WSGIServer
>     from app import application
>     WSGIServer(application).run()
> ... Then for each HTTP request, Flup's WSGIServer creates a new thread
> to handle the request?
>
> As I've read elsewhere, "These days, FastCGI is never used directly.

Even back in time, I don't think it was really ever used as a generic
interface that people worked with directly; there was always a more
usable layer built on top of it.

> Just like mod_python it is only used for the deployment of WSGI
> applications.

The mod_python module isn't used just for WSGI applications and is
probably rarely used for them. This is because mod_python has its own
interface for building web applications. It also has the ability to
hook into Apache request handling phases, meaning it can do more than
host a content handler/web application.

> As far as I understand, the main (or only?) reasoning
> for this is because WSGI makes Python applications easier to deploy
> without having to worry about whether using FCGI/SCGI/CGI.

WSGI provides for portability, it isn't necessarily easier to use than
mod_python.

> What would be involved to run Python on the web using FCGI without
> WSGI? I can feel the flames already.

No, you really don't want to do that.

> This isn't the only reason I want
> to know, but one reason is that I want to use Python 3.1 and as I
> understand, this will have to wait for the WSGI 2.0 specification to
> ensure time isn't wasted.

Then look at mod_wsgi. It already has support for Python 3.X. Some
aspects of how it implements WSGI 1.0 may change, but not by much, and
the details are being sorted out in the back rooms as we speak. See:
See:

  http://code.google.com/p/modwsgi/wiki/ChangesInVersion0300

The other option is the CherryPy WSGI server, which as I perceive it
is also close to a Python 3.X release.

I wouldn't bother waiting for WSGI 2.0. That is more of a pipe dream.
There will be an updated WSGI 1.0 for Python 3.0.

Graham

> I apologize if the questions are ridiculous. I've just recently got
> into web programming and it seems that in order for me to use Python,
> 

Re: Python on the Web

2009-08-25 Thread Graham Dumpleton
On Aug 26, 1:17 pm, alex23  wrote:
> Phil  wrote:
> > My interest in Python 3.1 was actually to develop a framework. Again,
> > I can feel the flames. :) I understand there are enough frameworks but
> > I actually have no applications that I wish to develop.
>
> No offense intended, but that's probably the worst approach to take.
>
> Frameworks created for the sake of creating a framework, as opposed to
> those written to meet a defined need, tend to be the worst examples of
> masturbatory coding.

I would in part actually disagree with that.

The problem with people creating frameworks to meet some defined need
is that they often implement only just enough of that framework to
meet that need and nothing more. The end result is that the framework
is more often than not never fleshed out enough to be of much use to
anyone else. Its existence, though, just pollutes the Internet with
more crap that one has to wade through.

Since there is already a plethora of good frameworks out there, if
writing an application you are better off using one of the existing
frameworks. If interested in working at the framework level, you would
still be much better off looking at the existing frameworks first,
learning how they work and then considering contributing to them,
rather than implementing your own.

For some related reading, see:

  http://lucumr.pocoo.org/2009/7/30/nih-in-the-wsgi-world

As far as low-level frameworks (or anti-frameworks) go, I suggest
looking at Werkzeug, Paste/Pylons and bobo.

I'll comment more on original message later.

Graham


Re: web frameworks that support Python 3

2009-08-25 Thread Graham Dumpleton
On Aug 26, 12:19 pm, exar...@twistedmatrix.com wrote:
> On 01:41 am, a...@pythoncraft.com wrote:
>
>
>
>
>
> >In article
> >,
> >Graham Dumpleton   wrote:
> >>On Aug 24, 6:34 am, Sebastian Wiesner  wrote:
>
> >>>In any case, there is bottle [1], which provides a *very minimal*
> >>>framework for WSGI web development.  Don't expect too much, it is
> >>>really small, and doesn't do much more than routing and minimal
> >>>templating.
>
> >>>However, it is the only Python-3-compatible web framework I know of.
>
> >>>[1]http://bottle.paws.de/page/start
>
> >>There is one big flaw with your claim. That is that there is no WSGI
> >>specification for Python 3.0 as yet. Anything that claims to work with
> >>WSGI and Python 3.0 is just a big guess as far as how WSGI for Python
> >>3.0 may work.
>
> >Perhaps you meant "library" instead of "specification"?
>
> He meant specification.
>
> Python 3.x is different enough from any Python 2.x release that PEP 333
> no longer completely makes sense.  It needs to be modified to be
> applicable to Python 3.x.
>
> So, in the sense that there is no written down, generally agreed upon
> specification for what WSGI on Python 3.x means, there is no...
> specification.
>
> There is, however, apparently, a library. ;)

If you are talking about wsgiref then that was somewhat broken in
Python 3.0. In Python 3.1 it works, for some definition of works. The
problem again being that since the WSGI specification hasn't been
updated for Python 3.X, how it works will likely not match what the
specification may eventually say. This will become more and more of a
problem if the WSGI specification isn't updated. At the moment the
discussion is going around in circles, although, if I put my
optimistic face on, I would say it is a slow inward spiral. Not quite
a death spiral at least.

Graham


Re: mod_python: Permission denied

2009-08-25 Thread Graham Dumpleton
On Aug 26, 8:43 am, David  wrote:
> Hello,
>
> I googled online however I did not find a clue my question. So I post
> it here.
>
> I created a mod_python CGI to upload a file and saves it in folder "/
> var/www/keyword-query/files/".  My code runs in root.
>
>      fileitem = req.form['file']
>
>    # Test if the file was uploaded
>    if fileitem.filename:
>
>       # strip leading path from file name to avoid directory traversal
> attacks
>       fname = os.path.basename(fileitem.filename)
>       # build absolute path to files directory
>       dir_path = os.path.join(os.path.dirname(req.filename), 'files')
>       f = open(os.path.join(dir_path, fname), 'wb', 1)
>
>       # Read the file in chunks
>       for chunk in fbuffer(fileitem.file):
>          f.write(chunk)
>       f.close()
>       message = 'The file "%s" was uploaded successfully' % fname
>
> I got:
>
>  File "/var/www/keyword-query/upload.py", line 30, in upload
>     f = open(os.path.join(dir_path, fname), 'wb', 1)
>
> IOError: [Errno 13] Permission denied: '/var/www/keyword-query/files/
> Defrosting.rtf'
>
> "Defrosting.rtf" is a file on the desktop of my Windows XP computer.
>
> Anybody knows what the problem is?
>
> Thanks for your replies.

The Apache service is likely running as a special user which doesn't
have write permission to that directory.
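A quick way to confirm this, run as the same user the Apache worker runs as (often 'www-data' or 'apache'); the helper name is invented and the path is just the one from the error message:

```python
import os

def diagnose_write_access(path):
    """Return (exists, writable) for the directory the handler writes to."""
    return os.path.isdir(path), os.access(path, os.W_OK)

# If writable comes back False for the Apache user, fix the directory
# ownership or permissions (e.g. chown it to the Apache user).
print(diagnose_write_access("/var/www/keyword-query/files"))
```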

Graham


Re: How does the file.seek() work ?

2009-08-25 Thread Graham Dumpleton
On Aug 25, 5:37 am, Tim Chase  wrote:
> > I want the file pointer set to 100 and overwrite everything from there
> [snip]
> > def application(environ, response):
> >     query=os.path.join(os.path.dirname(__file__),'teemp')
> >     range=environ.get('HTTP_RANGE','bytes=0-').replace
> > ('bytes=','').split(',')
> >     offset=[]
> >     for r in range: offset.append(r.split('-'))
> >     with open(query,'w+') as f:
> >          f.seek(int(offset[0][0]))
> >          while True:
> >              chunk=environ['wsgi.input'].read(8192).decode('latin1')
> >              if not chunk: break
> >              f.write(chunk)
> >     f=open(query)
> >     l=str(os.fstat(f.fileno()).st_size)
> >     response('200 OK', [('Content-Type', 'text/plain'), ('Content-
> > Length', str(len(l)))])
> >     return [l]
>
> A couple items of note:
>
> - you don't open the file in binary mode -- seek is more reliable
> in binary mode :)

If my memory is right, if the file is opened in binary mode, you also
wouldn't need to decode the WSGI input stream as latin-1 to get a
string. Instead you can just deal with bytes and write bytes to the
file.
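A sketch of what that looks like; the function name and signature are invented for illustration, assuming wsgi.input yields bytes:

```python
import os

def save_upload(wsgi_input, path, offset=0, chunk_size=8192):
    # Binary mode throughout: wsgi.input yields bytes, and the bytes go
    # straight to the file with no latin-1 decode/encode round trip.
    mode = "r+b" if os.path.exists(path) else "wb"  # r+b keeps earlier bytes
    with open(path, mode) as f:
        f.seek(offset)
        while True:
            chunk = wsgi_input.read(chunk_size)
            if not chunk:
                break
            f.write(chunk)
        f.truncate()  # drop any stale data past the new end of file
        return f.tell()
```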

Graham

> - if you want to lop off the rest of the file, use f.truncate()
>
> An example:
>
> # create the initial file
>  >>> f = file('zzz.zzz', 'wb+')
>  >>> f.write('abcdefghijklmnop')
>  >>> f.close()
>
>  >>> f = file('zzz.zzz', 'ab+')
>  >>> f.read() # show the existing content
> 'abcdefghijklmnop'
>  >>> f.seek(5) # seek to the desired offset
>  >>> f.truncate() # throw away everything after here
>  >>> f.write('zyx') # write the new data at pos=5
>  >>> f.close()
>
> # demonstrate that it worked
>  >>> f = file('zzz.zzz', 'rb')
>  >>> f.read()
> 'abcdezyx'
>  >>> f.close()
>
> > also why must I open the file a second time to know how big it is ?
>
> Likely the output has been buffered.  You can try using
>
>    f.flush() # write all the data to the disk first
>    size = os.fstat(f.fileno()).st_size
>
> which seems to do the trick for me.
>
> -tkc



Re: web frameworks that support Python 3

2009-08-23 Thread Graham Dumpleton
On Aug 24, 6:34 am, Sebastian Wiesner  wrote:
> At Sunday 23 August 2009 22:13:16 you wrote:
> > I use Chinese and therefore Unicode very heavily, and so Python 3 is
> > an unavoidable choice for me.
>
> Python 2.x supports Unicode just as well as Python 3.  Every common web
> framework works perfectly with unicode.
>
> In any case, there is bottle [1], which provides a *very minimal* framework
> for WSGI web development.  Don't expect too much, it is really small, and
> doesn't do much more than routing and minimal templating.
>
> However, it is the only Python-3-compatible web framework I know of.
>
> [1]http://bottle.paws.de/page/start

There is one big flaw with your claim. That is that there is no WSGI
specification for Python 3.0 as yet. Anything that claims to work with
WSGI and Python 3.0 is just a big guess as far as how WSGI for Python
3.0 may work.

I would therefore be a bit cautious with your claim.

Graham



Re: does python have a generic object pool like commons-pool in Java

2009-07-16 Thread Graham Dumpleton
On Jul 16, 3:05 pm, John Nagle  wrote:
> alex23 wrote:
> > On Jul 16, 2:03 pm, John Nagle  wrote:
> >>      "fcgi" is an option for this sort of thing.  With "mod_fcgi" installed
> >> in Apache, and "fcgi.py" used to manage the Python side of the problem, you
> >> can have semi-persistent programs started up and shut down for you on the
> >> server side.
>
> > Hey John,
>
> > The environments in which I've been asked to develop web apps using
> > Python have all utilised mod_wsgi. Do you have any experience with
> > mod_wsgi vs mod_fcgi, and if so, can you comment on any relevant
> > performance / capability / ease-of-use differences?
>
> > Cheers,
> > alex23
>
>     FCGI seems to be somewhat out of favor, perhaps because it isn't
> complicated enough.

I doubt that is the reason. It is out of favour for Python web hosting
at least, because web hosting companies provide really crappy support
for using Python with it and also don't like long lived processes.
Most FASTCGI deployments are configured in a way more appropriate for
PHP because that is what the default settings favour. This works to
the detriment of Python. Even when people setup FASTCGI themselves,
they still don't play with the configuration to optimise it for their
specific Python application. Thus it can still run poorly.

> It's a mature technology and works reasonably well.

But it is showing its age. It really needs a refresh. Various of its
design decisions date from when network speeds and bandwidth were much
lower, and this complicates the design. Making the protocol format
simpler in some areas would make it easier to implement. The protocol
could also be built on to make process management more flexible,
rather than relying on each implementation to come up with ad hoc ways
of managing it.

Part of the problem with FASTCGI, or hosting of dynamic applications
in general, is that a web server's primary focus is serving static
files. Any support for dynamic web applications is more of a bolt-on
feature. Thus, web applications using Python will always be at a
disadvantage. PHP manages okay because its model of operation is
closer to the one shot processing of static files. Python requires
persistent processes to work adequately with modern fat applications.

> It's been a puzzle to me that FCGI was taken out of the
> main Apache code base, because it gives production-level performance
> with CGI-type simplicity.

As far as I know FASTCGI support has in the past never been a part of
the Apache code base. Both mod_fastcgi and mod_fcgid were developed by
independent people and not under the ASF.

In Apache 2.3 development versions (to become 2.4 when released),
there will however be a mod_proxy_fcgi. The ASF has also taken over
management of mod_fcgid and working out how that may be incorporated
into future Apache versions.

>     WSGI has a mode for running Python inside the Apache process,
> which is less secure and doesn't allow multiple Python processes.

Depending on the requirements it is more than adequate and if
configured correctly gives better performance and scalability. Not
everyone runs in a shared hosting environment. The different modes of
mod_wsgi therefore give choice.

> That complicates mod_wsgi considerably,

No it doesn't. It is actually daemon mode that complicates things,
not embedded mode.

> and ties it very closely
> to specific versions of Python and Python modules.  As a result,
> the WSGI developers are patching at a great rate.

What are you talking about when you say 'As a result, the WSGI
developers are patching at a great rate'? As with some of your other
comments, this is leaning towards being FUD and nothing more. There is
no 'patching at a great rate' going on.

> I think the thing has too many "l33t features".

It is all about choice and providing flexibility. You may not see a
need for certain features, but it doesn't mean other people don't have
a need for it.

> I'd avoid "embedded mode".  "Daemon mode" is the way to go if you use WSGI.

I'd agree, but I'm not sure you really understand the reasons why it
would be preferred.

> I haven't tried WSGI;

Then why comment on it as if you are knowledgeable about it?

> I don't need the grief of a package under heavy development.

More FUD.

Graham


Re: FW: [Tutor] Multi-Threading and KeyboardInterrupt

2009-06-11 Thread Graham Dumpleton
On Jun 12, 3:35 pm, Dennis Lee Bieber  wrote:
> On Thu, 11 Jun 2009 08:44:24 -0500, "Strax-Haber, Matthew (LARC-D320)"
>  declaimed the following in
> gmane.comp.python.general:
>
> > I sent this to the Tutor mailing list and did not receive a response.
> > Perhaps one of you might be able to offer some sagely wisdom or pointed
> > remarks?
>
> > Please reply off-list and thanks in advance. Code examples are below in
> > plain text.
>
>         Sorry -- you post to a public forum, expect to get the response on a
> public forum...

A bit off topic, but if you are a proponent of public forums, how come
your post on Google Groups shows:

"""Note: The author of this message requested that it not be archived.
This message will be removed from Groups in 6 days (Jun 19, 3:35
pm)."""

I always find it a tad annoying that people have this, as you lose
posts from the record of a discussion, and if a post carries useful
information, it is lost.

I am also curious as to what mechanism for posting you use that allows
you to set an expiration time on the post anyway. Are you using an old
Usenet news reader or something?

Graham


Re: mod_python and xml.dom.minidom

2009-05-13 Thread Graham Dumpleton
On May 12, 1:59 am, dpapathanasiou 
wrote:
> For the record, and in case anyone else runs into this particular
> problem, here's how I resolved it.
>
> My original xml_utils.py was written this way:
>
> from xml.dom import minidom
>
> def parse_item_attribute (item, attribute_name):
>     item_doc = minidom.parseString(item)
>     ...
>
> That version worked under the python interpreter, but failed under
> both the mod_python and mod_wsgi apache modules with an error ("Parent
> module 'xml.dom' not loaded").
>
> I found that changing the import statement and the minidom reference
> within the function resolved the problem.
>
> I.e., after rewriting xml_utils.py this way, it works under both
> apache modules as well as in the python interpreter:
>
> import xml.dom.minidom
>
> def parse_item_attribute (item, attribute_name):
>     item_doc = xml.dom.minidom.parseString(item)
>     ...

FWIW, I have just seen someone else raising an issue where something
caused problems unless a full package path was used. In that case it
was the 'email' package.

The common thing between these two packages is that they do funny
stuff with sys.modules as part of import.

For 'email' package it is implementing some sort of lazy loader and
aliasing thing to support old names. For 'xml.dom' it seems to replace
the current module with a C extension variant on the fly if the C
extension exists.
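The practical consequence, and why the full dotted import sidesteps the problem, can be seen by checking sys.modules (a small illustration, not the OP's code):

```python
import sys
import xml.dom.minidom  # a full dotted path imports every parent level too

# After a full-path import each package level has a sys.modules entry, so a
# later import can't fail with "Parent module 'xml.dom' not loaded".
for name in ("xml", "xml.dom", "xml.dom.minidom"):
    print(name, name in sys.modules)
```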

Were you getting this issue with xml.dom showing on first request all
the time, or only occasionally occurring? If the latter, were you
running things in a multithreaded configuration and was the server
being loaded with lots of concurrent requests?

For your particular Python installation, does the '_xmlplus' module
exist? Ie., can you import it as '_xmlplus' or 'xml.dom._xmlplus'?

Graham


Re: mod_python and xml.dom.minidom

2009-05-10 Thread Graham Dumpleton
On May 10, 3:40 am, Paul Boddie  wrote:
> On 9 Mai, 01:36, dpapathanasiou  wrote:
>
>
>
> > Apache's configure utility (I'm using httpd version 2.2.11) doesn't
> > explicitly describe an expat library option.
>
> > Also, if libexpat is version 1.95.2, wouldn't I have to get version
> > 2.0 to be compatible with pyexpat?
>
> The aim would be to persuade Apache to configure itself against the
> same Expat library that pyexpat is using, which would involve the
> headers and libraries referenced during the pyexpat configuration
> process, although I seem to recall something about pyexpat bundling
> its own version of Expat - that would complicate matters somewhat.
>
> > If anyone has any advice or suggestions, I'd appreciate hearing them.
>
> Expat might be getting brought into Apache via mod_dav:
>
> http://www.webdav.org/mod_dav/install.html
>
> Perhaps disabling mod_dav when configuring Apache might drop Expat
> from Apache's library dependencies.

The OP was using Python 2.5, so this shouldn't be an issue, because
pyexpat properly namespace-prefixes its version of expat. See:

  http://code.google.com/p/modwsgi/wiki/IssuesWithExpatLibrary

where it explicitly says that this only applies to versions prior to
Python 2.5.

His problem is therefore likely to be something completely different.

Graham


Re: Cannot start a thread in atexit callback

2009-05-06 Thread Graham Dumpleton
On May 6, 3:18 pm, "Gabriel Genellina"  wrote:
> En Tue, 05 May 2009 23:52:25 -0300, Zac Burns  escribió:
>
> > It seems that one cannot start a thread in an atexit callback.
>
> > My use case is that I have a IO heavy callback that I want to run in a
> > thread so that other callbacks can finish while it's doing it's thing
> > to save on exit time.
>
> Try creating the thread when the program begins, but locked. And release  
> the lock when your programs is about to finish.

FWIW, from Python 2.5 (??) onwards, a shutdown of non-daemonized
threads is performed as a separate step before atexit callbacks. Not
sure if other aspects of threading may already be undone at that point
and so prevent startup of new ones.

The code in threading module which is called is:

class _MainThread(Thread):

    ...

    def _exitfunc(self):
        self._Thread__stop()
        t = _pickSomeNonDaemonThread()
        if t:
            if __debug__:
                self._note("%s: waiting for other threads", self)
            while t:
                t.join()
                t = _pickSomeNonDaemonThread()
        if __debug__:
            self._note("%s: exiting", self)
        self._Thread__delete()

_shutdown = _MainThread()._exitfunc

The call to this is made from WaitForThreadShutdown() in main.c, which
is in turn called from Py_Main() just prior to calling Py_Finalize().

WaitForThreadShutdown();

Py_Finalize();

This is all different to older versions of Python which shutdown such
threads from an actual atexit handler. This caused various ordering
issues.

Anyway, the point of this is that it would need to be a daemonized
thread to begin with. :-)
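Gabriel's suggestion of creating the thread up front and gating it on a
lock can be sketched roughly like this (names are illustrative, not from
any original code):

```python
import threading

lock = threading.Lock()
results = []

def worker():
    # Blocks until the main thread releases the lock at shutdown time,
    # then performs the IO-heavy work.
    with lock:
        results.append("io done")

lock.acquire()                     # hold the lock so the worker waits
t = threading.Thread(target=worker)
t.daemon = True                    # daemonized, per the note above
t.start()
# ... the rest of the program runs here ...
lock.release()                     # at shutdown, let the worker proceed
t.join()
```

Because the thread is a daemon, the interpreter's own pre-atexit join of
non-daemon threads skips it, leaving the program free to release the
lock during its shutdown handling.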

Graham


Re: return functions

2009-05-02 Thread Graham Dumpleton
On May 3, 6:44 am, gert  wrote:
> Aldo i like the cerrypywsgiserver very much, i do not like the tools
> that go with it
> so i am stuck with a configuration file that looks like this
>
> http://pastebin.com/m4d8184bc
>
> After 152 line I finally arrived to a point where i was thinkig "thats
> it, this is like going to work on a uni cycle and is just plain
> ridicules"
>
> So how can i generate some functions something like
>
> def generate(url,type): # where type could be htm js css orwsgi

You seem to have finally discovered that when using Apache/mod_wsgi,
Apache does a level of URL matching to filesystem based resources.
This isn't automatic in normal WSGI servers unless you use a WSGI
middleware that does the mapping for you. :-)

Graham


Re: eval(WsgiApplication)

2009-05-02 Thread Graham Dumpleton
On May 2, 10:15 pm, Дамјан Георгиевски  wrote:
> >> > How do I do this in python3?
>
> >> What's wrong with importing it?
>
> > The problem is that my wsgi files have a wsgi extention for mod_wsgi
> > use
> ..
> > mod_wsgi has a .wsgi handler because it is recommended to rename the
> > wsgi file with wsgi extensions to avoid double imports
> > cherrypy server has a dispatcher class
>
> You can either use .py extension for the wsgi files OR use a custom
> importer that can import your .wsgi 
> fileshttp://docs.python.org/library/modules.html

You don't have to go to such an extreme if it is only for one file to
be used as the root WSGI application. You can use something like:

  import imp

  def load_script(filename, label='__wsgi__'):
      module = imp.new_module(label)
      module.__file__ = filename
      execfile(filename, module.__dict__)
      return module

  module = load_script('/some/path/file.wsgi')

  application = module.application
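For Python 3, where execfile() is gone, a rough equivalent using only
the standard library might look like this:

```python
import types

def load_script(filename, label='__wsgi__'):
    # types.ModuleType() stands in for imp.new_module(), and
    # exec(compile(...)) replaces the removed execfile() builtin.
    module = types.ModuleType(label)
    module.__file__ = filename
    with open(filename) as f:
        code = compile(f.read(), filename, 'exec')
    exec(code, module.__dict__)
    return module
```

The compile() step is optional but gives tracebacks that reference the
.wsgi file by name rather than a bare string.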

Graham


Re: Module caching

2009-04-04 Thread Graham Dumpleton
On Apr 4, 10:41 am, Jon Clements  wrote:
> On 3 Apr, 23:58, Aaron Scott  wrote:
>
> > > are you an experienced python programmer?
>
> > Yeah, I'd link to think I'm fairly experienced and not making any
> > stupid mistakes. That said, I'm fairly new to working with mod_python.
>
> > All I really want is to have mod_python stop caching variables. This
> > seems like it should be easy enough to do, but I can't for the life of
> > me find information on how to do it.
>
> > Aaron
>
> Umm... Well, mod_python works for long running processes that don't
> really store data, but return it on demand... so keeping module level
> variables around is going to be a gotcha.
>
> It's a kludge, but setting MaxRequestsPerChild to 1 in the Apache
> config basically forces a reload of everything for every request...
> that might be worth a go -- but it's nasty...

They may as well use CGI then. Personally I would never recommend
MaxRequestsPerChild be set to 1.

Anyway, this person also posted on mod_python list. One of the things
I highlighted there was that mod_python for some configurations is
multithreaded and as such they may not be properly protecting
variables if they are storing them at global scope. They haven't
responded to any comments about it on mod_python list. They were also
told to read:

  http://www.dscpl.com.au/wiki/ModPython/Articles/TheProcessInterpreterModel
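The multithreading point is worth illustrating: in a threaded
mod_python/mod_wsgi configuration, module-level state persists across
requests and must be guarded, since concurrent requests run in
different threads. A minimal sketch (hypothetical names):

```python
import threading

_cache = {}
_cache_lock = threading.Lock()

def get_cached(key, compute):
    # Module-level variables live for the lifetime of the process; the
    # lock keeps concurrent request threads from racing on the dict.
    with _cache_lock:
        if key not in _cache:
            _cache[key] = compute(key)
        return _cache[key]
```

Without the lock, two threads can both miss the cache and compute the
value twice, or observe a half-updated structure.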

Graham


Re: Upgrade Python on a Mac

2009-03-30 Thread Graham Dumpleton
On Mar 31, 3:07 am, 7stud  wrote:
> On Mar 3, 4:01 am, Graham Dumpleton 
> wrote:
>
>
>
> > On Mar 3, 8:53 am, Rey Bango  wrote:
>
> > > Hi,
>
> > > I'd like to upgrade the installed version of Python that came standard
> > > on OS X (Leopard) with either 2.6.1 or 3.0.1. Before I stick my foot
> > > in it, I just wanted to get a better understanding of the process.
>
> > > If I download the disk image installer from 
> > > here:http://www.python.org/download/
> > > will it allow me to upgrade my existing version or is it more involved
> > > (eg: making a new build).
>
> > > I've looked through the python.org page for upgrade instructions for a
> > >Macand haven't found it.
>
> > > Any help would be appreciated.
>
> > Beware of the official Python binary installers for MacOS X if wanting
> > to do Python web development.
>
> > Based on feedback these installers have only been compiled for 32 bit
> > architectures. This makes them useless if you want to run mod_python
> > ormod_wsgiwith Apache that comes with MacOS X as it runs as 64 bit
> > and relies on the Python framework having 64 bit, which these
> > installers do not provide.
>
> > If this is going to affect you, build from source code. Configure
> > options required would be, as an example:
>
> > ./configure --prefix=/usr/local/python-3.0  \
> >  --enable-framework=/usr/local/python-3.0/frameworks \
> >  --enable-universalsdk=/ MACOSX_DEPLOYMENT_TARGET=10.5 \
> >  --with-universal-archs=all
>
> Which of the following is the "official Python binary installer for
> MacOS X"?
>
> -
> Python-2.6.1.tar.bz2
>
> python-2.6.1-macosx2008-12-06.dmg
> -
>
> and is the problem with 3.0 specifically or all versions?
>
> > Note that not all MacPorts installers have been both 32/64 bit either.
> > Not sure if they have fixed this issue.

I am talking about the binary dmg installer, although by default a
build from source also installs only 32 bit.

This issue affects all versions available.

Graham


Re: Safe to call Py_Initialize() frequently?

2009-03-29 Thread Graham Dumpleton
On Mar 30, 4:35 am, a...@pythoncraft.com (Aahz) wrote:
> [p&e]
>
> In article 
> ,
> Graham Dumpleton   wrote:
>
>
>
>
>
> >In mod_wsgi however, Apache will completely unload the mod_wsgi module
> >on a restart. This would also mean that the Python library is also
> >unloaded from memory. When it reloads both, the global static
> >variables where information was left behind have been lost and nulled
> >out. Thus Python when initialised again, will recreate the data it
> >needs.
>
> >So, for case where Python library unloaded, looks like may well suffer
> >a memory leak regardless.
>
> >As to third party C extension modules, they aren't really an issue,
> >because all that is done in Apache parent process is Py_Initialize()
> >and Py_Finalize() and nothing else really. Just done to get
> >interpreter setup before forking child processes.
>
> >There is more detail on this analysis in that thread on mod_wsgi list
> >at:
>
> Missing reference?

It was in an earlier post. Yes, I know I forgot to add it again, but I
figured people would read the whole thread.

  http://groups.google.com/group/modwsgi/browse_frm/thread/65305cfc798c088c

Graham


Re: Safe to call Py_Initialize() frequently?

2009-03-23 Thread Graham Dumpleton
On Mar 23, 10:00 pm, Mark Hammond  wrote:
> On 23/03/2009 12:14 PM, Graham Dumpleton wrote:
>
> > On Mar 21, 10:27 am, Mark Hammond  wrote:
> >> Calling
> >> Py_Initialize and Py_Finalize multiple times does leak (Python 3 has
> >> mechanisms so this need to always be true in the future, but it is true
> >> now for non-trivial apps.
>
> > Mark, can you please clarify this statement you are making. The
> > grammar used makes it a bit unclear.
>
> Yes, sorry - s/this need to/this need not/
>
> > Are you saying, that effectively by design, Python 3.0 will always
> > leak memory upon Py_Finalize() being called, or that it shouldn't leak
> > memory and that problems with older versions of Python have been fixed
> > up?
>
> The latter - kindof - py3k provides an enhanced API that *allows*
> extensions to be 'safe' in this regard, but it doesn't enforce it.
> Modules 'trivially' ported from py2k will not magically get this ability
> - they must explicitly take advantage of it.  pywin32 is yet to do so
> (ie, it is a 'trivial' port...)
>
> I hope this clarifies...

Yes, but ...

There may still be problems. The issue is old, but I suspect that the
comments in the issue:

  http://bugs.python.org/issue1856

may still hold true.

That is, there are some things that Python doesn't free up which are
related to Python's simplified GIL state API. Normally this wouldn't
matter, as another call to Py_Initialize() would see the existing data
and reuse it, so it doesn't strictly leak memory in that sense.

In mod_wsgi however, Apache will completely unload the mod_wsgi module
on a restart. This would also mean that the Python library is also
unloaded from memory. When it reloads both, the global static
variables where information was left behind have been lost and nulled
out. Thus Python when initialised again, will recreate the data it
needs.

So, for the case where the Python library is unloaded, it looks like it
may well suffer a memory leak regardless.

As to third party C extension modules, they aren't really an issue,
because all that is done in the Apache parent process is Py_Initialize()
and Py_Finalize() and nothing else really. This is just done to get the
interpreter set up before forking child processes.

There is more detail on this analysis in that thread on mod_wsgi list
at:

Graham



Re: Safe to call Py_Initialize() frequently?

2009-03-22 Thread Graham Dumpleton
On Mar 21, 10:27 am, Mark Hammond  wrote:
> Calling
> Py_Initialize and Py_Finalize multiple times does leak (Python 3 has
> mechanisms so this need to always be true in the future, but it is true
> now for non-trivial apps.

Mark, can you please clarify this statement you are making. The
grammar used makes it a bit unclear.

Are you saying, that effectively by design, Python 3.0 will always
leak memory upon Py_Finalize() being called, or that it shouldn't leak
memory and that problems with older versions of Python have been fixed
up?

I know that some older versions of Python leaked memory on
Py_Finalize(), but if this is now guaranteed to always be the case and
nothing can be done about it, then the final death knell will have been rung
on mod_python and also embedded mode of mod_wsgi. This is because both
those systems rely on being able to call Py_Initialize()/Py_Finalize()
multiple times. At best they would have to change how they handle
initialisation of Python and defer it until sub processes have been
forked, but this will have some impact on performance and memory
usage.

So, more information appreciated.

Related link on mod_wsgi list about this at:

  
http://groups.google.com/group/modwsgi/browse_frm/thread/65305cfc798c088c?hl=en

Graham




Re: Safe to call Py_Initialize() frequently?

2009-03-22 Thread Graham Dumpleton
On Mar 21, 2:35 pm, roschler  wrote:
> On Mar 20, 7:27 pm, Mark Hammond  wrote:
>
> > On 21/03/2009 4:20 AM, roschler wrote:
>
> > Calling Py_Initialize() multiple times has no effect.  Calling
> > Py_Initialize and Py_Finalize multiple times does leak (Python 3 has
> > mechanisms so this need to always be true in the future, but it is true
> > now for non-trivial apps.
>
> > > If it is not a safe approach, is there another way to get what I want?
>
> > Start a new process each time?
>
> > Cheers,
>
> > Mark
>
> Hello Mark,
>
> Thank you for your reply.  I didn't know that Py_Initialize worked
> like that.
>
> How about using Py_NewInterpreter() and Py_EndInterpreter() with each
> job?  Any value in that approach?  If not, is there at least a
> reliable way to get a list of all active threads and terminate them so
> before starting the next job?  Starting a new process each time seems
> a bit heavy handed.

Using Py_EndInterpreter() is even more fraught with danger. The first
problem is that some third party C extension modules will not work in
sub interpreters because they use simplified GIL state API. The second
problem is that third party C extensions often don't cope well with
the idea that an interpreter may be destroyed that it was initialised
in, with the module then being subsequently used again in a new sub
interpreter instance.

Given that it is one operation per second, creating a new process, be
it a completely fresh one or one forked from existing Python process,
would be simpler.
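Starting a fresh process per job can be as simple as the following
sketch; the child interpreter is initialised and finalised once per job,
so no threads or C extension state can leak from one job into the next
(the job body here is just a stand-in):

```python
import subprocess
import sys

# Each job runs in a brand new Python process. sys.executable ensures
# the same interpreter binary is used for the child as for the parent.
job_code = "print(sum(range(10)))"
out = subprocess.check_output([sys.executable, "-c", job_code])
result = int(out.strip())
print(result)
```

For anything heavier than one job per second this would be wasteful,
but at that rate the process startup cost is negligible compared to the
danger of reusing a torn-down interpreter.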

Graham



Re: Do python.org MacOS X dmg installers still only provide 32 bit Python framework?

2009-03-09 Thread Graham Dumpleton
On Mar 9, 6:14 pm, "Martin v. Löwis"  wrote:
> Graham Dumpleton wrote:
> > I'd rather not have to download and install them as I don't want to be
> > installing them into my actual system, so can someone please tell me
> > whether the MacOS X dmg installers provided fromwww.python.orgare
> > still not full universal builds. That is, that the Python framework
> > component only contains 32 bit architecture images and not also 64 bit
> > architecture images.
>
> Yes, that is still the case.
>
> > If this still is an issue, then I'll log a bug report so that full
> > support is included in the future.
>
> Please, no. Trust that the people creating them are aware that it is
> desirable to provide 64-bit binaries. Also trust that there are severe
> technical problems that prevented that from happening so far.

Does this technical problem go beyond the lack of 64 bit safe tcl/tk,
which is what I understood used to be part of the problem?

Graham



Do python.org MacOS X dmg installers still only provide 32 bit Python framework?

2009-03-08 Thread Graham Dumpleton
I'd rather not have to download and install them as I don't want to be
installing them into my actual system, so can someone please tell me
whether the MacOS X dmg installers provided from www.python.org are
still not full universal builds. That is, that the Python framework
component only contains 32 bit architecture images and not also 64 bit
architecture images.

Am interested in Python 2.5.4, 2.6.1 and 3.0.1.

The output below is actually from Python provided with MacOS X 10.5
(Leopard), but is representative of what I would want to see, just for
the python.org one under /Library instead.

$ file /System/Library/Frameworks/Python.framework/Versions/Current/
Python
/System/Library/Frameworks/Python.framework/Versions/Current/Python:
Mach-O universal binary with 4 architectures
/System/Library/Frameworks/Python.framework/Versions/Current/Python
(for architecture ppc7400): Mach-O dynamically linked shared library
ppc
/System/Library/Frameworks/Python.framework/Versions/Current/Python
(for architecture ppc64):   Mach-O 64-bit dynamically linked shared
library ppc64
/System/Library/Frameworks/Python.framework/Versions/Current/Python
(for architecture i386):Mach-O dynamically linked shared library
i386
/System/Library/Frameworks/Python.framework/Versions/Current/Python
(for architecture x86_64):  Mach-O 64-bit dynamically linked shared
library x86_64

So, do the official Python builds have 64 bit support in the framework
library?

And yes I do know that the 'python' executable itself on MacOS X 10.5
(Leopard) standard Python is only 32 bit, but whether the framework
has 64 bit is an issue because when you use mod_python or mod_wsgi
with standard MacOS X Apache, Python is run as 64 bit and so the 64
bit code needs to be in the framework library. I am getting tired of
having to explain to people that it is the python.org installers that
are deficient, and that they should compile from source code or use the
Apple version instead.
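A quick way to see which way a given interpreter was loaded is to check
its pointer size from within Python (a generic check, not specific to
any particular installer):

```python
import struct
import sys

# An interpreter loaded as a 64 bit binary has 8-byte pointers; when
# embedded in the 64 bit Apache on Leopard this must come out as 64.
bits = struct.calcsize('P') * 8
print('%d-bit Python %s' % (bits, sys.version.split()[0]))
```

Run under the command line 'python' this reports how the executable was
built; the same check inside a mod_wsgi/mod_python handler reports how
the framework library was loaded by Apache.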

If this still is an issue, then I'll log a bug report so that full
support is included in the future.

Thanks.

Graham



Re: Python3 on the Web

2009-03-05 Thread Graham Dumpleton
On Mar 6, 4:13 am, Johannes Permoser  wrote:
> Hi,
>
> I wanted to learn Python from scratch and start off with Version 3.
> Since I already know PHP very well, I thought it would be nice to start
> off with a small web-project.
>
> But what's the way to bring python3 to the Web?
> mod_python isn't available, cgi is said to be slow, mod_wsgi looks
> complicated...

Is it WSGI you really mean is complicated or mod_wsgi itself? They are
not the same thing, with mod_wsgi just being one implementation of
WSGI.

In comparison to mod_python, the mod_wsgi package is simpler to install
and set up. If you really meant WSGI as a concept, then it is a
different matter: without some higher level WSGI framework or toolkit,
WSGI can be a bit more daunting and might appear more complicated, or
at least less helpful, than mod_python as far as getting started. This
is because mod_python is actually two parts: the low level web server
interface, akin to the WSGI level, and its higher level handlers. The
mod_wsgi package doesn't have the higher level handlers, as it is
expected you would use any WSGI capable package for that.
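For reference, the low level interface mod_wsgi exposes is just the
WSGI callable defined by PEP 333; a minimal application looks like:

```python
def application(environ, start_response):
    # environ carries the CGI-style request variables; start_response
    # is called once with the status line and the response headers.
    body = b'Hello, WSGI!'
    headers = [('Content-Type', 'text/plain'),
               ('Content-Length', str(len(body)))]
    start_response('200 OK', headers)
    return [body]
```

Everything a framework adds (routing, templating, sessions) is layered
on top of this one callable signature.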

As others have said, perhaps start out with Python 2.X for now if you
are just getting into this. This is because little work has been done
yet on getting any of the Python web frameworks/toolkits running on
Python 3.0 even if mod_wsgi is already ready (subversion copy) to host
them on Apache.

Graham


Re: Shared library Python on Mac OS X 64-bit

2009-03-05 Thread Graham Dumpleton
On Mar 6, 6:24 am, Uberman  wrote:
> Graham Dumpleton wrote:
>
> > I don't understand the problem, you can say where it installs the
> > framework, it doesn't have to be under /Library, so can be in your
> > special SDK folder. For example:
>
> > ./configure --prefix=/usr/local/python-2.5.4  \
> >  --enable-framework=/usr/local/python-2.5.4/frameworks \
> >  --enable-universalsdk=/ MACOSX_DEPLOYMENT_TARGET=10.5 \
> >  --with-universal-archs=all
>
> > This would put stuff under /usr/local/python-2.5.4.
>
> While that looked promising, Graham, it didn't actually address my needs.
> "--with-universal-archs=all" produces binaries for "ppc" and "i386".  I need
> 64-bit binaries (i.e., "x86_64").

That configure line should produce a 64 bit framework library to link
against. It will not produce a 64 bit 'python' executable, but you
will never achieve that at the moment without hacking the Python build
scripts. This is because the Python build scripts deliberately remove
64 bit support out of the 'python' executable even though they remain
in the Python framework library. From memory this is apparently done
because tcl/tk libraries aren't 64 bit safe. If you look you will find
some comment about it in the Python build scripts.

End result is that you end up with a 64 bit framework that can at
least be quite happily linked into an embedded system compiled as 64
bit. I know this works as I do it all the time for mod_wsgi and
mod_python. Something that is required because OS version of Apache on
MacOS X runs as 64 bit. If you don't trust that I know what I am
saying, look at:

  http://code.google.com/p/modwsgi/wiki/InstallationOnMacOSX

which is a description of all the stupid things that can happen with
32/64 bit on MacOS X, so have had a lot to do with this. One of the
main problems is people using MacPorts versions of stuff, including
compilers, which do not support generation of 64 bit objects.

> Also, after building with your settings,
> there are no shared libraries produced.  All I get is the static-link library,
> "libpython2.5.a", which is what I'm getting with every other configuration as
> well.

A framework library doesn't have a .so or .dylib extension, if that is
what you are expecting to find. As an example, on my PowerPC MacOS X I
have:

  /usr/local/python-3.0-trunk/frameworks/Python.framework/Versions/Current/Python

Note, no extension. This is the actual framework library:

$ file /usr/local/python-2.5.2/frameworks/Python.framework/Versions/Current/Python
/usr/local/python-2.5.2/frameworks/Python.framework/Versions/Current/Python: Mach-O dynamically linked shared library ppc

Because I am on a 32 bit PowerPC, I haven't actually enabled 64 bit
architectures.

So, find that file in your framework installation and run 'file' on it
to see what architectures it really provides. Ignore what 'file' gives
you for the 'python' executable, as, as I said before, it has had the
64 bit architectures stripped out by the Python build scripts. Whether
that has changed for Python 3.0 though I haven't checked.

> So, indeed, I now know that I needn't place frameworks into default locations,
> but that doesn't really get me any closer to producing a 64-bit Python shared
> library.  Thanks for trying, though.

It can work. So either you are looking for the wrong thing, or
something is broken with your environment or the compiler you are
using.

Graham


Re: Shared library Python on Mac OS X 64-bit

2009-03-03 Thread Graham Dumpleton
On Mar 4, 2:29 am, Uberman  wrote:
> Graham Dumpleton wrote:
> > Why don't you want to use MacOS X Framework libraries? It is the
> > better installation method.
>
> Because I'm not installing Python, I'm building it.  If I were just interested
> in installing Python, I wouldn't care whether it was static or shared 
> libraries.
>
> This is all very specific to my product.  We are not just OS X, but Windows
> and Linux as well.  Because of this, we use an SDK/ folder that has all the
> third-party dependencies contained within it (in a non-platform way).
> Frameworks are OS X-specific.  I build Python within its distribution folder,
> and then package that same folder up into an archive that gets deposited into
> the SDK/ folder.  The product then builds against that.

I don't understand the problem, you can say where it installs the
framework, it doesn't have to be under /Library, so can be in your
special SDK folder. For example:

./configure --prefix=/usr/local/python-2.5.4  \
 --enable-framework=/usr/local/python-2.5.4/frameworks \
 --enable-universalsdk=/ MACOSX_DEPLOYMENT_TARGET=10.5 \
 --with-universal-archs=all

This would put stuff under /usr/local/python-2.5.4.

The only thing I am not sure about, though, is what happens to the
MacOS X .app bundles it tries to install. I vaguely remember it still
tried to install them elsewhere, so you may have to disable them being
installed somehow.

Graham



Re: Upgrade Python on a Mac

2009-03-03 Thread Graham Dumpleton
On Mar 3, 8:53 am, Rey Bango  wrote:
> Hi,
>
> I'd like to upgrade the installed version of Python that came standard
> on OS X (Leopard) with either 2.6.1 or 3.0.1. Before I stick my foot
> in it, I just wanted to get a better understanding of the process.
>
> If I download the disk image installer from 
> here:http://www.python.org/download/
> will it allow me to upgrade my existing version or is it more involved
> (eg: making a new build).
>
> I've looked through the python.org page for upgrade instructions for a
> Mac and haven't found it.
>
> Any help would be appreciated.

Beware of the official Python binary installers for MacOS X if wanting
to do Python web development.

Based on feedback these installers have only been compiled for 32 bit
architectures. This makes them useless if you want to run mod_python
or mod_wsgi with Apache that comes with MacOS X as it runs as 64 bit
and relies on the Python framework having 64 bit, which these
installers do not provide.

If this is going to affect you, build from source code. Configure
options required would be, as an example:

./configure --prefix=/usr/local/python-3.0  \
 --enable-framework=/usr/local/python-3.0/frameworks \
 --enable-universalsdk=/ MACOSX_DEPLOYMENT_TARGET=10.5 \
 --with-universal-archs=all

Note that not all MacPorts installers have been both 32/64 bit either.
Not sure if they have fixed this issue.

Graham



Re: Shared library Python on Mac OS X 64-bit

2009-03-02 Thread Graham Dumpleton
On Mar 3, 12:25 pm, Uberman  wrote:
> I'm trying to build a 64-bit version of Python 2.5.1 on Mac OS X 10.5.6 64-bit
> (Intel processor).  The configure line I'm using is:
>
> ./configure --enable-shared --disable-framework --disable-toolbox-glue
> OPT="-fast -arch x86_64 -Wall -Wstrict-prototypes -fno-common -fPIC"
> LDFLAGS="-arch x86_64"
>
> The system builds, but it absolutely refuses to build the shared libraries.  I
>  keep getting the 'libpython2.5.a' file, and not the needed *.dylib files.
> Anybody know how to get this thing to produce shared and not static libraries?
>   A link to examples or documentation that shows the correct configure
> parameters?  I would have thought "--enable-shared" would do it, but I guess
> I'm wrong.

Why don't you want to use MacOS X Framework libraries? It is the
better installation method.

Graham


Re: multiprocessing module and os.close(sys.stdin.fileno())

2009-02-21 Thread Graham Dumpleton
On Feb 22, 12:52 pm, Joshua Judson Rosen  wrote:
> Graham Dumpleton  writes:
>
> > On Feb 21, 4:20 pm, Joshua Judson Rosen  wrote:
> > > Jesse Noller  writes:
>
> > > > On Tue, Feb 17, 2009 at 10:34 PM, Graham Dumpleton
> > > >  wrote:
> > > > > Why is the multiprocessing module, ie., multiprocessing/process.py, in
> > > > > _bootstrap() doing:
>
> > > > >  os.close(sys.stdin.fileno())
>
> > > > > rather than:
>
> > > > >  sys.stdin.close()
>
> > > > > Technically it is feasible that stdin could have been replaced with
> > > > > something other than a file object, where the replacement doesn't have
> > > > > a fileno() method.
>
> > > > > In that sort of situation an AttributeError would be raised, which
> > > > > isn't going to be caught as either OSError or ValueError, which is all
> > > > > the code watches out for.
>
> > > > I don't know why it was implemented that way. File an issue on the
> > > > tracker and assign it to me (jnoller) please.
>
> > > My guess would be: because it's also possible for sys.stdin to be a
> > > file that's open in read+*write* mode, and for that file to have
> > > pending output buffered (for example, in the case of a socketfile).
>
> > If you are going to have a file that is writable as well as readable,
> > such as a socket, then likely that sys.stdout/sys.stderr are going to
> > be bound to it at the same time.
>
> Yes.
>
> > If that is the case then one should not be using close() at all
>
> If you mean stdin.close(), then that's what I said :)

Either. The problem is the same: close() closes for both read and
write, and if the code expects to still be able to write because the
file is used for stdout or stderr, then it will not work.

> > as it will then also close the write side of the pipe and cause
> > errors when code subsequently attempts to write to
> > sys.stdout/sys.stderr.
>
> > In the case of socket you would actually want to use shutdown() to
> > close just the input side of the socket.
>
> Sure--but isn't this "you" the /calling/ code that set the whole thing
> up? What the /caller/ does with its stdio is up to /him/, and beyond
> the scope of the present discourse. I can appreciate a library forking
> and then using os.close() on stdio (it protects my files from any I/O
> the subprocess might think it wants to do with them), but I think I
> might be even more annoyed if it *shutdown my sockets*

Ah, yeah, forgot that shutdown() does an end-to-end shutdown rather
than just closing that one file object reference. :-)

Graham

> than if it
> caused double-flushes (there's at least a possibility that I could
> cope with the double-flushes by just ensuring that *I* flushed before
> the fork--not so with socket.shutdown()!)
>
> > What this all means is that what is the appropriate thing to do is
> > going to depend on the environment in which the code is used. Thus,
> > having the behaviour hard wired a certain way is really bad. There
> > perhaps instead should be a way of a user providing a hook function to
> > be called to perform any case specific cleanup of stdin, stdout and
> > stderr, or otherwise reassign them.
>
> Usually, I'd say that that's what the methods on the passed-in object
> are for. Though, as I said--the file-object API is lacking, here :(
>
> > > As such, I'd recommend against just using .close(); you might use
> > > something like `if hasattr(sys.stdin, "fileno"): ...'; but, if your
> > > `else' clause unconditionally calls sys.stdin.close(), then you still
> > > have double-flush problems if someone's set sys.stdin to a file-like
> > > object with output-buffering.
>
> > > I guess you could try calling that an `edge-case' and seeing if anyone
> > > screams. It'd be sort-of nice if there was finer granularity in the
> > > file API--maybe if file.close() took a boolean `flush' argument
>
> --
> Don't be afraid to ask (Lf.((Lx.xx) (Lr.f(rr.



Re: multiprocessing module and os.close(sys.stdin.fileno())

2009-02-21 Thread Graham Dumpleton
On Feb 21, 4:20 pm, Joshua Judson Rosen  wrote:
> Jesse Noller  writes:
>
> > On Tue, Feb 17, 2009 at 10:34 PM, Graham Dumpleton
> >  wrote:
> > > Why is the multiprocessing module, ie., multiprocessing/process.py, in
> > > _bootstrap() doing:
>
> > >  os.close(sys.stdin.fileno())
>
> > > rather than:
>
> > >  sys.stdin.close()
>
> > > Technically it is feasible that stdin could have been replaced with
> > > something other than a file object, where the replacement doesn't have
> > > a fileno() method.
>
> > > In that sort of situation an AttributeError would be raised, which
> > > isn't going to be caught as either OSError or ValueError, which is all
> > > the code watches out for.
>
> > I don't know why it was implemented that way. File an issue on the
> > tracker and assign it to me (jnoller) please.
>
> My guess would be: because it's also possible for sys.stdin to be a
> file that's open in read+*write* mode, and for that file to have
> pending output buffered (for example, in the case of a socketfile).

If you are going to have a file that is writable as well as readable,
such as a socket, then likely that sys.stdout/sys.stderr are going to
be bound to it at the same time. If that is the case then one should
not be using close() at all as it will then also close the write side
of the pipe and cause errors when code subsequently attempts to write
to sys.stdout/sys.stderr.

In the case of socket you would actually want to use shutdown() to
close just the input side of the socket.

What this all means is that the appropriate thing to do is going to
depend on the environment in which the code is used. Thus,
having the behaviour hard wired a certain way is really bad. There
perhaps instead should be a way of a user providing a hook function to
be called to perform any case specific cleanup of stdin, stdout and
stderr, or otherwise reassign them.

That this is currently in the _bootstrap() function, which does other
important stuff, doesn't exactly make it look like it is easily
overridden to work for a specific execution environment which is
different to the norm.

> There's a general guideline, inherited from C, that one should ensure
> that the higher-level close() routine is invoked on a given
> file-descriptor in at most *one* process after that descriptor has
> passed through a fork(); in the other (probably child) processes, the
> lower-level close() routine should be called to avoid a
> double-flush--whereby buffered data is flushed out of one process, and
> then the *same* buffered data is flushed out of the (other)
> child-/parent-process' copy of the file-object.
>
> So, if you call sys.stdin.close() in the child-process in
> _bootstrap(), then it could lead to a double-flush corrupting output
> somewhere in the application that uses the multiprocessing module.
>
> You can expect similar issues with just about /any/ `file-like objects'
> that might have `file-like semantics' of buffering data and flushing
> it on close, also--because you end up with multiple copies of the same
> object in `pre-flush' state, and each copy tries to flush at some point.
>
> As such, I'd recommend against just using .close(); you might use
> something like `if hasattr(sys.stdin, "fileno"): ...'; but, if your
> `else' clause unconditionally calls sys.stdin.close(), then you still
> have double-flush problems if someone's set sys.stdin to a file-like
> object with output-buffering.
>
> I guess you could try calling that an `edge-case' and seeing if anyone
> screams. It'd be sort-of nice if there was finer granularity in the
> file API--maybe if file.close() took a boolean `flush' argument

Graham
--
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing module and os.close(sys.stdin.fileno())

2009-02-18 Thread Graham Dumpleton
On Feb 19, 1:16 pm, Jesse Noller  wrote:
> On Tue, Feb 17, 2009 at 10:34 PM, Graham Dumpleton
>
>
>
>  wrote:
> > Why is the multiprocessing module, ie., multiprocessing/process.py, in
> > _bootstrap() doing:
>
> >  os.close(sys.stdin.fileno())
>
> > rather than:
>
> >  sys.stdin.close()
>
> > Technically it is feasible that stdin could have been replaced with
> > something other than a file object, where the replacement doesn't have
> > a fileno() method.
>
> > In that sort of situation an AttributeError would be raised, which
> > isn't going to be caught as either OSError or ValueError, which is all
> > the code watches out for.
>
> > Graham
> > --
> >http://mail.python.org/mailman/listinfo/python-list
>
> I don't know why it was implemented that way. File an issue on the
> tracker and assign it to me (jnoller) please.

Created as:

  http://bugs.python.org/issue5313

I don't see option to assign, so you are on nosy list to start with.

Graham
--
http://mail.python.org/mailman/listinfo/python-list


Re: Will multithreading make python less popular?

2009-02-17 Thread Graham Dumpleton
On Feb 16, 9:27 pm, Michele Simionato 
wrote:
> On Feb 16, 10:34 am, rushen...@gmail.com wrote:
>
>
>
> > Hi everybody,
> > I am an engineer. I am trying to improve my software development
> > abilities. I have started programming with ruby. I like it very much
> > but i want to add something more. According to my previous research i
> > have designed a learning path for myself. It's like something below.
> >       1. Ruby (Mastering as much as possible)
> >       2. Python (Mastering as much as possible)
> >       3. Basic C++ or Basic Java
> > And the story begins here. As i search on the net,  I have found that
> > because of the natural characteristics of python such as GIL, we are
> > not able to write multi threaded programs. Oooops, in a kind of time
> > with lots of cpu cores and we are not able to write multi threaded
> > programs. That is out of fashion. How a such powerful language doesn't
> > support multi threading. That is a big minus for python. But there is
> > something interesting, something like multi processing. But is it a
> > real alternative for multi threading. As i searched it is not, it
> > requires heavy hardware requirements (lots of memory, lots of cpu
> > power). Also it is not easy to implement, too much extra code...
>
> multiprocessing is already implemented for you in the standard
> library.
> Of course it does not require heavy hardware requirements.
>
> > After all of that, i start to think about omiting python from my
> > carrier path and directly choosing c++ or java. But i know google or
> > youtube uses python very much. How can they choose a language which
> > will be killed by multi threading a time in near future. I like python
> > and its syntax, its flexibility.
>
> > What do you think about multi threading and its effect on python. Why
> > does python have such a break and what is the fix. Is it worth to make
> > investment of time and money to a language it can not take advantage
> > of multi cores?
>
> You can take advantage of multi cores, just not with threads but with
> processes,
> which BTW is the right way to go in most situations. So (assuming you
> are not
> a troll) you are just mistaken in thinking that the only way to
> use multicores is via multithreading.

It is also a mistaken belief that you cannot take advantage of
multiple cores with multiple threads inside of a single process using
Python.

What no one seems to remember is that when calls are made into Python
extension modules implemented in C code, they have the option of
releasing the Python GIL. By releasing the Python GIL at that point,
it would allow other Python threads to run at the same time as
operations are being done in C code in the extension module.

Obviously if the extension module needs to manipulate Python objects
it will not be able to release the GIL, but not all extension modules
are going to be like this and could have quite sizable sections of
code that can run with the GIL released. Thus, you could have many
threads running in sections of C code at the same time as the one
thread currently executing Python code.
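As an illustration in pure Python, relying on the fact that CPython's hashlib (a C extension) releases the GIL while hashing large buffers, these threads can genuinely run on separate cores at once:

```python
import hashlib
import threading

# A large buffer: hashlib's C code releases the GIL while digesting it,
# so the threads below are not serialised by the interpreter lock.
data = b"x" * (16 * 1024 * 1024)
results = [None] * 4

def work(i):
    results[i] = hashlib.sha256(data).hexdigest()

threads = [threading.Thread(target=work, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```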

A very good example of this is when embedding Python inside of
Apache. So much stuff is actually done inside of Apache C code with
the GIL released, that there is more than ample opportunity for
multiple threads to be running across cores at the same time.

Graham

--
http://mail.python.org/mailman/listinfo/python-list


multiprocessing module and os.close(sys.stdin.fileno())

2009-02-17 Thread Graham Dumpleton
Why is the multiprocessing module, ie., multiprocessing/process.py, in
_bootstrap() doing:

  os.close(sys.stdin.fileno())

rather than:

  sys.stdin.close()

Technically it is feasible that stdin could have been replaced with
something other than a file object, where the replacement doesn't have
a fileno() method.

In that sort of situation an AttributeError would be raised, which
isn't going to be caught as either OSError or ValueError, which is all
the code watches out for.
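A small sketch of the failure mode (the FakeStdin class is a hypothetical stand-in for such a replacement):

```python
import os
import sys

class FakeStdin:
    """A file-like stdin replacement with no fileno() method."""
    def read(self, *args):
        return ""
    def close(self):
        pass

saved, sys.stdin = sys.stdin, FakeStdin()
try:
    os.close(sys.stdin.fileno())       # what _bootstrap() effectively does
    outcome = "closed"
except (OSError, ValueError):          # all the current code catches
    outcome = "caught"
except AttributeError:                 # what actually escapes
    outcome = "AttributeError"
finally:
    sys.stdin = saved
```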

Graham
--
http://mail.python.org/mailman/listinfo/python-list


Re: best way to serve wsgi with multiple processes

2009-02-11 Thread Graham Dumpleton
On Feb 12, 9:19 am, Robin  wrote:
> On Feb 11, 7:59 pm, Graham Dumpleton 
> wrote:
>
>
>
> > On Feb 11, 8:50 pm, Robin  wrote:
>
> > > Hi,
>
> > > I am building some computational web services using soaplib. This
> > > creates a WSGI application.
>
> > > However, since some of these services are computationally intensive,
> > > and may be long running, I was looking for a way to use multiple
> > > processes. I thought about using multiprocessing.Process manually in
> > > the service, but I was a bit worried about how that might interact
> > > with a threaded server (I was hoping the thread serving that request
> > > could just wait until the child is finished). Also it would be good to
> > > keep the services as simple as possible so it's easier for people to
> > > write them.
>
> > > I have at the moment the following WSGI structure:
> > > TransLogger(URLMap(URLParser(soaplib objects)))
> > > although presumably, due to the beauty of WSGI, this shouldn't matter.
>
> > > As I've found with all web-related Python stuff, I'm overwhelmed by
> > > the choice and number of alternatives. I've so far been using cherrypy
> > > and ajp-wsgi for my testing, but am aware of Spawning, twisted etc.
> > > What would be the simplest [quickest to setup and fewest details of
> > > the server required - ideally with a simple example] and most reliable
> > > [this will eventually be 'in production' as part of a large scientific
> > > project] way to host this sort of WSGI with a process-per-request
> > > style?
>
> > In this sort of situation one wouldn't normally do the work in the
> > main web server, but have a separate long running daemon process
> > embedding a mini web server that understands XML-RPC. The main web
> > server would then make XML-RPC requests against the backend daemon
> > process, which would use threading and/or queueing to handle the
> > requests.
>
> > If the work is indeed long running, the backend process would normally
> > just acknowledge the request and not wait. The web page would return
> > and it would be up to the user to then somehow occasionally poll the
> > web server, manually or by AJAX, to see how progress is going. That is,
> > further XML-RPC requests from the main server to the backend daemon
> > process asking about progress.
>
> > I don't believe the suggestions about fastcgi/scgi/ajp/flup or mod_wsgi
> > are really appropriate, as you don't want this done in web server
> > processes: you are then at the mercy of web server processes being
> > killed or dying when part way through something. Some of these systems
> > will do this if requests take too long. Thus it is better to offload
> > the real work to another process.
>
> Thanks - in this case I am contrained to use SOAP (I am providing SOAP
> services using soaplib so they run as a WSGI app). I choose soaplib
> becuase it seems the simplest way to get soap services running in
> Python (I was hoping to get this setup quickly).
> So I am not really able to get into anything more complex as you
> suggest... I have my nice easy WSGI app soap service, I would just
> like it to run in a process pool to avoid GIL.

You can still use SOAP, you don't have to use XML-RPC, they are after
all just an interprocess communications mechanism.

> Turns out I can do that
> with apache+mod_wsgi and daemon mode, or flup forked server (I would
> probably use ajp - so flup is in a seperate process to apache and
> listens on some local port, and apache proxies to that using the ajp
> protocol). I'm not sure which one is best... for now I'm continuing to
> just develop on cherrypy on my own machine.

In mod_wsgi daemon mode the application is still in a distinct
process. The only difference is that Apache is acting as the process
supervisor, so you do not have to install a separate system such as
supervisord or monit to start up the process and ensure it is
restarted if it crashes; Apache/mod_wsgi will do that for you. You
also don't need flup when using mod_wsgi as it provides everything.

> I suspect I will use ajp forked flup, since that only requires
> mod_proxy and mod_proxy_ajp which I understand come with standard
> apache and the system administrators will probably be happier with.

The Apache/mod_wsgi approach actually has fewer dependencies. For it
you only need Apache+mod_wsgi. For AJP you need Apache+flup+monit-or-
supervisord. Just depends on which dependencies you think are easier
to configure and manage. :-)

Graham
--
http://mail.python.org/mailman/listinfo/python-list


Re: best way to serve wsgi with multiple processes

2009-02-11 Thread Graham Dumpleton
2009/2/12 alex goretoy :
> GAE (Google App Engine) uses WSGI for webapps. You don't have the overhead of
> managing a server and all it's services this way as well. Just manage dns
> entries. Although, there are limitations depending on your project needs of
> what libs you need to use.

GAE is not suitable as it kills off any requests that take more than
a set time. That time isn't that long, so it can't support long
running requests.

Graham

> appengine.google.com
>
> -Alex Goretoy
> http://www.goretoy.com
>
>
>
> On Wed, Feb 11, 2009 at 1:59 PM, Graham Dumpleton
>  wrote:
>>
>> On Feb 11, 8:50 pm, Robin  wrote:
>> > Hi,
>> >
>> > I am building some computational web services using soaplib. This
>> > creates a WSGI application.
>> >
>> > However, since some of these services are computationally intensive,
>> > and may be long running, I was looking for a way to use multiple
>> > processes. I thought about using multiprocessing.Process manually in
>> > the service, but I was a bit worried about how that might interact
>> > with a threaded server (I was hoping the thread serving that request
>> > could just wait until the child is finished). Also it would be good to
>> > keep the services as simple as possible so it's easier for people to
>> > write them.
>> >
>> > I have at the moment the following WSGI structure:
>> > TransLogger(URLMap(URLParser(soaplib objects)))
>> > although presumably, due to the beauty of WSGI, this shouldn't matter.
>> >
>> > As I've found with all web-related Python stuff, I'm overwhelmed by
>> > the choice and number of alternatives. I've so far been using cherrypy
>> > and ajp-wsgi for my testing, but am aware of Spawning, twisted etc.
>> > What would be the simplest [quickest to setup and fewest details of
>> > the server required - ideally with a simple example] and most reliable
>> > [this will eventually be 'in production' as part of a large scientific
>> > project] way to host this sort of WSGI with a process-per-request
>> > style?
>>
>> In this sort of situation one wouldn't normally do the work in the
>> main web server, but have a separate long running daemon process
>> embedding a mini web server that understands XML-RPC. The main web
>> server would then make XML-RPC requests against the backend daemon
>> process, which would use threading and/or queueing to handle the
>> requests.
>>
>> If the work is indeed long running, the backend process would normally
>> just acknowledge the request and not wait. The web page would return
>> and it would be up to the user to then somehow occasionally poll the
>> web server, manually or by AJAX, to see how progress is going. That is,
>> further XML-RPC requests from the main server to the backend daemon
>> process asking about progress.
>>
>> I don't believe the suggestions about fastcgi/scgi/ajp/flup or mod_wsgi
>> are really appropriate, as you don't want this done in web server
>> processes: you are then at the mercy of web server processes being
>> killed or dying when part way through something. Some of these systems
>> will do this if requests take too long. Thus it is better to offload
>> the real work to another process.
>>
>> Graham
>> --
>> http://mail.python.org/mailman/listinfo/python-list
>
>
--
http://mail.python.org/mailman/listinfo/python-list


Re: best way to serve wsgi with multiple processes

2009-02-11 Thread Graham Dumpleton
On Feb 11, 8:50 pm, Robin  wrote:
> Hi,
>
> I am building some computational web services using soaplib. This
> creates a WSGI application.
>
> However, since some of these services are computationally intensive,
> and may be long running, I was looking for a way to use multiple
> processes. I thought about using multiprocessing.Process manually in
> the service, but I was a bit worried about how that might interact
> with a threaded server (I was hoping the thread serving that request
> could just wait until the child is finished). Also it would be good to
> keep the services as simple as possible so it's easier for people to
> write them.
>
> I have at the moment the following WSGI structure:
> TransLogger(URLMap(URLParser(soaplib objects)))
> although presumably, due to the beauty of WSGI, this shouldn't matter.
>
> As I've found with all web-related Python stuff, I'm overwhelmed by
> the choice and number of alternatives. I've so far been using cherrypy
> and ajp-wsgi for my testing, but am aware of Spawning, twisted etc.
> What would be the simplest [quickest to setup and fewest details of
> the server required - ideally with a simple example] and most reliable
> [this will eventually be 'in production' as part of a large scientific
> project] way to host this sort of WSGI with a process-per-request
> style?

In this sort of situation one wouldn't normally do the work in the
main web server, but have a separate long running daemon process
embedding a mini web server that understands XML-RPC. The main web
server would then make XML-RPC requests against the backend daemon
process, which would use threading and/or queueing to handle the
requests.

If the work is indeed long running, the backend process would normally
just acknowledge the request and not wait. The web page would return
and it would be up to the user to then somehow occasionally poll the
web server, manually or by AJAX, to see how progress is going. That
is, further XML-RPC requests from the main server to the backend
daemon process asking about progress.
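A hedged sketch of that acknowledge-and-poll pattern using only the standard library (the port number, job ids and function names are illustrative, not from the original post):

```python
import threading
import time
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

jobs = {}  # job id -> status

def submit(job_id):
    """Acknowledge immediately and do the slow work in the background."""
    jobs[job_id] = "running"
    def work():
        time.sleep(0.1)               # stand-in for the long computation
        jobs[job_id] = "done"
    threading.Thread(target=work).start()
    return "accepted"

def status(job_id):
    return jobs.get(job_id, "unknown")

# The backend daemon process embedding a mini XML-RPC web server:
server = SimpleXMLRPCServer(("localhost", 8331), logRequests=False)
server.register_function(submit)
server.register_function(status)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The front-end web server acknowledges and later polls for progress:
proxy = xmlrpc.client.ServerProxy("http://localhost:8331")
ack = proxy.submit(1)
while proxy.status(1) != "done":
    time.sleep(0.05)
final = proxy.status(1)
server.shutdown()
```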

I don't believe the suggestions about fastcgi/scgi/ajp/flup or
mod_wsgi are really appropriate, as you don't want this done in web
server processes: you are then at the mercy of web server processes
being killed or dying when part way through something. Some of these
systems will do this if requests take too long. Thus it is better to
offload the real work to another process.

Graham
--
http://mail.python.org/mailman/listinfo/python-list


Re: Sloooooowwwww WSGI restart

2009-01-29 Thread Graham Dumpleton
On Jan 30, 9:53 am, Ron Garret  wrote:
> In article <498171a5$0$3681$426a7...@news.free.fr>,
>  Bruno Desthuilliers 
>
>  wrote:
> > Ron Garret a écrit :
> > > In article ,
> > >  Aleksandar Radulovic  wrote:
> > (snip)
> > >> Secondly, why are you restarting apache after code changes? In normal
> > >> circumstances, you shouldn't have to do that.
>
> > > I thought (and experiment confirms) that only the main WSGI app file
> > > gets reloaded automatically when it changes, not the libraries.
>
> > Depends on how you configure mod_wsgi. Read the part about "Process
> > Reloading Mechanism" here:
> >http://code.google.com/p/modwsgi/wiki/ReloadingSourceCode
>
> I'm running Debian Etch, which means I have an older version of
> mod_wsgi, which means I don't have the process reloading mechanism.

Back port available at:

  http://packages.debian.org/etch-backports/libapache2-mod-wsgi

Graham

--
http://mail.python.org/mailman/listinfo/python-list


Re: More mod_wsgi weirdness: process restarts on redirect

2009-01-29 Thread Graham Dumpleton
On Jan 30, 11:01 am, Ron Garret  wrote:
> In article ,
>  Joshua Kugler  wrote:
>
> > Ron Garret wrote:
> > > My question is: is this supposed to be happening?  Or is this an
> > > indication that something is wrong, and if so, what?
>
> > You are probably just hitting a different instance of Apache, thus the
> > different process ID.
>
> Yep, that's what it turned out to be.  I thought I had a
> WSGIDaemonProcess processes=1 directive in my config, but I had it in
> the wrong place (a different vhost) so it wasn't actually active.
> http://code.google.com/p/modwsgi/wiki/ProcessesAndThreading
> But that leaves me wondering why the redirect would reliably trigger
> switching processes.  The reason I thought that I had the correct
> configuration and only had one process is that when I reloaded the
> non-redirected page I *always* got the same process ID.  How does
> mod_wsgi decide which process (and which thread for that matter) to use?

Details on process/threading in mod_wsgi available at:

  http://code.google.com/p/modwsgi/wiki/ProcessesAndThreading

When using WSGIDaemonProcess directive, if you want a single process
it is better to allow it to default to a single process and not have
'processes=1'. As soon as you say 'processes=1' it will trigger
wsgi.multiprocess to be True rather than default of False. This may
sound counter intuitive, but is a little back door to allow
wsgi.multiprocess to be set to True when distributing an application
across a cluster of machines, where it does need to be True even if
each machine only has a single process for that application. That
wsgi.multiprocess is True will not usually matter unless you are
trying to use debugging middleware that requires that there only be a
single process.
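As a hedged sketch (the daemon group name, thread count and paths are illustrative, not from the original discussion), the single-process daemon mode configuration would look like:

```apache
# Single-process daemon mode: omit processes=1 so that
# wsgi.multiprocess stays False; only the thread count is given.
WSGIDaemonProcess myapp threads=15
WSGIProcessGroup myapp
WSGIScriptAlias / /usr/local/wsgi/scripts/myapp.wsgi
```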

As to why you were getting a different process, because you were
actually running in embedded mode due to WSGIDaemonProcess/
WSGIProcessGroup being in wrong context, then what process was used
was really up to Apache and how it works. Specifically it can have
multiple processes that can listen on the HTTP port (80). Because only
one should be listening at a time it uses a cross process mutex lock
to mediate access. When a process handles a request, it gives up the
lock. If using worker MPM then another thread in same process may get
lock, or for either worker MPM or prefork MPM, then another process
could get it. Which actually gets it is a bit indeterminate, as it
simply depends on which process the operating system lets have the
lock next. So, there is no strict rule one can state as to who would
get it next.

Graham
--
http://mail.python.org/mailman/listinfo/python-list


Re: Sloooooowwwww WSGI restart

2009-01-29 Thread Graham Dumpleton
On Jan 29, 8:15 pm, Aleksandar Radulovic  wrote:
> Graham,
>
> On Thu, Jan 29, 2009 at 1:16 AM, Graham Dumpleton
>
>  wrote:
> > Sorry, you are wrong to assume that an Apache restart is not
> > required.
> > If you are using mod_wsgi embedded mode, or mod_python, then a code
> > change will always require a full restart of Apache.
>
> I am running several middleware apps I'm working on under mod_python
> (simple setup using mod_python.publisher handler) and so far, haven't
> had the reason to restart apache at all.

The automatic module reloading in mod_python when using
mod_python.publisher only applies to the publisher code files, or
those imported via the mod_python module importer which are also
candidates for reloading.

Basically, any module or package installed on sys.path is not a
candidate for reloading. Thus if you installed your code outside of
your document tree in a directory in sys.path and those code files
were changed, then no automatic reload would occur.

For further information see documentation for import_module() in:

  http://www.modpython.org/live/current/doc-html/pyapi-apmeth.html

In other words, it is not universal that any code change will be
automatically detected and a reload occur. There are also various
caveats on what mod_python module importer does, as it is reloading
modules into an existing process and not restarting the whole process.
If you are not careful, weird things can happen.

> > Thus, the conjecture that Apache/mod_wsgi cannot be used and that
> > CherryPy WSGI server or Paster server must be used when developing a
> > Python web application is false. If using mod_wsgi then daemon mode
>
> Not sure what (or whom) you're referring to.

It was a general statement. There are various people on different
forums and irc channels who keep saying that a full Apache restart is
required with Apache/mod_wsgi when making code changes. Am just
stating for the record that that isn't true.

> IMO, developing TG/Pylons/Django
> apps is much more convenient with an embedded web server (cherrypy or paster) as
> it is possible to do rapid development without resorting to restarts.

In the case of Django, it uses a single non-threaded process, and
thus is not an adequate test of either a multithreaded or
multiprocess environment, something which Apache/mod_wsgi provides.

The CherryPy WSGI server, although it provides multithreading, doesn't
have a multiprocess option, and it doesn't itself have a reload
feature but depends on some layer on top to manage that, from what I
remember.

So, depends on how closely you want your development environment to
mirror production so that issues are picked up sooner, rather than
only at the point of deployment to a production system when you are
under pressure.

Graham
--
http://mail.python.org/mailman/listinfo/python-list


Re: Sloooooowwwww WSGI restart

2009-01-28 Thread Graham Dumpleton
On Jan 29, 8:44 am, Aleksandar Radulovic  wrote:
> Hi there,
>
> On Wed, Jan 28, 2009 at 9:35 PM, Ron Garret  wrote:
> > I'm running a WSGI app under apache/mod_wsgi and I've noticed that
>
> Off the bat, there's no reason to run an app under apache/mod_wsgi
> while developing it,
> ie. if u use Pylons or TurboGears, there's an easier way to serve the
> app (using paster
> or cherrypy).
>
> Secondly, why are you restarting apache after code changes? In normal
> circumstances,
> you shouldn't have to do that.

Sorry, you are wrong to assume that an Apache restart is not
required.

If you are using mod_wsgi embedded mode, or mod_python, then a code
change will always require a full restart of Apache.

If you are using mod_wsgi daemon mode, you need to at least touch the
WSGI script file for an automatic restart of that specific application
to occur. Changes to arbitrary Python code files are not detected.

If using mod_wsgi daemon mode, you can also set up a separate
background thread to monitor for arbitrary code changes as described
for Django in:

  http://blog.dscpl.com.au/2008/12/using-modwsgi-when-developing-django.html

That page references original documentation on mod_wsgi wiki for bulk
of information.

Thus, the conjecture that Apache/mod_wsgi cannot be used and that
CherryPy WSGI server or Paster server must be used when developing a
Python web application is false. If using mod_wsgi then daemon mode
would of course be preferred though, with touching the WSGI script
file being safest option to ensure updates across multiple files
picked up at same time, but if you really want completely automated
restarts, use the recipe in mod_wsgi documentation.
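A simplified, hedged sketch of that monitoring idea (this is not the actual code from the linked recipe; the function names are illustrative). A background thread watches file modification times and kills the daemon process, which Apache then restarts:

```python
import os
import signal
import threading
import time

def code_changed(mtimes, files):
    """Return True if any tracked file's mtime changed since last call."""
    changed = False
    for path in files:
        try:
            mtime = os.path.getmtime(path)
        except OSError:
            continue                   # file vanished; ignore it
        if path in mtimes and mtimes[path] != mtime:
            changed = True
        mtimes[path] = mtime
    return changed

def monitor(files, interval=1.0):
    """Die on change; Apache then restarts the daemon process."""
    mtimes = {}
    code_changed(mtimes, files)        # prime the mtime cache
    while True:
        time.sleep(interval)
        if code_changed(mtimes, files):
            os.kill(os.getpid(), signal.SIGINT)

# In the WSGI script file one might start this as a daemon thread:
# threading.Thread(target=monitor, args=(files,), daemon=True).start()
```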

Graham
--
http://mail.python.org/mailman/listinfo/python-list


Re: Sloooooowwwww WSGI restart

2009-01-28 Thread Graham Dumpleton
On Jan 29, 8:35 am, Ron Garret  wrote:
> I'm running a WSGI app under apache/mod_wsgi and I've noticed that
> whenever I restart the server after making a code change it takes a very
> long time (like a minute) before the script is active again.  In other
> words, I do an apachectl restart, reload the page in my browser, and one
> minute later it finally comes up.  During this time CPU usage is
> essentially zero.  Loading all the code manually into a python
> interpreter is virtually instantaneous, and all subsequent interactions
> with the app are very fast.
>
> Does anyone have any ideas what might be going on or how to debug this?

The better place to discuss this is the mod_wsgi list.

  http://groups.google.com/group/modwsgi?hl=en

As to the problem, you need to distinguish between whether it is
Apache that is taking a long time to restart and become ready to
handle requests, or whether the delay is on the first subsequent
request made against your WSGI application.

When Apache restarts, it doesn't by default load your WSGI
application, it only does that the first time a request comes in
directed at it. Thus, if after you restart Apache you do a request of
a static file, do you get a response straight away? If it doesn't
respond straight away, then it is an issue with Apache restart and not
mod_wsgi as your WSGI application wouldn't be involved.

If the static file request is fine and only the first request to the
WSGI application is slow, then it is likely that the problem is the
startup cost of loading your WSGI application and initialising it
upon the first request. As you don't say what Python web
framework/application you are using, it is hard to say what the issue
may be there.

If it is an Apache issue and not mod_wsgi, as pointed out already, it
could be that you have Apache set up to do reverse DNS lookups on the
client IP address and that isn't working properly. This often happens
for Windows users.

So, narrow down where problem is occurring.

Graham
--
http://mail.python.org/mailman/listinfo/python-list


Re: Recommendation for a small web framework like Perl's CGI::Application to run as CGI?

2009-01-27 Thread Graham Dumpleton
On Jan 28, 11:28 am, James Mills  wrote:
> On Wed, Jan 28, 2009 at 10:15 AM, excord80  wrote:
> > Well, let's see. I don't need a templating library, since -- as you
> > pointed out -- I can just use Python's own. I don't need a db
> > interface (can just make my own dbapi calls if needed). Don't need url
> > mapping (can just use mod_rewrite rules in my .htaccess to point at my
> > cgi scripts). Don't think I need any i80n. And I don't need an admin
> > interface. ... It would seem that I don't need a whole lot at the
> > moment.
>
> One option is to configure Apache with mod_wsgi and just
> use WSGI. Fairly simple really and much like CGI.

Worth highlighting with mod_wsgi is that you also don't have to have
everything inside of one WSGI application and can still use a
CGI-like approach of lots of WSGI script files. That is, you rely on
Apache's dispatching of URLs to file based resources, and thus no
Python specific dispatcher is needed, nor mod_rewrite rules.

Because each WSGI script file is given its own persistent sub
interpreter by default, with this approach you probably want to force
them all to run in the same sub interpreter to reduce memory usage.
This can be done using the WSGIApplicationGroup directive.
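A hedged sketch of that setup, following the pattern in the linked guidelines (the paths are illustrative):

```apache
# Serve a directory of WSGI script files CGI-style, but force them all
# into the same interpreter to reduce memory usage.
Alias /scripts/ /usr/local/wsgi/scripts/
<Directory /usr/local/wsgi/scripts>
    Options ExecCGI
    AddHandler wsgi-script .wsgi
    WSGIApplicationGroup %{GLOBAL}
</Directory>
```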

For further information see:

  
http://code.google.com/p/modwsgi/wiki/ConfigurationGuidelines#The_Apache_Alias_Directive

Graham
--
http://mail.python.org/mailman/listinfo/python-list


Re: wsgi silently swallows errors

2009-01-19 Thread Graham Dumpleton
On Jan 20, 8:03 am, Jean-Paul Calderone  wrote:
> On Mon, 19 Jan 2009 12:15:29 -0800, Ron Garret  wrote:
> >Consider the following wsgi app:
>
> >def application(env, start_response):
> >  start_response('200 OK',[('Content-type','text/plain')])
> >  yield "hello"
> >  x=1/0
> >  yield "world"
>
> >The result of this is that the web browser displays "hello" and an error
> >message ends up in the web log.  But there is no other indication that
> >an error has occurred.
>
> >Is there any way to get WSGI to not silently swallow errors that occur
> >after start_response has been called?
>
> WSGI is a specification, not a piece of software.  The specification isn't
> swallowing the error, some piece of software is.  What WSGI container are
> you using?

Not Apache/mod_wsgi at least, as message would show in Apache error
logs.

[Tue Jan 20 09:03:19 2009] [info] [client ::1] mod_wsgi (pid=271, process='wsgi', application='dangermouse:8224|/wsgi/scripts/swallow.py'): Loading WSGI script '/usr/local/wsgi/scripts/swallow.py'
[Tue Jan 20 09:03:19 2009] [error] [client ::1] mod_wsgi (pid=271): Exception occurred processing WSGI script '/usr/local/wsgi/scripts/swallow.py'
[Tue Jan 20 09:03:19 2009] [error] [client ::1] Traceback (most recent call last):
[Tue Jan 20 09:03:19 2009] [error] [client ::1]   File "/usr/local/wsgi/scripts/swallow.py", line 7, in application
[Tue Jan 20 09:03:19 2009] [error] [client ::1]     x=1/0
[Tue Jan 20 09:03:19 2009] [error] [client ::1] ZeroDivisionError: integer division or modulo by zero

Would just need to make sure you look in the correct log if have
virtual host specific error logs.

Graham

--
http://mail.python.org/mailman/listinfo/python-list


Re: WSGI question: reading headers before message body has been read

2009-01-18 Thread Graham Dumpleton
On Jan 19, 6:43 am, Petite Abeille  wrote:
> On Jan 18, 2009, at 8:01 PM, Ron Garret wrote:
>
> > def application(environ, start_response):
> >    status = "200 OK"
> >    headers = [('Content-Type', 'text/html'), ]
> >    start_response(status, headers)
> >    if int(environ['CONTENT_LENGTH'])>1000: return 'File too big'
>
> How would that work for chunked transfer-encoding?

Chunked transfer encoding on request content is not supported by WSGI
specification as WSGI requires CONTENT_LENGTH be set and disallows
reading more than defined content length, where CONTENT_LENGTH is
supposed to be taken as 0 if not provided.
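As an aside, a defensive WSGI application can apply that rule itself; a minimal sketch (the helper name and limit are illustrative, not from any framework):

```python
def read_request_body(environ, limit=1024 * 1024):
    """Read the request body the way WSGI requires: a missing or empty
    CONTENT_LENGTH is treated as 0, and no more than the declared
    number of bytes is ever read from wsgi.input.
    """
    try:
        length = int(environ.get('CONTENT_LENGTH') or 0)
    except ValueError:
        length = 0
    if length > limit:
        raise ValueError('request body exceeds %d bytes' % limit)
    return environ['wsgi.input'].read(length)
```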

If using Apache/mod_wsgi 3.0 (currently in development, so need to use
subversion copy), you can step outside what WSGI strictly allows and
still handle chunked transfer encoding on request content, but you
still don't have a CONTENT_LENGTH so as to check in advance if more
data than expected is going to be sent.

If wanting to know how to handle chunked transfer encoding in
mod_wsgi, better off asking on mod_wsgi list.

Graham


Re: WSGI question: reading headers before message body has been read

2009-01-18 Thread Graham Dumpleton
On Jan 19, 6:01 am, Ron Garret  wrote:
> I'm writing a WSGI application and I would like to check the content-
> length header before reading the content to make sure that the content
> is not too big in order to prevent denial-of-service attacks.  So I do
> something like this:
>
> def application(environ, start_response):
>     status = "200 OK"
>     headers = [('Content-Type', 'text/html'), ]
>     start_response(status, headers)
>     if int(environ['CONTENT_LENGTH'])>1000: return 'File too big'

You should be returning 413 (Request Entity Too Large) error status
for that specific case, not a 200 response.

You should not be returning a bare string as response content as it is
very inefficient; wrap it in a list.
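Putting both corrections together, a hedged sketch of how the application might look (the 1000-byte limit is the figure from the question; byte-string bodies assume a modern Python):

```python
def application(environ, start_response):
    try:
        length = int(environ.get('CONTENT_LENGTH') or 0)
    except ValueError:
        length = 0
    if length > 1000:
        # Reject with the specific status for an oversized body, and
        # wrap the body in a list rather than returning a bare string.
        start_response('413 Request Entity Too Large',
                       [('Content-Type', 'text/plain')])
        return [b'File too big']
    start_response('200 OK', [('Content-Type', 'text/html')])
    return [b'OK']
```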

> But this doesn't seem to work.  If I upload a huge file it still waits
> until the entire file has been uploaded before complaining that it's
> too big.
>
> Is it possible to read the HTTP headers in WSGI before the request
> body has been read?

Yes.

The issue is that in order to avoid the client sending the data the
client needs to actually make use of HTTP/1.1 headers to indicate it
is expecting a 100-continue response before sending data. You don't
need to handle that as Apache/mod_wsgi does it for you, but the only
web browser I know of that supports 100-continue is Opera browser.
Clients like curl support it as well though. In other words,
if people use IE, Firefox or Safari, the request content will be sent
regardless anyway.

There is still more to this, though. First off, if you
are going to handle 413 errors in your own WSGI application and you
are using mod_wsgi daemon mode, then request content is still sent by
browser regardless, even if using Opera. This is because the act of
transferring content across to mod_wsgi daemon process triggers return
of 100-continue to client and so it sends data. There is a ticket for
mod_wsgi to implement proper 100-continue support for daemon mode, but
will be a while before that happens.

Rather than have WSGI application handle 413 error cases, you are
better off letting Apache/mod_wsgi handle it for you. To do that all
you need to do is use the Apache 'LimitRequestBody' directive. This
will check the content length for you and send 413 response without
the WSGI application even being called. When using daemon mode, this
is done in Apache child worker processes and for 100-continue case
data will not be read at all and can avoid client sending it if using
Opera.

Only caveat on that is that the currently available mod_wsgi has a bug
in it such that 100-continue requests do not always work for daemon
mode. You need to apply the fix in:

  http://code.google.com/p/modwsgi/issues/detail?id=121

For details on LimitRequestBody directive see:

  http://httpd.apache.org/docs/2.2/mod/core.html#limitrequestbody
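A minimal sketch of the relevant Apache configuration (the limit value and directory are illustrative only):

```apache
# Send a 413 for request bodies over 1 MB (value is in bytes),
# before the WSGI application is ever invoked.
<Directory /usr/local/wsgi/scripts>
    LimitRequestBody 1048576
</Directory>
```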

Graham


Re: Parent module not loaded error

2009-01-14 Thread Graham Dumpleton
On Jan 14, 9:41 pm, Ståle Undheim  wrote:
> On Jan 14, 11:31 am, Graham Dumpleton 
> wrote:
>
>
>
> > On Jan 14, 9:20 pm, Ståle Undheim  wrote:
>
> > > I have a pretty strange error that I can't figure out the cause off.
> > > This is in a Django app.
>
> > > I am using berkelydb, with secondary databases for indexing. The
> > > secondary databases are associated with a callback that uses cPickle
> > > to serialize index values. The problem is that cPickle.dumps(value)
> > > fails when I run it through mod_wsgi or mod_python on the deployment
> > > server (Ubuntu Linux 8.10), but works fine when when I use Djangos
> > > runserver on the same computer. Also, on my own computer (Ubuntu Linux
> > > 8.10), it works fine through mod_python.
>
> > > Each time I call on cPickle.dumps(value) I get:
> > > SystemError("Parent module 'd4' not loaded",)
>
> > > Using sys.exc_info() I get no type or traceback, just that exception.
> > > The callstack as far as I can figure is:
>
> > > django
> > > view
> > > bsddb.put
> > > indexer callback
> > > cPickle.dumps
>
> > > cPickle.loads works fine within the callback, I can also use
> > > cPickle.dumps() outside the indexer callback. But inside the callback,
> > > I need to use normal pickle instead. I just want to know why. I am
> > > assuming it has something to do with the fact that I go from python to
> > > C (bsddb) back to python (indexer callback) and back to C (cPickle).
> > > It is still strange then that cPickle.loads() work, but not
> > > cPickle.dumps(). I am also only storing integer values when I get this
> > > error, so no fancy list, dictionaries or objects.
>
> > Where is module 'd4'? Is that one of yours defined inside of your
> > Django project?
>
> d4 is the root project name. This message occurs in d4/views.py.
>
> Here is how I set up my wsgi.py - which is in the d4/ folder:
>   sys.path.append(path.join(path.dirname(__file__), '..'))
>
>   os.environ['DJANGO_SETTINGS_MODULE'] = 'd4.settings'
>   application = django.core.handlers.wsgi.WSGIHandler()
>
> > One of the problems with Django runserver is that the parent of the
> > site directory and the site directory are effectively starting points
> > for searching for modules, even though neither is explicitly listed in
> > sys.path. The parent is used because runserver does momentarily add
> > parent to sys.path to import site package root, after which it removes
> > directory from sys.path. The site directory itself is used because for
> > runserver that is the current working directory and imports look there
> > automatically.
>
> > End result is that modules inside of site can be imported either via
> > site package root, or direct. For example, if d4.py was at same level
> > as settings.py file and site was called mysite, you could import it as
> > mysite.d4 or just d4. This is bad because means you could end up with
> > two copies of a module imported under the different name references.
>
> Everywhere in the code, the imports refer to d4.something.something.
> All urls.py files refer to the full package names. The test code where
> I am running this is actually quit minimal.
>
> > If they way the pickle was done in application was such that reference
> > was via d4, and then when you deployed to mod_python or mod_wsgi you
> > only added parent directory of site to Python path, then it will not
> > be able to find d4, since at that point would only be found as
> > mysite.d4.
>
> I use "import cPickle as pickle" at the top off my views file. So I
> don't reference imports from other files.
>
> > If this is the case, add the site directory to Python path by setting
> > PythonPath for mod_python or sys.path in WSGI script file for
> > mod_wsgi. Better still, fix up your code so module references in code,
> > ie., urls.py, or anywhere else, are always done by site package root
> > reference and not direct. Ie., such that it uses mysite.d4 and not
> > just d4.
>
> > Would that seem about right?
>
> Unfortunatly, at least to me, it seems that all code is allready
> correctly using packagenames. The views.py file where I am testing
> this is a simple file that doesn't depend on anything else in the
> project. It only imports a set of system modules for testing and info.
> The only "dependency" is really the urls.py file, which does refer to
> all the views as "d4.views.somemethod".

As a test, just prior to where the pickle is done, do:

  import sys
  import d4

  print >> sys.stderr, "d4.__name__", d4.__name__
  print >> sys.stderr, "d4.__file__", d4.__file__

This will just confirm that manual import works and what values of
those attributes are. The messages will end up in Apache error log for
that virtual host or main server error log as appropriate.

Graham


Re: Parent module not loaded error

2009-01-14 Thread Graham Dumpleton
On Jan 14, 9:20 pm, Ståle Undheim  wrote:
> I have a pretty strange error that I can't figure out the cause off.
> This is in a Django app.
>
> I am using berkelydb, with secondary databases for indexing. The
> secondary databases are associated with a callback that uses cPickle
> to serialize index values. The problem is that cPickle.dumps(value)
> fails when I run it through mod_wsgi or mod_python on the deployment
> server (Ubuntu Linux 8.10), but works fine when when I use Djangos
> runserver on the same computer. Also, on my own computer (Ubuntu Linux
> 8.10), it works fine through mod_python.
>
> Each time I call on cPickle.dumps(value) I get:
> SystemError("Parent module 'd4' not loaded",)
>
> Using sys.exc_info() I get no type or traceback, just that exception.
> The callstack as far as I can figure is:
>
> django
> view
> bsddb.put
> indexer callback
> cPickle.dumps
>
> cPickle.loads works fine within the callback, I can also use
> cPickle.dumps() outside the indexer callback. But inside the callback,
> I need to use normal pickle instead. I just want to know why. I am
> assuming it has something to do with the fact that I go from python to
> C (bsddb) back to python (indexer callback) and back to C (cPickle).
> It is still strange then that cPickle.loads() work, but not
> cPickle.dumps(). I am also only storing integer values when I get this
> error, so no fancy list, dictionaries or objects.

Where is module 'd4'? Is that one of yours defined inside of your
Django project?

One of the problems with Django runserver is that the parent of the
site directory and the site directory are effectively starting points
for searching for modules, even though neither is explicitly listed in
sys.path. The parent is used because runserver does momentarily add
parent to sys.path to import site package root, after which it removes
directory from sys.path. The site directory itself is used because for
runserver that is the current working directory and imports look there
automatically.

End result is that modules inside of site can be imported either via
site package root, or direct. For example, if d4.py was at same level
as settings.py file and site was called mysite, you could import it as
mysite.d4 or just d4. This is bad because it means you could end up
with two copies of the same module imported under different name
references.

If the way the pickle was done in the application was such that the
reference was via d4, and when you deployed to mod_python or mod_wsgi
you
only added parent directory of site to Python path, then it will not
be able to find d4, since at that point would only be found as
mysite.d4.

If this is the case, add the site directory to Python path by setting
PythonPath for mod_python or sys.path in WSGI script file for
mod_wsgi. Better still, fix up your code so module references in code,
ie., urls.py, or anywhere else, are always done by site package root
reference and not direct. Ie., such that it uses mysite.d4 and not
just d4.
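As a sketch of the first option, the WSGI script file might set up sys.path like this (the /usr/local/django/mysite location is hypothetical):

```python
import os
import sys

# Hypothetical site directory containing settings.py and the d4 module.
site_dir = '/usr/local/django/mysite'
parent_dir = os.path.dirname(site_dir)

# parent_dir makes 'mysite.d4' importable; adding site_dir as well lets
# references recorded as plain 'd4' (for example inside pickles) resolve.
for path in (parent_dir, site_dir):
    if path not in sys.path:
        sys.path.append(path)

os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
```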

Would that seem about right?

Graham


Re: mod_python: delay in files changing after alteration

2009-01-06 Thread Graham Dumpleton
On Jan 6, 2:39 am, "psaff...@googlemail.com" 
wrote:
> Maybe this is an apache question, in which case apologies.
>
> I am running mod_python 3.3.1-3 on apache 2.2.9-7. It works fine, but
> I find that when I alter a source file during development, it
> sometimes takes 5 seconds or so for the changes to be seen. This might
> sound trivial, but when debugging tens of silly errors, it's annoying
> that I have to keep hitting refresh on my browser waiting for the
> change to "take". I'm guessing this is just a caching issue of some
> kind, but can't figure out how to switch it off. Any suggestions?
>
> The entry in my apache2.conf looks like this:
>
> 
>    SetHandler mod_python
>    PythonHandler mod_python.publisher
>    PythonDebug On
> 

If the change is to a Python module installed on sys.path the change
would never be reloaded by a process. If you are seeing a delay, it is
probably only because the request is being handled by a different
Apache child process that has never loaded the code before. This is
all because Apache is a multiprocess web server on UNIX.

Thus, any changes to modules/packages installed on sys.path require a
full restart of Apache to ensure they are loaded by all Apache child
worker processes.

So, which code files are you actually modifying, ie., where do they
exist and how are they imported?

Graham



Re: mod_pylite?

2009-01-01 Thread Graham Dumpleton
On Jan 2, 2:28 pm, excord80  wrote:
> On Jan 1, 9:12 pm, s...@pobox.com wrote:
>
> >     
> > >>http://broadcast.oreilly.com/2008/12/five-features-perl-5-needs-now.html
>
> >     >> and he mentions a neat-looking project called ``mod_perlite``. It
> >     >> sounds like it will be very handy. Anyone working on a
> >     >> ``mod_pylite``?  Has it been done before, maybe under a different
> >     >> name?
>
> > It's kind of hard to tell.  There's very little description of how
> > mod_perlite would be different than mod_perl other than it would be more
> > lightweight, presumably as mod_php somehow is.  That hardly seems like a
> > well-defined requirement document.
>
> > Does mod_wsgi fit the bill? http://www.rkblog.rk.edu.pl/w/p/mod_wsgi/
>
> I'm not sure if it fits the bill or not. The bill is two-fold:
>
> 1. The Apache module should present little risk to the admin who
> installs it. That is, it should not expose Apache's innards.
>
> 2. The Apache module should keep a Python instance running; run, for
> example, ``foo.py`` when a user accesses (for example) 
> ``http://www.example.com/path/to/foo.py?baz=88``; pass baz=88 to foo.py in the
> usual way; and return whatever html that script spits out.
>
> I'm not familiar with php or ``mod_php``, but I suspect that setup
> does something very similar to what's described above. This might
> explain why it's so blasted easy to deploy php scripts and create
> small and simple sites (and even not-so-small/simple sites) with it.
>
> Does mod_wsgi fit that bill? I don't know. The docs seem to be
> at http://code.google.com/p/modwsgi/w/list. Many of those are named
> "ChangesInVersion". I don't see any that named anything like
> "Introduction" or "BasicUsage" or "SimpleUsageLikeCGI" or even
> "Tutorial". So, my guess is that ``mod_wsgi`` doesn't fit the bill.

Have you looked up what the WSGI specification for Python even is?

  http://www.python.org/dev/peps/pep-0333/
  http://www.wsgi.org/wsgi/Learn_WSGI

Did you also read the front page of the wiki for mod_wsgi and follow
the main links it gives on the front page?

  http://code.google.com/p/modwsgi/
  http://code.google.com/p/modwsgi/wiki/InstallationInstructions
  http://code.google.com/p/modwsgi/wiki/DeveloperGuidelines

If you understand what WSGI is, then you will realise that mod_wsgi is
a very slim adapter for Apache that allows one to host any WSGI
application. In the way one normally uses it, the internals of Apache
are not exposed and do not need to be as the whole point of WSGI is
that it is a portable interface for hosting Python web applications on
various web hosting solutions and not Apache specifically.
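To make "slim" concrete, the entire contract a WSGI application has to satisfy fits in a few lines:

```python
def application(environ, start_response):
    # environ is a CGI-style dict describing the request;
    # start_response takes the status line and a list of header tuples;
    # the return value is an iterable of byte strings for the body.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from WSGI']
```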

Back to whether it is equivalent to mod_perlite, that really depends
on what mod_perlite does. If mod_perlite tries to enforce some sort of
page template system like PHP does, then no, mod_wsgi is not
equivalent, as that isn't what WSGI itself is about. All WSGI is about
is providing a most minimal interface for communicating with the web
server, everything else has to be done by the application running on
top of it. Thus WSGI and mod_wsgi are as light as they can get, perhaps
even lighter than mod_perlite could be as it may have to embody the
templating system, form handling, session management etc etc etc.

This doesn't mean you couldn't use mod_wsgi to effectively achieve the
same thing though, it just means your most minimal templating system,
ie., like PHP or even closer to traditional Python CGI, needs to be
implemented as an application on top of WSGI. Rather than do the
dispatch in your WSGI application though, you can still use Apache to
do the dispatching to individual file based resource files with the
right configuration. How this could be done has been answered a number
of times on mod_wsgi list as others have already wanted to know about
how to do a PHP like solution in Python and so it has been discussed.

If you want to talk more about this, come over to the mod_wsgi list on
Google Groups.

Graham




Re: Need help getting MoinMoin to run under WSGI

2008-12-28 Thread Graham Dumpleton
On Dec 28, 7:22 pm, Ron Garret  wrote:
> In article ,
>  Ron Garret  wrote:
>
>
>
> > I successfully installed MoinMoin as a CGI according to the instructions
> > on the moinmo.in site.  But when I tried to switch over to running it
> > under wsgi it failed thusly:
>
> > [Sat Dec 27 21:44:14 2008] [error] [client 66.214.189.2] Traceback (most
> > recent call last):
> > [Sat Dec 27 21:44:14 2008] [error] [client 66.214.189.2]   File
> > "/www/wikis/genesisgroup/moin.wsgi", line 49, in ?
> > [Sat Dec 27 21:44:14 2008] [error] [client 66.214.189.2]     from
> > MoinMoin.server.server_wsgi import WsgiConfig, moinmoinApp
> > [Sat Dec 27 21:44:14 2008] [error] [client 66.214.189.2] ImportError: No
> > module named MoinMoin.server.server_wsgi
>
> > The problem, I believe, is that I have both Python 2.4 and 2.5 installed
> > (it's a Debian box) and MM is installed under 2.5 but WSGI is using 2.4.  
> > I tried to fix this by setting WSGIPythonHome but to no avail.  I can't
> > figure out what to set it to.  The instructions say:
>
> > "the WSGIPythonHome directive should be used to specify the exact
> > location of the Python installation corresponding to the version of
> > Python compiled against"
>
> > I have two problems with this.  First, I didn't compile mod_wsgi, I got
> > it pre-built as a Debian module.  Second, what does "the exact location
> > of the Python installation" even mean?  Python2.5 is spread out in at
> > least three different places: /usr/local/bin, /usr/lib/python2.5, and
> > /usr/local/lib/python2.5.  I've tried setting WSGIPythonHome to all of
> > those (and a few other things as well) and nothing worked.
>
> > Also, "the version of Python compiled against" seems very odd.  What
> > does that mean?  Surely I don't have to recompile mod_wsgi every time I
> > change to a new version of Python?
>
> > Help!  Thanks!
>
> > rg
>
> So never mind, I figured it out.  I did indeed have to recompile
> mod_wsgi from source to get it to use Python 2.5.  (That turned out to
> be a major hassle.  I had to do it twice.  The first time through it
> made Apache dump core.  I still don't know why.)
>
> Seems to be working now though.

It probably crashed the first time because you didn't do a full stop
of Apache and instead just did a reload. Doing a reload may not unload
the Python libraries from Apache process correctly and so would have
been a mix of code for different versions of Python in same process.

Graham



Re: mod_python resources

2008-12-20 Thread Graham Dumpleton
On Dec 20, 2:47 pm, "Anjanesh Lekshminarayanan" 
wrote:
> Same requirement here.
> But isnt there any mod_python for Python 3.0 ?
> Or do we need to build it from source ourselves ?
>
> I was hoping there would be mod_wsgi binaries for Python 3.0.

At this stage it looks like there will not be a mod_python for Python
3.0.

If you want the ability to run Python embedded in Apache like
mod_python did, use a framework that can host on top of WSGI and host
it on mod_wsgi instead. The version of mod_wsgi in subversion
repository already supports Python 3.0.

Graham


Re: mod_python resources

2008-12-17 Thread Graham Dumpleton
On Dec 17, 11:10 am, Дамјан Георгиевски  wrote:
> > I'm trying again because I'm stubborn. Maybe the fourth time will be
> > the charm...
>
> > Are there any good tutorials out there for setting up Apache with
> > mod_python?
>
> mod_python is deprecated, nobody uses it. Use mod_wsgi: http://www.modwsgi.org/

The mod_python package is not deprecated, although it could be said to
be sleeping at the moment. You'll also probably still find that more
new people choose mod_python over mod_wsgi. This is because it has the
more obvious name to look for when Googling. It also has publisher and
PSP high level handler which are still attractive to many as they are
more lightweight and easier to get into than the large WSGI
frameworks. Finally, the Django folks still recommend in their
documentation to use mod_python.

Anyway, if wanting to host a WSGI capable application, using mod_wsgi
would be the more obvious choice. If wanting to write your own
framework, or work at low level, basing it on WSGI rather than
mod_python specific APIs would certainly be a better long term
direction to take.

Graham



Re: mod_python and files directory

2008-12-06 Thread Graham Dumpleton
On Dec 6, 1:52 am, "mete bilgin" <[EMAIL PROTECTED]> wrote:
> Hi all,
> I try to make a websevice with python and mod_python. İ try to make a po
> files, but i can not reach them in the page. When i ask the page like "
> os.listdir('.') " but i want to get files directory, what can i do? sorry
> for my bad describe of that. Thanks a lot...

The current working directory in Apache can be anything. You must
supply an absolute path to all directories/files you are trying to
access/use.
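A hedged sketch of what that looks like in practice (the 'po' subdirectory is taken from the question; the helper name is made up):

```python
import os

# Anchor all paths at this module's own location rather than relying on
# Apache's current working directory, which can be anything.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

def list_po_files(directory):
    """Return the .po files found in an absolute directory path."""
    return sorted(name for name in os.listdir(directory)
                  if name.endswith('.po'))

# From a handler this might be: list_po_files(os.path.join(BASE_DIR, 'po'))
```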

Graham


Re: Python/C API: using from a shared library

2008-11-26 Thread Graham Dumpleton
On Nov 26, 10:29 pm, Robie Basak <[EMAIL PROTECTED]> wrote:
> On 2008-11-25, Robie Basak <[EMAIL PROTECTED]> wrote:
>
> > If I use dlopen() to open a shared library that I've written, and that
> > shared library tries to use the Python/C API, then it fails. I've
> > reduced the problem to the test case below. The error is:
>
> > ImportError: /usr/lib/python2.5/lib-dynload/time.so: undefined symbol:
> > PyExc_ValueError
>
> I've submitted a bug for this. See http://bugs.python.org/issue4434 for
> a more detailed explanation and a workaround.

It isn't a bug in Python. You need to link the Python shared library
to your shared library properly. You appear not to be doing this.

Graham


Re: Apache & mod_python: I don't receive anything with POST method

2008-11-26 Thread Graham Dumpleton
On Nov 27, 12:21 am, [EMAIL PROTECTED] wrote:
> Hi,
>
> I'm using a simple form to make possible the users of our site upload
> files.
>
> 
>     
>     
>     
>         
>         
>     
>     
> 
>
> The "upload.py" looks like this:
>
> from mod_python import apache, util;
>
> def index(req):
>     form = util.FieldStorage(req, keep_blank_values=1)
>     try:
>         # form is empty here
>         # return form --> I get "{}"
>         ufile = form.get('upfile', None)
>
>         if not form.has_key('upfile'):
>             return ":( No 'upfile' key"
>
>         # some checks. I never get beyond here
>
>         ufile = form['upfile']
>         if ufile.file:
>             return ufile.file.name
>         else:
>             return ":( It's not a file"
>     except Exception, e:
>         return 'Fail: ' + str(e)
>
> I'm getting an empty 'form'. No 'upfile' key at all. I've tried to add
> some other text fields but the result is the same: empty. If I use GET
> method with text fields, it works properly.
>
> Currently I'm using:
> Apache 2.2.9 (initially I used Apache 2.2.3 too)
> mod_python 3.3.1 (initially I used mod_python 3.2.10 too)
> Python 2.5.2

Which is the correct result for the code you are using.

The problem is that you appear to be using mod_python.publisher which
does its own form handling before you are even getting a chance, thus
it is consuming the request content.

For how to handle forms in mod_python.publisher see:

http://webpython.codepoint.net/mod_python_publisher_forms

Graham


Re: Build of extension module depending on external lib fails on Solaris 10

2008-11-21 Thread Graham Dumpleton
On Nov 22, 2:07 am, "Eric Brunel" <[EMAIL PROTECTED]> wrote:
> Hello all,
>
> I've got a brand new Solaris 10 computer and I'm trying to build Python  
> and extension modules for it. The Python build didn't have any problem and  
> I have a working Python interpreter. But I can't succeed to build  
> extension modules depending on external libraries: The compilation works,  
> and object files are produced, but the link always fails.
>
> Here is an example of a failing setup.py file:
> 
>  from distutils.core import setup, Extension
>
> setup(
>    name='spam',
>    version='1.0',
>    ext_modules=[
>      Extension('spam', ['spam.c'], library_dirs=['.'], libraries=['spam'])
>    ]
> )
> 
>
> The 'spam' external module is basically a copy of the one appearing in the  
> 'Extending and Embedding' manual, except it also contains a call to a  
> function in libspam.a, which just does a printf.
>
> Here is the result of the command 'python setup.py build':
> 
> running build
> running build_ext
> building 'spam' extension
> gcc -shared build/temp.solaris-2.10-i86pc-2.6/spam.o -L. -lspam -o  
> build/lib.solaris-2.10-i86pc-2.6/spam.so
> Text relocation remains                         referenced
>      against symbol                  offset      in file
>                            0xa         ./libspam.a(spamlib.o)
> printf                              0xf         ./libspam.a(spamlib.o)
> ld: fatal: relocations remain against allocatable but non-writable sections
> collect2: ld returned 1 exit status
> error: command 'gcc' failed with exit status 1
> 
>
> It seems the problem lies in the order on which the object files and  
> libraries appear in the link command line, because if I run the same  
> command with the libraries before the object files:
> gcc -shared -L. -lspam build/temp.solaris-2.10-i86pc-2.6/spam.o -o  
> build/lib.solaris-2.10-i86pc-2.6/spam.so
> the link works without problem and a working shared object file is  
> produced.
>
> Did anybody have this kind of problem? Is it a bug, or am I doing  
> something wrong? And is there a workaround?

See workaround in:

  http://code.google.com/p/modwsgi/wiki/InstallationOnSolaris

Different package, but same issue, so use same workaround.

Graham



Re: What happened to python-dev's Google Group?

2008-10-23 Thread Graham Dumpleton
On Oct 24, 12:58 pm, Robert Kern <[EMAIL PROTECTED]> wrote:
> Giampaolo Rodola' wrote:
> >http://groups.google.com/group/python-dev2
> > It seems it no longer exists. What happened?
>
> I don't know, but something happened to the numpy-discussion Google Group
> gateway, too. Maybe there was a mass culling of such gateways that weren't 
> being
> maintained, or something like that.

Not this again.

This happened back in August, with a whole range of groups seemingly
vanishing, including the group for web.py. They came back eventually,
but of course Google doesn't say anything about what happened.

  
http://groups.google.com/group/Is-Something-Broken/browse_frm/thread/bfe5e1d3c9ac958a

A more recent thread complaining about most recent disappearances is
at:

  
http://groups.google.com/group/Is-Something-Broken/browse_frm/thread/119ef1b00796f058#

Graham


Re: OS 10.5 build 64 bits

2008-10-23 Thread Graham Dumpleton
On Oct 24, 5:28 am, Robin Becker <[EMAIL PROTECTED]> wrote:
> M.-A. Lemburg wrote:
>
> igure script.
>
> > The config options --with-universal-archs is used for this. In theory
> > you could build a 4-way binary for Intel,PPC/32-bit,64-bit.
> > Default is 32-bit only.
>
> 
>
> apparently this issue is known
>
> http://bugs.python.org/issue1619130
>
> but I still don't know how to get the configure script to make 64 bits
> only. In the past I have done the configure and then messed with the
> resulting Makefiles.

Comments in the following may or may not be useful:

  http://code.google.com/p/modwsgi/wiki/InstallationOnMacOSX
  
http://developer.apple.com/releasenotes/OpenSource/PerlExtensionsRelNotes/index.html

The latter only works for Apple supplied Python as I understand it.

Graham



Re: Python and M2Crypto question

2008-09-07 Thread Graham Dumpleton
On Sep 7, 11:07 pm, Bojan Mihelac <[EMAIL PROTECTED]> wrote:
> Hi all!
>
> I am trying to install M2Crypto to work on OSX10.5 apache
> (mod_python). Error I receive:
>
> Error was: dlopen(/Library/WebServer/eggs/M2Crypto-0.18.2-py2.5-
> macosx-10.5-i386.egg-tmp/M2Crypto/__m2crypto.so, 2): no suitable image
> found.  Did find:
>         /Library/WebServer/eggs/M2Crypto-0.18.2-py2.5-macosx-10.5-i386.egg-
> tmp/M2Crypto/__m2crypto.so: no matching architecture in universal
> wrapper
>
> I guess that have to do something with x64 architecture but I am
> stucked and not able to find a way and to make this work. M2Crypto lib
> works good stand alone.

See:

  
http://code.google.com/p/modwsgi/wiki/InstallationOnMacOSX#Missing_Code_For_Architecture

This is mod_wsgi documentation, but same issue applies to mod_python.

Graham


Re: How to check in CGI if client disconnected

2008-08-25 Thread Graham Dumpleton
On Aug 25, 5:49 pm, Vishal <[EMAIL PROTECTED]> wrote:
> Hi Graham,
>
>    Thanks for the reply. In my case, it's the other way round. I need
> to check if the amount of data sent is equal to the file size i want
> to send. However, the question is - when do i check this? Currently, i
> am unable to call any cleanup code before exit.

The best you can do for writing is to catch exceptions around the call
outputting the data. If an exception occurs then a problem has
obviously occurred.
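A sketch of what that looks like for a CGI streaming a file (the helper and its return convention are invented for illustration):

```python
import sys

def send_file(fileobj, out=None, chunk_size=64 * 1024):
    """Stream fileobj to the client; return (bytes_sent, completed).

    A write to a connection the client has dropped raises IOError
    (broken pipe), which is about the only disconnect signal a CGI
    script reliably gets, so cleanup is hooked off that exception.
    """
    if out is None:
        out = sys.stdout
    sent = 0
    try:
        while True:
            chunk = fileobj.read(chunk_size)
            if not chunk:
                break
            out.write(chunk)
            sent += len(chunk)
    except IOError:
        # Client went away mid-transfer: do cleanup here.
        return sent, False
    return sent, True
```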

Graham

> Regards,
>
> -vishal.
>
> On Aug 25, 11:44 am, Graham Dumpleton <[EMAIL PROTECTED]>
> wrote:
>
> > On Aug 25, 4:26 pm, Vishal <[EMAIL PROTECTED]> wrote:
>
> > > Hi,
>
> > >   Thanks for the replies. In my case, the cgi is sending a large file
> > > to the client. In case the the stop button is pressed on the browser
> > > to cancel the download, i want to do some cleanup action. It's all one-
> > > way transfer in this case, so i can't expect the client to send
> > > anything to me. I read somewhere that apache sends the SIGTERM signal
> > > to a cgi when the client disconnects. However, my cgi is not getting
> > > the signal - is there a way to have the cgi catch and handle the
> > > SIGTERM?
>
> > > I tried using the signal module
>
> > > ---
> > > def sigtermHandler(signum, frame):
> > >     # do some cleanup
>
> > > signal.signal(signal.SIGTERM, sigtermHandler)
>
> > > ---
>
> > > But even this doesn't work.
>
> > Have you considered simply checking whether the amount of POST
> > content read matches the inbound Content-Length specified in the CGI
> > environment? If your processing of the POST content finds less than
> > was meant to be sent, it is likely that the client browser aborted
> > the request before all content could be sent.
>
> > Graham
>
> > > Regards,
>
> > > -vishal.
> > > On Aug 25, 2:58 am, "Gabriel Genellina" <[EMAIL PROTECTED]>
> > > wrote:
>
> > > > En Sun, 24 Aug 2008 17:51:36 -0300, Wojtek Walczak <[EMAIL PROTECTED]> 
> > > > escribió:
>
> > > > > On Sun, 24 Aug 2008 17:21:52 -0300, Gabriel Genellina wrote:
> > > > >>>    I am writing a CGI to serve files to the caller. I was wondering 
> > > > >>> if
> > > > >>> there is any way to tell in my CGI if the client browser is still
> > > > >>> connected. If it is not, i want to execute some special code before
> > > > >>> exiting.
>
> > > > >>>    Is there any way to do this? Any help on this is appreciated :)
>
> > > > >> I don't think so. A CGI script runs once per request, and exits. The 
> > > > >> server may find that client disconnected, but that may happen after 
> > > > >> the script finished.
>
> > > > > I am not a web developer, but I think that the only way is to
> > > > > set a timeout on server side. You can't be sure that the client
> > > > > disconnected, but you can stop CGI script if there's no
> > > > > action on client side for too long.
>
> > > > Which kind of client action? Every link clicked or form submitted 
> > > > generates a different request that triggers a CGI script; the script 
> > > > starts, reads its parameters, do its task, and exits. There is no "long 
> > > > running process" in CGI - the whole "world" must be recreated on each 
> > > > request (a total waste of resources, sure).
>
> > > > If processing takes so much time, it's better to assign it a "ticket" - 
> > > > the user may come back later and see if its "ticket" has been finished, 
> > > > or the system may send an email telling him.
>
> > > > --
> > > > Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: How to check in CGI if client disconnected

2008-08-24 Thread Graham Dumpleton
On Aug 25, 4:26 pm, Vishal <[EMAIL PROTECTED]> wrote:
> Hi,
>
>   Thanks for the replies. In my case, the cgi is sending a large file
> to the client. In case the the stop button is pressed on the browser
> to cancel the download, i want to do some cleanup action. It's all one-
> way transfer in this case, so i can't expect the client to send
> anything to me. I read somewhere that apache sends the SIGTERM signal
> to a cgi when the client disconnects. However, my cgi is not getting
> the signal - is there a way to have the cgi catch and handle the
> SIGTERM?
>
> I tried using the signal module
>
> ---
> def sigtermHandler(signum, frame):
>     # do some cleanup
>
> signal.signal(signal.SIGTERM, sigtermHandler)
>
> ---
>
> But even this doesn't work.

Have you considered simply checking whether the amount of POST
content read matches the inbound Content-Length specified in the CGI
environment? If your processing of the POST content finds less than
was meant to be sent, it is likely that the client browser aborted
the request before all content could be sent.
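A sketch of that check (the function name and the `environ`/`stream`
parameters are invented here for testability; a real CGI script would
read from `sys.stdin` with the real process environment):

```python
import os
import sys

def read_post(environ=os.environ, stream=None):
    """Read the POST body of a CGI request and flag short reads.

    Returns (data, complete). A short read relative to the advertised
    Content-Length usually means the browser aborted the request
    before all content was sent.
    """
    if stream is None:
        stream = sys.stdin.buffer
    length = int(environ.get("CONTENT_LENGTH") or 0)
    data = stream.read(length)
    return data, len(data) == length
```

When `complete` comes back False, the script can run its cleanup path
before exiting.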

Graham

> Regards,
>
> -vishal.
> On Aug 25, 2:58 am, "Gabriel Genellina" <[EMAIL PROTECTED]>
> wrote:
>
> > En Sun, 24 Aug 2008 17:51:36 -0300, Wojtek Walczak <[EMAIL PROTECTED]> 
> > escribió:
>
> > > On Sun, 24 Aug 2008 17:21:52 -0300, Gabriel Genellina wrote:
> > >>>    I am writing a CGI to serve files to the caller. I was wondering if
> > >>> there is any way to tell in my CGI if the client browser is still
> > >>> connected. If it is not, i want to execute some special code before
> > >>> exiting.
>
> > >>>    Is there any way to do this? Any help on this is appreciated :)
>
> > >> I don't think so. A CGI script runs once per request, and exits. The 
> > >> server may find that client disconnected, but that may happen after the 
> > >> script finished.
>
> > > I am not a web developer, but I think that the only way is to
> > > set a timeout on server side. You can't be sure that the client
> > > disconnected, but you can stop CGI script if there's no
> > > action on client side for too long.
>
> > Which kind of client action? Every link clicked or form submitted generates 
> > a different request that triggers a CGI script; the script starts, reads 
> > its parameters, do its task, and exits. There is no "long running process" 
> > in CGI - the whole "world" must be recreated on each request (a total waste 
> > of resources, sure).
>
> > If processing takes so much time, it's better to assign it a "ticket" - the 
> > user may come back later and see if its "ticket" has been finished, or the 
> > system may send an email telling him.
>
> > --
> > Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: Problem with Python Server Pages (PSP)

2008-07-23 Thread Graham Dumpleton
On Jul 22, 1:54 pm, [EMAIL PROTECTED] wrote:
> On Jul 22, 5:18 am, Graham Dumpleton <[EMAIL PROTECTED]>
> wrote:
>
>
>
> > On Jul 21, 9:42 pm, [EMAIL PROTECTED] wrote:
>
> > > Hi,
>
> > > I am facing a very basic problem with PSP. I have installed mod_python
> > > (in fedora Core 1), added the lines required for loading Python
> > > modules and handling PSP pages. I have created a hello.psp page. But
> > > when I try to view this hello.psp page, all Python code are getting
> > > displayed.
>
> > > The said page is stored at /var/www/html/psp/hello.psp. I guess this
> > > is some configuration problem with Apache, but not able to figure out
> > > the exact problem. I have tried putting those configuration lines for
> > > psp in both httpd.conf and python.conf files. But still it is not
> > > working.
>
> > > The Python module (mod_python) is getting loaded. Because when I
> > > telnet to my server, I can find that in the headers.
>
> > > These are the versions of the softwares:
> > > Apache: 2.0.47
> > > Python: 2.2.3
> > > mod_python: 3.0.3
>
> > > Thanks for all your suggestions.
>
> > What is the Apache configuration snippet you are using to enable
> > mod_python and PSP file handling?
>
> > Graham
>
> Hi Graham,
>
> The configuration used in httpd.conf file looks like:
> 
>     AddHandler .psp .psp_
>     PythonHandler modules/python
>     PythonDebug On
> 

Go read the documentation properly.

http://www.modpython.org/live/current/doc-html/hand-psp.html

What is PythonHandler set to?

Graham
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python Embedding Thread

2008-07-23 Thread Graham Dumpleton
On Jul 23, 12:15 pm, "Jaimy Azle" <[EMAIL PROTECTED]> wrote:
> <[EMAIL PROTECTED]> wrote:
> > I fixed the code. This code snippet runs in a seperate thread:
>
> > PyObject *dict=NULL;
> > PyGILState_STATE state = PyGILState_Ensure();
> > dict = CreateMyGlobalDictionary();
>
> > PyRun_String(, Py_file_input, dict, dict);
>
> > ReleaseGlobalDictionary(dict);
>
> > But it still does not work... :-/
>
> Have you initialized the interpreter with PyEval_InitThreads? Look at
> http://www.python.org/doc/1.5.2/api/threads.html for more information.
>
> Oh, btw... I did use python in a bit different scenario than you've
> described. Since you attempt to run different script per-host thread, you
> might need python multiple interpreter support; I suggest you take a
> look at the mod_python implementation.

Please don't look at mod_python. Current versions of mod_python don't
use Python simplified GIL APIs correctly.

  http://issues.apache.org/jira/browse/MODPYTHON-217

Look at mod_wsgi instead; it is closer to the mark, although it still
has some cruft in there to work around mistakes in mod_python for the
case where mod_python and mod_wsgi are being loaded together. :-(

Graham

--
http://mail.python.org/mailman/listinfo/python-list


Re: Problem with Python Server Pages (PSP)

2008-07-21 Thread Graham Dumpleton
On Jul 21, 9:42 pm, [EMAIL PROTECTED] wrote:
> Hi,
>
> I am facing a very basic problem with PSP. I have installed mod_python
> (in fedora Core 1), added the lines required for loading Python
> modules and handling PSP pages. I have created a hello.psp page. But
> when I try to view this hello.psp page, all Python code are getting
> displayed.
>
> The said page is stored at /var/www/html/psp/hello.psp. I guess this
> is some configuration problem with Apache, but not able to figure out
> the exact problem. I have tried putting those configuration lines for
> psp in both httpd.conf and python.conf files. But still it is not
> working.
>
> The Python module (mod_python) is getting loaded. Because when I
> telnet to my server, I can find that in the headers.
>
> These are the versions of the softwares:
> Apache: 2.0.47
> Python: 2.2.3
> mod_python: 3.0.3
>
> Thanks for all your suggestions.

What is the Apache configuration snippet you are using to enable
mod_python and PSP file handling?

Graham
--
http://mail.python.org/mailman/listinfo/python-list


Re: Error importing modules with mod_python

2008-07-21 Thread Graham Dumpleton
On Jul 22, 3:30 am, Aaron Scott <[EMAIL PROTECTED]> wrote:
> I've installed mod_python, and everything seems to be working, but it
> fails when I try to import another file into the file that's actually
> producing the output. I have these lines at the top of index.py:
>
> from mod_python import apache
> from storylab import *
>
> ... and in the directory where index.py resides (/htdocs/python/), I
> have a directory called "storylab". Inside that directory is
> __init__.py. When I try to execute /htdocs/python/index.py, I get the
> following error:
>
> ---
>
> MOD_PYTHON ERROR
> ProcessId:      828
> Interpreter:    'localhost'
> ServerName:     'localhost'
> DocumentRoot:   'C:/htdocs'
> URI:            '/python/index.py'
> Location:       None
> Directory:      'C:/htdocs/python/'
> Filename:       'C:/htdocs/python/index.py'
> PathInfo:       ''
> Phase:          'PythonHandler'
> Handler:        'index'
>
> Traceback (most recent call last):
>
>   File "C:\Python25\lib\site-packages\mod_python\importer.py", line
> 1537, in HandlerDispatch
>     default=default_handler, arg=req, silent=hlist.silent)
>
>   File "C:\Python25\lib\site-packages\mod_python\importer.py", line
> 1202, in _process_target
>     module = import_module(module_name, path=path)
>
>   File "C:\Python25\lib\site-packages\mod_python\importer.py", line
> 296, in import_module
>     log, import_path)
>
>   File "C:\Python25\lib\site-packages\mod_python\importer.py", line
> 680, in import_module
>     execfile(file, module.__dict__)
>
>   File "C:\htdocs\python\index.py", line 2, in <module>
>     from storylab import *
>
> ImportError: No module named storylab
>
> ---
>
> What am I doing wrong? Any insight would be greatly appreciated.

You can't put Python packages in same directory as handler scripts
managed by mod_python. See documentation for import_module() in:

  http://www.modpython.org/live/current/doc-html/pyapi-apmeth.html

Graham


--
http://mail.python.org/mailman/listinfo/python-list


Re: https in pylons

2008-07-18 Thread Graham Dumpleton
On Jul 18, 9:50 pm, [EMAIL PROTECTED] wrote:
> Hello,
>
> I have a question about framework pylons - how to run(in paster)
> webpages over https? Is it possible, or not?

If Paste server that is uses doesn't already support HTTPS, then run
Pylons under Apache/mod_wsgi, or just run Pylons with Paste server
behind Apache/mod_proxy. In other words, use Apache to handle the
HTTPS side of things.

Graham

--
http://mail.python.org/mailman/listinfo/python-list


Re: Problem with MySQLdb and mod_python

2008-07-18 Thread Graham Dumpleton
On Jul 18, 3:28 pm, John Nagle <[EMAIL PROTECTED]> wrote:
> Cyril Bazin wrote:
> > Thanks for your reply
>
> > The apache log contains lines like :
>
> > [Tue Jul 15 23:31:01 2008] [notice]mod_python(pid=11836,
> > interpreter='www.toto.fr'):Importing module
> > '/usr/local/apache2/htdocs/intranet/courrier/test.py'
> > [Tue Jul 15 23:31:02 2008] [notice] child pid 11836 exit signal
> > Segmentation fault (11)
> > [Tue Jul 15 23:31:19 2008] [notice]mod_python(pid=11764,
> > interpreter='www.toto.fr'):Importing module
> > '/usr/local/apache2/htdocs/intranet/courrier/test.py'
> > [Tue Jul 15 23:31:19 2008] [notice] child pid 11764 exit signal
> > Segmentation fault (11)
>
> > I think the problem comes from the MySQLdb module.
> > If I can't find another solution, I think I will downgrade the MySQLdb
> > version to 1.2.1
>
>     Sounds like version hell.  mod_python and MySQLdb have to be
> compiled with exactly the same compiler for this to work.

Use of compatible compilers applies to anything you want to use
together. This is nothing specific to mod_python, so this comment is a
bit misleading. These days, with GNU C everywhere, it is hardly
an issue, and it was usually only an issue with C++ code, not C code,
anyway.

>    mod_python is usually troublesome.   Python doesn't really have
> quite enough isolation to run multiple unrelated instances reliably.

The isolation issue has nothing to do with Python itself. Isolation is
an issue in this case, but most likely comes about because the OP is
trying to use both PHP and mod_python together in the same Apache
instance.

In particular, the PHP package is likely loading a MySQL module and it
is linked against a different version of the MySQL client libraries
than what the Python MySQL package is wanting.

People like to blame mod_python for these problems, but it can equally
be attributed to PHP. In practice the reason it shows up as a
mod_python issue is that PHP tries to preload a lot of stuff and so
manages to load its version of shared libraries first. Python with its
lazy loading comes in second, and so conflicts will occur. If
mod_python preloaded stuff like PHP did and this was occurring before
PHP got a chance, it would be the other way around and mod_python
would work fine and PHP would instead be what crashes all the time.

> We use FCGI, which has the isolation of CGI but doesn't reload the
> application for every transaction.  Also, it's easier to debug if
> CPython is crashing.

The reason FCGI works is that the processes, even if they are spawned
by Apache, use a fork/exec, and thus have a clean memory space when
starting up.

In summary, look at what version of MySQL libraries are used by PHP
modules and ensure that Python MySQL module is compiled against the
same version.

Graham
--
http://mail.python.org/mailman/listinfo/python-list


Re: Beginner question

2008-05-25 Thread Graham Dumpleton
On May 26, 4:13 am, howa <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Just want to try mod_python but it is more complicated than I
> expected...
>
> I just followed the tutorial on:
> http://www.modpython.org/live/mod_python-2.7.8/doc-html/inst-testing
>
> E.g.
>
> URL = http://www.example.com/mptest.py
>
> It return
>
> ImportError: No module named mptest
>
> 1. If I removed addHandler mod_python .py and PythonHandler mptest, I
> can see the SOURCE CODE
>
> 2. The PythonHandler mod_python.testhandler seems return correct
> result, showing I am using python 2.4.3
>
> any idea?

Why are you using the documentation from version 2.7.8 of mod_python
when you are using a much newer version?

Also read:

  http://www.dscpl.com.au/wiki/ModPython/Articles/GettingModPythonWorking

Graham



--
http://mail.python.org/mailman/listinfo/python-list


Re: scaling problems

2008-05-20 Thread Graham Dumpleton
On May 20, 2:00 pm, James A. Donald <[EMAIL PROTECTED]> wrote:
> > > 2.  It is not clear to me how a python web application scales.  Python
> > > is inherently single threaded, so one will need lots of python
> > > processes on lots of computers, with the database software handling
> > > parallel accesses to the same or related data.  One could organize it
> > > as one python program for each url, and one python process for each
> > > http request, but that involves a lot of overhead starting up and
> > > shutting down python processes.  Or one could organize it as one
> > > python program for each url, but if one gets a lot of http requests
> > > for one url, a small number of python processes will each sequentially
> > > handle a large number of those requests.  What I am really asking is:
> > > Are there python web frameworks that scale with hardware and how do
> > > they handle scaling?
>
> Reid Priedhorsky
>
> > This sounds like a good match for Apache with mod_python.
>
> I would hope that it is, but the question that I would like to know is
> how does mod_python handle the problem - how do python programs and
> processes relate to web pages and http requests when one is using mod_python, 
> and what happens when one has quite a lot of web pages and
> a very large number of http requests?

Read:

  http://blog.dscpl.com.au/2007/09/parallel-python-discussion-and-modwsgi.html
  http://code.google.com/p/modwsgi/wiki/ProcessesAndThreading

They talk about multi process nature of Apache and how GIL is not as
big a deal when using it.

The latter document explains the various process/threading modes when
using Apache/mod_wsgi. The embedded modes described in that
documentation also apply to mod_python.

The server is generally never the bottleneck, but if you are paranoid
about performance, then also look at the relative comparison of mod_wsgi
and mod_python in:

  http://code.google.com/p/modwsgi/wiki/PerformanceEstimates

Graham

--
http://mail.python.org/mailman/listinfo/python-list


Re: Arch problems--how do I build PIL to be 64 bit so it plays nicely on OS X?

2008-05-17 Thread Graham Dumpleton
On May 17, 6:16 am, lampshade <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I'm using python + django to do some web design and I would really
> like to use the python image library as part of this.  There seems to
> be a problem, however, with apache and mod_python being 64 bit while
> my python image library (PIL) is only 32 bit.
>
> Does anyone have experience with building the correct architectures so
> that OS X 10.5 plays nicely?  I think, when it comes down to it, I
> just need PIL at this point to be x86_64 so that it plays with apache
> and mod_python.
>
> Any advice, hand-holding, or sage wisdom?

See:

  http://code.google.com/p/modwsgi/wiki/InstallationOnMacOSX

and also the document it references:

  
http://developer.apple.com/releasenotes/OpenSource/PerlExtensionsRelNotes/index.html

Graham
--
http://mail.python.org/mailman/listinfo/python-list


Re: Install Python MySQL db module?

2008-05-13 Thread Graham Dumpleton
On May 14, 10:58 am, Rick Dooling <[EMAIL PROTECTED]> wrote:
> On May 13, 7:29 pm, Con <[EMAIL PROTECTED]> wrote:
>
> > Hi, how does properly install the Python MySQL db module for Mac OS
> > X?  I was only able to locate the Win32 modules.
>
> > Thanks in advance,
>
> > -Conrad
>
> I tried this a couple of weeks ago using macports and had problems.
>
> See, for example:
>
> http://www.davidcramer.net/code/57/mysqldb-on-leopard.html
>
> I read about several people who went in and edited the source files to
> work out the bugs on Leopard, but instead I just changed my Python
> code to use SQLite for the moment until someone fixes MySQLdb. There's
> a nice Perl script that will convert your MySQL db to an SQLite db if
> you're interested.
>
> RD

Because the Python executable on Leopard has been stripped of 64 bit
architectures and thus runs as a 32 bit process, the 64 bit problems
only come up when trying to use the MySQLdb module under mod_wsgi or
mod_python with Apache (presuming you got mod_python to build as 64
bit).

If you are really stuck and can't get MySQLdb module to build as 64
bit capable, then you can force Apache to run as a 32 bit process and
avoid the problems.

For more details, see section 'Thinning The Apache Executable' of:

  http://code.google.com/p/modwsgi/wiki/InstallationOnMacOSX

Also see contributed comments to the page, which explains a way of
making Apache run as 32 bit by changing plist files rather than
thinning the Apache executable.

Graham
--
http://mail.python.org/mailman/listinfo/python-list


Re: where do I begin with web programming in python?

2008-05-01 Thread Graham Dumpleton
On May 2, 7:45 am, Christian Heimes <[EMAIL PROTECTED]> wrote:
> jmDesktop schrieb:
>
> > I have been to the main python site, but am still confused.  I have
> > been using .net, so it may be obvious how to do this to everyone
> > else.  I am aware there are various frameworks (Django, Pylons, etc.),
> > but I would like to know how to create web pages without these.  If I
> have mod_python or fastcgi on apache, where do I start?  I don't have
> > clue where to begin to create a web page from scratch in python.  I am
> > sure I will want to access database, etc., all the "normal" stuff, I
> > just want to do it myself as opposed to the frameworks, for learning.
>
> I highly recommend WSGI instead of mod_python or (fast)cgi. I've heard
> only bad things about mod_python over the past years and CGI is totally
> old school.
>
> Check out Python Paste, CherryPy and Django. You can also try the Zope,
> Zope3 and Plone world but Zope is usually for larger and complex
> applications.
>
> Most frameworks come with their own little web server for development, too.

I'd also suggest avoiding coding anything directly to mod_python and
instead base things on WSGI. You can still run it on mod_python with a
suitable adapter, but you can also run it with mod_wsgi, mod_fastcgi,
or using pure Python web servers such as the one in Paste as well.
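As a minimal illustration of what basing things on WSGI means, here is
the canonical hello-world WSGI application; nothing in it is specific
to any one host, so mod_wsgi, a mod_python adapter, FastCGI, or a pure
Python server can all serve it unchanged:

```python
def application(environ, start_response):
    """The standard WSGI entry point: a callable taking the request
    environment dict and a start_response callback, and returning an
    iterable of byte strings."""
    body = b"Hello, WSGI!"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Each host merely needs to be pointed at this `application` callable.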

For a low level nuts and bolts (anti framework) approach I'd suggest
looking at:

  http://dev.pocoo.org/projects/werkzeug/

This gives you all the basic components, but it is really up to you as
to how you put them together, which seems to be what you want to be
able to do.

Graham

--
http://mail.python.org/mailman/listinfo/python-list


Re: apache module: python and web programming, easy way...?

2008-04-28 Thread Graham Dumpleton
On Apr 28, 7:42 pm, bvidinli <[EMAIL PROTECTED]> wrote:
> is there any apache module, you know, that i can just install with apt-get,
> then put my .py file, and run it ?

http://www.modwsgi.org
http://www.modpython.org

The mod_wsgi module supports the WSGI (http://www.wsgi.org)
specification, which is where Python web framework hosting is heading,
whereas mod_python has its own specific API, which means your
application can only be hosted with it and nothing else.

Graham
--
http://mail.python.org/mailman/listinfo/python-list


Re: Multiple independent Python interpreters in a C/C++ program?

2008-04-12 Thread Graham Dumpleton
On Apr 13, 3:05 am, sturlamolden <[EMAIL PROTECTED]> wrote:
> On Apr 11, 6:24 pm, [EMAIL PROTECTED] wrote:
>
> > Do I wind up with two completely independent interpreters, one per thread?
> > I'm thinking this doesn't work (there are bits which aren't thread-safe and
> > are only protected by the GIL), but wanted to double-check to be sure.
>
> You can create a new subinterpreter with a call to Py_NewInterpreter.
> You get a nwe interpreter, but not an independent one. The GIL is a
> global object for the process. If you have more than one interpreter
> in the process, they share the same GIL.
>
> In tcl, each thread has its own interpreter instance and no GIL is
> shared. This circumvents most of the problems with a global GIL.
>
> In theory, a GIL private to each (sub)interpreter would make Python
> more scalable. The current GIL behaves like the BKL in earlier Linux
> kernels. However, some third-party software, notably Apache's
> mod_python, is claimed to depend on this behaviour.

I wouldn't use mod_python as a good guide on how to do this as it
doesn't properly use PyGILState_Ensure() for main interpreter like it
should. If you want an example of how to do this, have a look at code
for mod_wsgi instead. If you want it to work for Python 3.0 as well as
Python 2.X, make sure you look at mod_wsgi source code from subversion
repository trunk as tweak had to be made to source to support Python
3.0. This is because in Python 3.0 it is no longer sufficient to hold
only the GIL when using string/unicode functions; you also need a
proper thread state to be active now.

Do note that although multiple sub interpreters can be made to work,
destroying sub interpreters within the context of a running process,
ie., before the process ends, can be a cause for various problems with
third party C extension modules and thus would advise that once a sub
interpreter has been created, you keep it and use it for the life of
the process.

Graham


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problem with mod_python/3.3.1 and apache

2008-03-31 Thread Graham Dumpleton
On Apr 1, 8:03 am, NccWarp9 <[EMAIL PROTECTED]> wrote:
> Hello,
>
> im using Apache HTTPD 2.2.8 with mod_python/3.3.1 Python/2.4.3 on
> Windows and having trouble starting Python; any help would be
> appreciated
> .
> Im getting this error:
>
> [Mon Mar 31 23:53:03 2008] [error] make_obcallback: could not import
> mod_python.apache.\n
> 'import site' failed; use -v for traceback
> 'import site' failed; use -v for traceback
> ImportError: No module named mod_python.apache
> [Mon Mar 31 23:53:03 2008] [error] make_obcallback: Python path being
> used "['C:\\Windows\\system32\\python24.zip', '', 'c:\\xampp\\python\\DLLs',
> 'c:\\xampp\\python\\lib', 'c:\\xampp\\python\\lib\\plat-win',
> 'c:\\xampp\\python\\lib\\lib-tk', 'C:\\xampp\\apache\\bin']".
> [Mon Mar 31 23:53:03 2008] [error] get_interpreter: no interpreter
> callback found.
> [Mon Mar 31 23:53:03 2008] [error] [client 127.0.0.1] python_handler:
> Can't get/create interpreter., referer: http://localhost/python/
> [Mon Mar 31 23:53:25 2008] [error] make_obcallback: could not import
> mod_python.apache.\n
> ImportError: No module named mod_python.apache
>
> thx

See:

  http://www.modpython.org/pipermail/mod_python/2008-March/025022.html

Also search the www.modpython.org site for other discussions/
suggestions in the mod_python mailing list.

Graham
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Beta testers needed for a high performance Python application server

2008-03-25 Thread Graham Dumpleton
On Mar 26, 11:00 am, Damjan <[EMAIL PROTECTED]> wrote:
> >> I'm looking for beta testers for a high performance, event-driven Python
> >> application server I've developed.
>
> >> About the server: the front end and other speed-critical parts of the
> >> server are written in portable, multithreaded C++.
> ...
> > Why not just put it on the net somewhere and tell us where it is?
> > People aren't generally going to want to help or even look at it if
> > you treat it like a proprietary application. So, put the documentation
> > and code up somewhere for all to see.
> > BTW, multiprocess web servers such as Apache can quite happily make
> > use of multiple cores. Even within a single Apache multithread process
> > it can still use multiple cores quite happily because all the
> > underlying network code and static file handling code is in C and not
> > subject to the GIL. So, as much as people like to bash up on the GIL,
> > within Apache it is not necessarily as big a deal as people make out.
>
> BTW nginx now has a mod_wsgi too, if someone is looking for an Apache
> replacement.

Yes, that is a viable option, as are existing fastcgi solutions
for Apache, lighttpd and nginx. Because the bottlenecks are generally
going to be in upper application layers it really comes down to
personal preferences as to which you want to use, which you find
easier to configure and manage, plus which you trust as being the most
secure and stable.

For truly high demand sites you should also be looking at spreading
load across multiple hosts, not just because of performance but also
because of redundancy. You would also be serving media off a different
web server to your Python web application and configuring each web
server to the specific task it is doing.

So, there is a lot more to it than the raw speed of the underlying
technology, and one should always treat with caution something that is
being sold as some better way of doing things. This applies to Apache
mod_wsgi, nginx mod_wsgi or anything else for that matter. All have
benefits, but they also have shortcomings in different areas which may
not always make them suitable for all applications, or at least they
may need to be configured in specific ways to make them perform best
for specific applications.

Anyway, because it isn't that simple, I'd like to see some
actual documentation for this new contender posted in a public place,
along with code being browsable so one can evaluate it without having
to sign up for some closed beta program.

Graham

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Beta testers needed for a high performance Python application server

2008-03-25 Thread Graham Dumpleton
On Mar 26, 7:31 am, Minor Gordon <[EMAIL PROTECTED]> wrote:
> Hello all,
>
> I'm looking for beta testers for a high performance, event-driven Python
> application server I've developed.
>
> About the server: the front end and other speed-critical parts of the
> server are written in portable, multithreaded C++. The back end is an
> embedded CPython interpreter. The server is much faster than anything in
> pure Python, and it can compete with C servers (including e.g. lighttpd
> for file workloads) or outdo them (e.g. anything behind Apache) until
> CPython consumes a single processor. On the Python side it supports WSGI
> (the server can handle the static and dynamic requests of MoinMoin with
> a handful of lines), the DB API with blocking calls offloaded to a
> connection in a separate thread (MySQL, SQLite supported), Google's
> ctemplate, gzipping responses, file caching, reading and writing to URIs
> as a client, AJAX integration, debugging as a Python extension, and a
> lot of other features. The core Python API is event-driven, using
> continuations like Twisted but much cleaner (continuations are any
> callables, there are no special objects anywhere). The Python back end
> also supports Stackless Python so all of the continuation machinery can
> be hidden behind tasklet switching.
>
> Background: I'm in this to help write a "story" for Python and web
> applications. Everyone likes to go on about Ruby on Rails, and as far as
>   I can tell there's nothing that approaches Rails in Python. I want to
> code quickly in Python like I can with Rails, but without sacrificing
> single node performance on many cores.
>
> Beta testers: should be intermediate to advanced Python programmers with
> demanding applications, particularly web applications with databases and
> AJAX. The C++ is portable to Win32, Linux, and OS X, with no mandatory
> libraries beyond python-dev.
>
> Please contact me if you're interested: firstname.lastname at cl.cam.ac.uk.

Why not just put it on the net somewhere and tell us where it is?
People aren't generally going to want to help or even look at it if
you treat it like a proprietary application. So, put the documentation
and code up somewhere for all to see.

BTW, multiprocess web servers such as Apache can quite happily make
use of multiple cores. Even within a single Apache multithread process
it can still use multiple cores quite happily because all the
underlying network code and static file handling code is in C and not
subject to the GIL. So, as much as people like to bash up on the GIL,
within Apache it is not necessarily as big a deal as people make out.

Also, an event driven model, or even a large dependence on
multithreading, can actually be worse for Python web applications,
precisely because it means one can handle a much larger number of concurrent
requests. This is because you greatly increase the risk of code having
large requirements for transient memory being hit at the same time.
The result can be large unpredictable blowouts in memory requirements
for the process, with that memory then being held by the process and
not necessarily able to be released back to operating system. Thus,
for large Python web applications, use of a multiprocess web server,
where each worker process is single threaded, is in various way still
best as it provides the most predictable memory profile.

Finally, the speed of the underlying web server (except for CGI)
generally has minimal bearing on the performance of a large Python web
application, because that isn't where the bottlenecks are. The real
bottlenecks are generally in the application code itself and in any
access to a back-end database. Thus, pursuing absolute speed is a bit
of a fool's errand, given that any performance gain you may have over a
competing solution may end up amounting to somewhat less than a 1%
difference in overall request time.

Graham

-- 
http://mail.python.org/mailman/listinfo/python-list



Re: Huge problem gettng MySQLdb to work on my mac mini running Macosx 10.5 Leopard

2008-03-18 Thread Graham Dumpleton
On Mar 19, 9:47 am, geert <[EMAIL PROTECTED]> wrote:
> On Mar 18, 6:56 pm, geert <[EMAIL PROTECTED]> wrote:
>
>
>
> > On Mar 14, 1:15 pm, [EMAIL PROTECTED] wrote:
>
> > > look 
> > > athttp://groups.google.be/group/comp.lang.python/browse_thread/thread/d...
>
> > > There is a macpython list that you can consult 
> > > athttp://www.nabble.com/Python---pythonmac-sig-f2970.html
>
> > Just wanted to let you know that I've solved my problem. The solution
> > is to compile mysql using
>
> > MACOSX_DEPLOYMENT_TARGET=10.5 \
> > CFLAGS='-arch i386 -arch x86_64 -arch ppc7400 -arch ppc64' \
> > LDFLAGS='-arch i386 -arch x86_64 -arch ppc7400 -arch ppc64' \
> > CXXFLAGS='-arch i386 -arch x86_64 -arch ppc7400 -arch ppc64' \
> > ./configure --disable-dependency-tracking  --enable-thread-safe-client
> > --prefix=/usr/local/mysql
>
> > You then go this way to get it running on your machine:
>
> >http://hivelogic.com/articles/installing-mysql-on-mac-os-x/
>
> > Then reinstall MySQLdb. Magic!
>
> > Geert
>
> Seems that I've yelled success too quickly. Everything's ok as long as
> I just run the django dev server, but moving to apache through mod_wsgi
> brings a well-known but less than comforting complaint:
>
> [Tue Mar 18 23:34:25 2008] [error] [client ::1]mod_wsgi(pid=2352):
> Exception occurred processing WSGI script '/Users/geert/Sites/LithoNET/
> LN/LNApache.wsgi'., referer:http://localhost/images/
> [Tue Mar 18 23:34:25 2008] [error] [client ::1] Traceback (most recent
> call last):, referer:http://localhost/images/
> [Tue Mar 18 23:34:25 2008] [error] [client ::1]   File "/Library/
> Python/2.5/site-packages/django/core/handlers/wsgi.py", line 205, in
> __call__, referer:http://localhost/images/
> [Tue Mar 18 23:34:25 2008] [error] [client ::1] response =
> self.get_response(request), referer:http://localhost/images/
> [Tue Mar 18 23:34:25 2008] [error] [client ::1]   File "/Library/
> Python/2.5/site-packages/django/core/handlers/base.py", line 64, in
> get_response, referer:http://localhost/images/
> [Tue Mar 18 23:34:25 2008] [error] [client ::1] response =
> middleware_method(request), referer:http://localhost/images/
> [Tue Mar 18 23:34:25 2008] [error] [client ::1]   File "/Library/
> Python/2.5/site-packages/django/contrib/sessions/middleware.py", line
> 13, in process_request, referer:http://localhost/images/
> [Tue Mar 18 23:34:25 2008] [error] [client ::1] engine =
> __import__(settings.SESSION_ENGINE, {}, {}, ['']), 
> referer:http://localhost/images/
> [Tue Mar 18 23:34:25 2008] [error] [client ::1]   File "/Library/
> Python/2.5/site-packages/django/contrib/sessions/backends/db.py", line
> 2, in , referer:http://localhost/images/
> [Tue Mar 18 23:34:25 2008] [error] [client ::1] from
> django.contrib.sessions.models import Session, 
> referer:http://localhost/images/
> [Tue Mar 18 23:34:25 2008] [error] [client ::1]   File "/Library/
> Python/2.5/site-packages/django/contrib/sessions/models.py", line 5,
> in , referer:http://localhost/images/
> [Tue Mar 18 23:34:25 2008] [error] [client ::1] from django.db
> import models, referer:http://localhost/images/
> [Tue Mar 18 23:34:25 2008] [error] [client ::1]   File "/Library/
> Python/2.5/site-packages/django/db/__init__.py", line 17, in ,
> referer:http://localhost/images/
> [Tue Mar 18 23:34:25 2008] [error] [client ::1] backend =
> __import__('%s%s.base' % (_import_path, settings.DATABASE_ENGINE), {},
> {}, ['']), referer:http://localhost/images/
> [Tue Mar 18 23:34:25 2008] [error] [client ::1]   File "/Library/
> Python/2.5/site-packages/django/db/backends/mysql/base.py", line 12,
> in , referer:http://localhost/images/
> [Tue Mar 18 23:34:25 2008] [error] [client ::1] raise
> ImproperlyConfigured("Error loading MySQLdb module: %s" % e), 
> referer:http://localhost/images/
> [Tue Mar 18 23:34:25 2008] [error] [client ::1] ImproperlyConfigured:
> Error loading MySQLdb module: dlopen(/Library/WebServer/.python-eggs/
> MySQL_python-1.2.2-py2.5-macosx-10.5-i386.egg-tmp/_mysql.so, 2): no
> suitable image found.  Did find:, referer:http://localhost/images/
> [Tue Mar 18 23:34:25 2008] [error] [client ::1] \t/Library/
> WebServer/.python-eggs/MySQL_python-1.2.2-py2.5-macosx-10.5-i386.egg-
> tmp/_mysql.so: no matching architecture in universal wrapper, 
> referer:http://localhost/images/

Did you again confirm that running:

  file /Library/WebServer/.python-eggs/MySQL_python-1.2.2-py2.5-
macosx-10.5-i386.egg-tmp/_mysql.so

shows the .so having the required architectures, specifically the one
Apache runs as (e.g. x86_64)?

Do the gcc compiler flags when building and linking the .so file show
all the architecture flags?

Have you emptied the Python egg cache to make sure it isn't an older
compiled version?

Graham



Re: Apache binary error?

2008-03-17 Thread Graham Dumpleton
On Mar 18, 4:43 am, Sean Allen <[EMAIL PROTECTED]> wrote:
> On Mar 17, 2008, at 10:55 AM, Michael Wieher wrote:
>
> > have simple webpage running
>
> > apache,mod_python
>
> > the error is binary
> > ...binary as in "every other" time I load the page, Firefox keeps  
> > telling me I'm downloading a python script, and asks to open it in  
> > WINE, which is really strange.
>
> > then, alternately, it loads the page just fine.  any clues as to why  
> > this is happening?
> > --
>
> for anything like mod_perl, mod_python etc the first thing i do when i
> get really weird errors is move to having only one apache process for
> testing.
>
> might want to start there.

In mod_python handler code:

  req.content_type = 'text/plain'

or otherwise.

If you don't indicate what the response content type is, web browsers
will often try to work it out based on the extension in the URL.
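Python's own mimetypes module illustrates the same extension-based
guessing that clients fall back on when no content type is declared:

```python
import mimetypes

# With no Content-Type header, clients guess from the URL's extension.
print(mimetypes.guess_type('handler.py'))    # ('text/x-python', None)
print(mimetypes.guess_type('page.html')[0])  # text/html
```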

This is presuming you have configured Apache correctly and your handler
code is actually being executed, rather than the source file just being
served up as-is.

Graham


Re: How to import custom python file in python server page (psp) ?

2008-03-14 Thread Graham Dumpleton
On Mar 15, 6:44 am, Joshua Kugler <[EMAIL PROTECTED]> wrote:
> James Yu wrote:
> > Hi folks,
>
> > I prepared a python script for dynamically get the absolute paths of the
> > files in certain folder.
> > Then I tried to invoke that function from my web server in a .psp file
> > like this:
>
> >       1 
> >       2 
> >       3 asdfasdfasdfa
> >       4 
> >       5 <%
> >       6 import glob
> >       7 import os
> >       8 *import Helper
> > *      9
> >      10 body = ''
> >      11 top = 'asdfasdfasdfa'
> >      12 links = {}
> >      13 *Helper.GetLinks(top=top)
> > *     14 *paths = Helper.GenLinkPath(links)
> > *     15 body = paths
> >      16 %>
> >      17     <%=body%>
> >      18 
> >      19 
>
> > However, this is the error message I received when I open the page in a
> > browser:
>
> >> Mod_python error: "PythonHandler mod_python.psp"
>
> >> Traceback (most recent call last):
>
> >>   File "/usr/lib/python2.5/site-packages/mod_python/apache.py", line 299,
> >> in HandlerDispatch
> >>     result = object(req)
>
> >>   File "/usr/lib/python2.5/site-packages/mod_python/psp.py", line 302, in
> >> handler
> >>     p.run()
>
> >>   File "/usr/lib/python2.5/site-packages/mod_python/psp.py", line 213, in
> >> run
> >>     exec code in global_scope
>
> >>   File "/var/www/.cyu021/.pic/index.psp", line 8, in
> >>     import Helper
>
> >> ImportError: No module named Helper
>
> > *PS. I put Helper.py and index.psp in the same dir
> > *
> > Thanks in advance,
>
> What is the import path?  The current directory in PSP might not be the
> directory in which the .psp file resides.  Print out sys.path before you
> import your helper module to see what paths you're dealing with.
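The sys.path inspection suggested above can be sketched as follows (the
.psp path here is hypothetical; substitute your own):

```python
import os
import sys

# Hypothetical location of the .psp file; substitute your own.
psp_file = '/var/www/site/index.psp'

# Make modules that sit next to the .psp file importable, guarding
# against adding the directory twice on repeated requests.
here = os.path.dirname(os.path.abspath(psp_file))
if here not in sys.path:
    sys.path.insert(0, here)

print(here in sys.path)  # True
```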

If using mod_python 3.3.1, see:

  http://issues.apache.org/jira/browse/MODPYTHON-220

Graham



Re: problem with mod_python

2008-02-19 Thread Graham Dumpleton
On Feb 20, 6:04 am, Joshua Kugler <[EMAIL PROTECTED]> wrote:
> Pradnyesh Sawant wrote:
> > Hello,
> > I have a small program which does 'import hashlib'. This program runs fine
> > with python2.5. But when I try running the same program through
> > mod_python, I get the error: 'ImportError: No module named hashlib' in the
> > apache2 error.log
>
> > Searching online suggested me to include md5.so or md5module.so in
> > apache2. but I don't see that in a package for debian lenny (the system
> > I'm using).
>
> > So, my Q is, is it possible to make mod_python use the same PYTHONPATH as
> > the python2.5 interpreter? if so, how?
>
> It sounds like your mod_python may be compiled against a different version
> of Python than your main installation?  How did you install mod_python? How
> did you install your main python installation?
>
> What is the output of the command:
>
> ldd /path/to/mod_python.so
>
> (the full path on my system is /usr/lib/apache2/mod_python.so)
>
> There should be a line something like:
>
> libpython2.5.so.1.0 => /usr/lib/libpython2.5.so.1.0 (0xb7e37000)
>
> If it is pointing to libpython.2.4.so.1.0, then that could be the reason for
> you troubles.

The ldd trick only works if the Python version being used actually
supplied a shared library and mod_python was able to link against it,
otherwise a static version of Python is embedded in mod_python.

Some Linux distributions still possibly don't provide a shared library
for Python, or don't correctly symlink the .so into the Python config
directory alongside the .a so that linkers will find it correctly when
-L for the config directory is used. This has in part been the fault of
Python itself, as a build from source doesn't necessarily create that
symlink. Not sure if this has changed in more recent Python versions.

Graham



Re: problem with mod_python

2008-02-17 Thread Graham Dumpleton
On Feb 17, 3:29 pm, Pradnyesh Sawant <[EMAIL PROTECTED]> wrote:
> Hello,
> I have a small program which does 'import hashlib'. This program runs fine
> with python2.5. But when I try running the same program through mod_python,
> I get the error: 'ImportError: No module named hashlib' in the apache2
> error.log
>
> Searching online suggested me to include md5.so or md5module.so in apache2.
> but I don't see that in a package for debian lenny (the system I'm using).
>
> So, my Q is, is it possible to make mod_python use the same PYTHONPATH as
> the python2.5 interpreter? if so, how?
>
> any other suggestions to solve the above problem are welcome too.
> thanks!

Your mod_python isn't compiled against Python 2.5 but is using an
older version. You will need to rebuild mod_python to use Python 2.5
instead. You cannot just point mod_python at the Python 2.5 module
directories as they are incompatible.

Graham



Re: Problem with Image: Opening a file

2008-02-04 Thread Graham Dumpleton
On Feb 4, 6:51 pm, mcl <[EMAIL PROTECTED]> wrote:
> I am obviously doing something stupid or not understanding the
> difference between HTML file references and python script file
> references.
>
> I am trying to create a thumbnail of an existing .jpg file. It is in
> the directory 'temp', which is below my script
>
> I can display the file with , but Image.open does not see it.
>
> I think it is probably a path problem, but I have tried '/temp/
> fred.jpg' and './temp/fred.jpg'
>
> Can anyone help, please
>
> PS I am using mod_python on a Linux Apache 2 and python 2.5
>
> CODE =
>
> import os
> import StringIO
>
> def index(req):
> main(req)
>
> def writeHTMLHeader(req):
>
> req.content_type = "text/html"
> req.write(' EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd";>')
> req.write('http://www.w3.org/1999/xhtml"; lang="en"
> xml:lang="en">')
> req.write('My Tests: Python')
> req.write('')
>
> def writeHTMLTrailer(req):
> req.write('')
> req.write('')
>
> def writelineHTML(req, htmlMSG):
> req.write(htmlMSG + '\n')
>
> def createThumb(req, infile, maxwidth, maxheight):
> import Image
> import os
> infile = "temp/fred.jpg"
> outfile = "temp/fred_th.jpg"
> if infile != outfile:
>
> writelineHTML(req, "Opening %s" % str(infile))
> writelineHTML(req, '' % infile)  # Finds
> File OK ***
> im =
> Image.open(infile) #
> ERRORS on the same file *
> im.thumbnail((maxwidth,maxheight),Image.ANTIALIAS)
> im.save(outfile, "JPEG")
> writelineHTML(req, "Created: " + str(outfile))
> writelineHTML(req, '' % outfile)
>
> return outfile
>
> return ""
>
> def showThumbnail(req, picture):
> myThumb = createThumb(req, picture, 60, 60)
> if myThumb:
> writelineHTML(req, '
> def main(req):
> writeHTMLHeader(req)
> picture = "temp/fred.jpg"
> showThumbnail(req, picture)
> writeHTMLTrailer(req)
>
> ERROR ==
>
>   File "/home/mcl/htdocs/timslists/thumbs2.py", line 33, in
> createThumb
> im = Image.open(infile)
>
>   File "/usr/lib/python2.5/site-packages/PIL/Image.py", line 1888, in
> open
> fp = __builtin__.open(fp, "rb")
>
> IOError: [Errno 2] No such file or directory: 'temp/fred.jpg'
>
> Richard

Code under mod_python will typically run as the Apache user. This means
that when writing files, the location you use must be writable by the
Apache user. Do note though that the working directory of Apache is not
guaranteed, so you must always use an absolute path to the location
where you are saving files; you cannot reliably use relative paths as
you are doing.
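A quick demonstration of why relative paths are unreliable when the
process's working directory (like Apache's) is arbitrary:

```python
import os
import tempfile

# Create a scratch file, then show that a relative path only resolves
# from the "right" working directory, while an absolute path always works.
workdir = tempfile.mkdtemp()
abs_path = os.path.join(workdir, 'fred.jpg')
open(abs_path, 'wb').close()

os.chdir('/')                       # simulate an unpredictable cwd
print(os.path.exists('fred.jpg'))   # relative lookup fails: False
print(os.path.exists(abs_path))     # absolute path works: True
```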

BTW, do you perhaps mean '/tmp/fred.jpg'? There is no '/temp'
directory on Linux boxes.

Graham


Re: apache/mod_wsgi daemon mode

2008-02-03 Thread Graham Dumpleton
On Feb 4, 10:33 am, Scott SA <[EMAIL PROTECTED]> wrote:
> On 2/3/08, Brian Smith ([EMAIL PROTECTED]) wrote:
> >Scott SA wrote:
> >> I am trying to configure mod_wsgi to run in daemon mode with
> >> Apache. I can easily get it to run 'normally' under Apache
> >> but I obtain permission errors _or_ process-failures in
> >> daemon mode. Specifically:
>
> >>  ... (13)Permission denied: mod_wsgi (pid=26962): Unable
> >> to connect
> >>  to WSGI daemon process '' on
> >> '/etc/httpd/logs/wsgi.26957.0.1.sock' after multiple attempts.
>
> ...
>
> I had previously done what I _thought_ was a good job of searching the wsgi
> mailing list (really!). A reworking of my google search parameters finally
> yielded a helpful thread:
>
> 
>
> The problem was WSGI trying to create its .sock file in /var/log/httpd but 
> failing and therefore not running at all. The user I had specified did not 
> have enough permissions to do so (part of the point _of_ running in daemon 
> mode, LOL). Oddly, I had attempted to grant the permissions for the user but 
> see now there was an error in how I did that... oops.
>
> By adding the following to my config:
>
> WSGISocketPrefix /tmp/wsgi

Also documented in:

  http://code.google.com/p/modwsgi/wiki/ConfigurationIssues

Since you have given a real live example error message from logs for
when it goes wrong, I'll be able to include this in the documentation.
That may make it easier for others to find that page when they do a
Google search. At the moment the page only mentions that you would
get a '503 Service Temporarily Unavailable' response in the browser.

Graham

> We now have success!
>
> So my config now looks like:
>
>WSGISocketPrefix /tmp/wsgi
>
>
>ServerName host.domain.com
>
>WSGIDaemonProcess  user= group= threads=10 \
> maximum-requests=500
>
>WSGIScriptAlias /something /path/to/

Re: Multiple interpreters retaining huge amounts of memory

2008-02-03 Thread Graham Dumpleton
On Feb 4, 10:03 am, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:
> >>> It means that
> >>> environment variable separation for changes made unique to a sub
> >>> interpreter is impossible.
> >> That's not really true. You can't use os.environ for that, yes.
>
> > Which bit isn't really true?
>
> The last sentence ("It means that...").
>
> > When you do:
>
> >   os.environ['XYZ'] = 'ABC'
>
> > this results in a corresponding call to:
>
> >   putenv('XYZ=ABC')
>
> Generally true, but not when you did
>
>os.environ=dict(os.environ)
>
> Furthermore, you can make changes to environment variables
> without changing os.environ, which does allow for environment
> variable separation across subinterpreters.
>
> > As a platform provider and not the person writing the application I
> > can't really do it that way and effectively force people to change
> > there code to make it work. It also isn't just exec that is the issue,
> > as there are other system calls which can rely on the environment
> > variables.
>
> Which system calls specifically?

For a start, os.system(). The call itself may not rely on environment
variables, but users can expect environment variables they set in
os.environ to be inherited by the program they then run.
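That inheritance is easy to demonstrate: a value pushed into os.environ
(and hence into the C environment via putenv()) shows up in any child
process the interpreter spawns, without being passed explicitly:

```python
import os
import subprocess
import sys

os.environ['XYZ'] = 'ABC'   # also calls putenv() under the hood

# The child process sees the variable without it being passed explicitly.
out = subprocess.check_output(
    [sys.executable, '-c', "import os; print(os.environ.get('XYZ'))"])
print(out.decode().strip())  # ABC
```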

There would similarly be issues with use of popen2 module
functionality, because it doesn't provide a means of specifying a
user-specific environment and just inherits that of the current process.

Yes you could rewrite all these with execve in some way, but as I said
it isn't something you can really enforce on someone, especially when
they might be using a third party package which is doing it and it
isn't even their own code.

> > It is also always hard when you aren't yourself having the problem and
> > you are relying on others to try and debug their problem for you. More
> > often than not the amount of information they provide isn't that good
> > and even when you ask them to try specific things for you to test out
> > ideas, they don't. So often one can never uncover the true problem,
> > and it has thus become simpler to limit the source of potential
> > problems and just tell them to avoid doing it. :-)
>
> You do notice that my comment in that direction (avoid using multiple
> interpreters) started that subthread, right :-?

I was talking about avoiding use of different versions of a C
extension module in different sub interpreters, not multiple sub
interpreters as a whole.

Graham



Re: Multiple interpreters retaining huge amounts of memory

2008-02-03 Thread Graham Dumpleton
On Feb 4, 7:13 am, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:
> > You might also read section 'Application Environment Variables' of
> > that document. This talks about the problem of leakage of environment
> > variables between sub interpreters. There probably isn't much that one
> > can do about it as one needs to push changes to os.environ into C
> > environment variables so various system library calls will get them,
> > but still quite annoying that the variables set in one interpreter
> > then show up in interpreters created after that point. It means that
> > environment variable separation for changes made unique to a sub
> > interpreter is impossible.
>
> That's not really true. You can't use os.environ for that, yes.

Which bit isn't really true? When you do:

  os.environ['XYZ'] = 'ABC'

this results in a corresponding call to:

  putenv('XYZ=ABC')

as well as setting value in os.environ dictionary.

  >>> os.environ.__class__
  <class os._Environ at 0x...>

class _Environ(UserDict.IterableUserDict):
    def __setitem__(self, key, item):
        putenv(key, item)
        self.data[key] = item

Because os.environ is set from the current copy of C environ at time
the sub interpreter is created, then a sub interpreter created at a
later point will have XYZ show up in os.environ of that sub
interpreter.

> However,
> you can pass explicit environment dictionaries to, say, os.execve. If
> some library relies on os.environ, you could hack around this aspect
> and do
>
> os.environ = dict(os.environ)
>
> Then you can customize it. Of course, changes to this dictionary now
> won't be reflected into the C library's environ, so you'll have to
> use execve now (but you should do so anyway in a multi-threaded
> application with changing environments).

As a platform provider, and not the person writing the application, I
can't really do it that way and effectively force people to change
their code to make it work. It also isn't just exec that is the issue,
as there are other system calls which can rely on the environment
variables.

The only half reasonable solution I have ever been able to dream up is
that, just prior to first initialising Python, a snapshot of the C
environment is taken, and as sub interpreters are created, os.environ
is replaced with a new instance of the _Environ wrapper which uses the
initial snapshot rather than whatever the environment is at the time.
At least then each sub interpreter gets a clean copy of what existed
when the process first started.

Even this isn't really a solution though, as changes to os.environ by
sub interpreters still end up getting reflected in the C environment,
and so the C environment becomes an accumulation of settings from
different code sets, with a potential for conflict at some point.

Luckily this issue hasn't presented itself as a big enough problem at
this point to really be concerned about.

> > First is that one can't use different versions of a C extension module
> > in different sub interpreters. This is because the first one loaded
> > effectively gets priority.
>
> That's not supposed to happen, AFAICT. The interpreter keeps track of
> loaded extensions by file name, so if the different version lives in
> a different file, that should work fine.
>
> Are you using sys.setdlopenflags by any chance? Setting the flags
> to RTLD_GLOBAL could have that effect; you'ld get the init function
> of the first module always. By default, Python uses RTLD_LOCAL,
> so it should be able to keep the different versions apart (on
> Unix with libdl; on Windows, symbol resolution is per-DLL anyway).

That may be true, but I have seen enough people raise strange problems
that I at least counsel people not to rely on being able to import
different versions in different sub interpreters.

The problems may well just fall into the other categories we have been
discussing. Within Apache at least, another source of problems which
can arise is that Apache, or other Apache modules (eg. PHP), can
directly link to shared libraries where they are then loaded at global
context. Even if a Python module tries to isolate itself, one can
still end up with conflicts between the version of a shared library
that the module may want to use and what something else has already
loaded. The loader scope doesn't always protect against this.

It is also always hard when you aren't yourself having the problem and
you are relying on others to try and debug their problem for you. More
often than not the amount of information they provide isn't that good
and even when you ask them to try specific things for you to test out
ideas, they don't. So often one can never uncover the true problem,
and it has thus become simpler to limit the source of potential
problems and just tell them to avoid doing it. :-)

Graham


Re: Multiple interpreters retaining huge amounts of memory

2008-02-03 Thread Graham Dumpleton
Nice to see that your comments do come from some understanding of the
issues. There have been a number of times in the past when people have
gone off saying things about multiple interpreters when they didn't
really know what they were talking about and were just echoing what
someone else had said, and some of what was being said was often just
wrong. It just gets annoying. :-(

Anyway, a few comments below with pointers to some documentation on
various issues, plus details of other issues I know of.

On Feb 3, 6:38 pm, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:
> > If you are going to make a comment such as 'multi-interpreter feature
> > doesn't really work' you really should substantiate it by pointing to
> > where it is documented what the problems are or enumerate yourself
> > exactly what the issues are. There is already enough FUD being spread
> > around about the ability to run multiple sub interpreters in an
> > embedded Python application, so adding more doesn't help.
>
> I don't think the limitations have been documented in a systematic
> manner. Some of the problems I know of are:
> - objects can easily get shared across interpreters, and often are.
>    This is particularly true for static variables that extensions keep,
>    and for static type objects.

Yep, but basically a problem with how people write C extension
modules. Ie., they don't write them with the fact that multiple
interpreters can be used in mind.

Until code was fixed recently in trunk, one high profile module which
had this sort of problem was psycopg2. Not sure if there has been an
official release yet which includes the fix. From memory the problem
they had was that a static variable was caching a reference to the
type object for Decimal from the interpreter which first loaded and
initialised the module. That type object was then used to create
instances of Decimal type which were passed to other interpreters.
These Decimal instances would then fail isinstance() checks within
those other interpreters.

Some details about this in section 'Multiple Python Sub Interpreters'
of:

  http://code.google.com/p/modwsgi/wiki/ApplicationIssues
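The failure mode can be simulated without sub interpreters at all: any
time the "same" class is created twice, instances of one copy fail
isinstance() checks against the other. The classes below are stand-ins
for decimal.Decimal as seen by two different interpreters, not the real
psycopg2 code:

```python
def load_in_fresh_interpreter():
    # Stand-in for importing a module in a new sub interpreter: each
    # call yields a distinct class object, just as each interpreter
    # gets its own copy of decimal.Decimal.
    class Decimal(object):
        pass
    return Decimal

DecimalA = load_in_fresh_interpreter()  # type cached by the C extension
DecimalB = load_in_fresh_interpreter()  # type the current interpreter sees

value = DecimalA()                      # built from the stale cached type
print(isinstance(value, DecimalA))      # True
print(isinstance(value, DecimalB))      # False: same name, different class
```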

That section of documentation also highlights some of the other errors
that can arise where file objects in particular are somehow shared
between interpreters, plus issues when unmarshalling data.

You might also read section 'Application Environment Variables' of
that document. This talks about the problem of leakage of environment
variables between sub interpreters. There probably isn't much that one
can do about it as one needs to push changes to os.environ into C
environment variables so various system library calls will get them,
but still quite annoying that the variables set in one interpreter
then show up in interpreters created after that point. It means that
environment variable separation for changes made unique to a sub
interpreter is impossible.

> - Py_EndInterpreter doesn't guarantee that all objects are released,
>    and may leak. This is the problem that the OP seems to have.
>    All it does is to clear modules, sys, builtins, and a few other
>    things; it is then up to reference counting and the cycle GC
>    whether this releases all memory or not.

There is another problem with deleting interpreters and then creating
new ones. This is where a C extension module doesn't hold its own
reference counts on Python objects it keeps in static variables. When
the interpreter is destroyed, and the objects that can be destroyed are
destroyed, it may destroy those objects still referenced by the static
variables. When a subsequent interpreter is created which tries to use
the same C extension module, that static variable now contains a
dangling, invalid pointer to freed or reused memory.

PEP 3121 could help with this by making it more obvious what
requirements exist on C extension modules to cope with such issues.

I don't know whether it is a fundamental problem with the tool or how
people use it, but Pyrex generated code seems to also do this. This
was showing up in PyProtocols in particular when attempts were made to
recycle interpreters within the lifetime of a process. Other packages
having the problem were psycopg2 again, lxml and possibly Subversion
bindings. Some details on this can be found in section 'Reloading
Python Interpreters' of that document.

> - the mechanism of PEP 311 doesn't work for multiple interpreters.

Yep, and since SWIG defaults to using it, it means that SWIG generated
code can't be used in anything but the main interpreter. Subversion
bindings seem to possibly have a lot of issues related to this as
well. Some details on this can be found in section 'Python Simplified
GIL State API' of that document.

> > Oh, it would also be nice to know exactly what embedded systems you
> > have developed which make use of multiple sub interpreters so we can
> > gauge with what standing you have to make such a comment.
>
> I have never used that feature myself. However, I wrote PEP 3121
> 

Re: Multiple interpreters retaining huge amounts of memory

2008-02-02 Thread Graham Dumpleton
On Feb 2, 12:34 pm, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:
> > Is there some way to track references per interpreter, or to get the
> > memory allocator to set up seperate arenas per interpreter so that it
> > can remove all allocated memory when the interpreter exits?
>
> No. The multi-interpreter feature doesn't really work, so you are
> basically on your own. If you find out what the problem is, please
> submit patches to bugs.python.org.
>
> In any case, the strategy you propose (with multiple arenas) would *not*
> work, since some objects have to be shared across interpreters.
>
> Regards,
> Martin

The multi interpreter feature has some limitations, but if you know
what you are doing and your application can be run within those
limitations then it works fine.

If you are going to make a comment such as 'multi-interpreter feature
doesn't really work', you really should substantiate it by pointing to
where the problems are documented, or by enumerating exactly what the
issues are yourself. There is already enough FUD being spread around
about the ability to run multiple sub interpreters in an embedded
Python application, so adding more doesn't help.

Oh, it would also be nice to know exactly what embedded systems you
have developed which make use of multiple sub interpreters so we can
gauge with what standing you have to make such a comment.

Graham
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Web Interface Recommendations

2008-01-29 Thread Graham Dumpleton
On Jan 30, 12:00 pm, PurpleServerMonkey <[EMAIL PROTECTED]>
wrote:
> Looking for suggestions on the best framework to use for an
> applications web interface.
>
> The user interface doesn't need immediate feedback and will be cross
> platform so a web interface is a good solution especially since it
> operates as a client\server system and not standalone.
>
> However there's just so many frameworks to choose from it's very
> confusing. After a lot of reading it sounds like OSE or Cherrypy might
> be good options but I don't have any real world experience with these
> frameworks to go by.
>
> Basically the interface won't be the application, it's used to input
> data and have the application act on it. It's going to have a
> Postgresql database and a number of service\daemon processes that do
> the actual work. It will also contain some form based information for
> keeping track of requests etc. and grow to a fair number of screens
> over time.
>
> Given the above what framework would you recommend?

Surprised you even looked at OSE. Although OSE has some features for
building HTML based web interfaces, they are very basic and not really
intended for building major stuff. OSE can still be useful for writing
backend applications, but I would very much suggest you use just the
XML-RPC interfaces it provides to talk into its distributed messaging
system and service agent framework.

If you use the XML-RPC interfaces then you can use a proper web
application framework for doing the actual HTML based user interface
front end. At that point you can choose any of the major frameworks,
such as Django, Pylons, TurboGears, CherryPy or web.py.

Splitting the front end from the backend like this also means that
backend itself could be swapped out. Thus, instead of using OSE in the
backend, you could use simpler XML-RPC enabled Python applications, or
even use Pyro. In other words, you avoid intertwining code for front
end and backend too much, thus perhaps making it easier to change and
adapt as it grows.
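As a sketch of that split, the standard library's XML-RPC support is
enough to expose a backend function that any front end framework can
call over the wire. The service function, port handling and return
values here are invented for illustration:

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

# Backend: expose one service function over XML-RPC.  A front end built
# with Django, CherryPy etc. would call this rather than importing
# backend code directly.
def lookup_request(request_id):
    return {"id": request_id, "status": "pending"}

# Port 0 asks the OS for any free port; a real deployment would fix one.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(lookup_request)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Front end side: talk to the backend purely over the wire, so the
# backend could later be swapped for another XML-RPC enabled service.
proxy = xmlrpc.client.ServerProxy("http://localhost:%d" % port)
print(proxy.lookup_request(42))
server.shutdown()
```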

Graham
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python setup not working on Windows XP

2008-01-08 Thread Graham Dumpleton
On Jan 8, 5:31 pm, Tim Roberts <[EMAIL PROTECTED]> wrote:
> Gowri <[EMAIL PROTECTED]> wrote:
>
> >I am new to Python and am trying to setup Apache to serve Python using
> >mod_python. I'm using a Windows XP box. here is a list of steps i
> >followed for the installation:
>
> >1. Installed Apache 2.2.6
> >2. Installed Python 2.5.1
> >3. Installed mod_python 3.3.1
>
> >I then included the line
> >LoadModule python_module modules/mod_python.so in httpd.conf
>
> >I had this one line python file (print "Hello") in htdocs of Apache.
>
> Did you put it in a file called "hello.py"?  Did you create an AddHandler
> for .py files?  Did you create a PythonHandler referring to hello.py?

And did you (OP) read the mod_python documentation enough to know that
'print "Hello"' is in no way going to work with mod_python? You cannot
just throw an arbitrary bit of Python code using 'print' statements
into a file and have it somehow magically work. You need to write your
code to the mod_python APIs.

Graham

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is the best way to do dynamic imports ?

2007-12-30 Thread Graham Dumpleton
On Dec 31, 1:24 am, [EMAIL PROTECTED] wrote:
> Hi list and python gurus :-)
>
> I'm playing with some mod_python and web development. And in my code I
> need to do som dynamic imports.
> Right now I just do a:
>
> exec 'import '+some_modulename
>
> But it seems to easy, is there a "dark side" to doing it this way?
> (memory use,processing ,etc)
> And have to I check if the modul is already loaded?
>
> Another thing is how to call my dynamic imported moduls.
> Now I use exec (as with my modules), like this:
>
> exec 'newclass = '+classname+'()'
> newclass.somefunction()
>
> Again it seems to easy. Is there a better/proper way to do it?
>
> Do anybody now a good howto or tutorial to this?
>
> Many thanks and hope you all have a happy new year :-)
>
> /marc

If using mod_python, you should use mod_python's own dynamic module
importer. See the documentation for mod_python.apache.import_module()
in:

  http://www.modpython.org/live/current/doc-html/pyapi-apmeth.html

There is no need to use __import__ directly. By using mod_python's way
of doing things you can benefit from automatic module reloading,
provided certain constraints are met.
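Outside of mod_python, the exec based approach in the question is
better expressed with the built-in import machinery: import the module
by name and fetch the class with getattr(), with no evaluation of
assembled source strings. A small sketch using a current Python's
importlib, with the helper name invented for illustration:

```python
import importlib

def make_instance(module_name, class_name, *args, **kwargs):
    """Dynamically import module_name and instantiate class_name from it.

    importlib caches modules in sys.modules, so repeated calls do not
    re-import the module; no exec of built-up strings is needed.
    """
    module = importlib.import_module(module_name)
    cls = getattr(module, class_name)
    return cls(*args, **kwargs)

# For example, build a decimal.Decimal without a literal import statement.
obj = make_instance("decimal", "Decimal", "1.5")
print(obj)  # 1.5
```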

Graham
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to protect directory traversal in mod_python based custom apps

2007-12-25 Thread Graham Dumpleton
On Dec 24, 10:34 pm, "Ravi Kumar" <[EMAIL PROTECTED]> wrote:
> hi :)
> I was trying to develop a custom mod_python based web-site, just
> today. the problem I got
> though i liked the mod_python's feature of mapping and calling
> functions in python script by parsing the url.
> I mean, http://localhost/site/member/list?no=100
>
> would call site/member.py page's function list with arguments no=100.
> Thats a feature i liked.
> But PROBLEM 01:
> i have included in index.py a css link to say something media/base.css
> now when same page comes with URL index.py/index the URL becomes
> false. I am finding some better way to overcome this.
> Placing all CSS as static served is not a good idea,(like if CSS is
> dynamically generated).
> So according to you, what should be a better approach to this problem.

The mod_python.publisher code is arguably broken in the way it handles
the trailing slash problem.

For some discussion on the issue see:

  http://www.modpython.org/pipermail/mod_python/2006-March/020501.html

This includes some code which might be modified and used in a stack
handler arrangement to give you a relative anchor point to use on
URLs.

> PROBLEM 02:
> How can I prevent directory traversal.
> Take the case, i have five subdirs in dir 'site' named :
> components
> modules
> config
> templates
>
> and a file loader.py
>
> when a request comes as loader.py/pagename?renderType=xhtml
> it would call the function pagename which loads the pages from subdir
> 'templates' resolves the added components in pages from subdir
> 'components' where components uses custom modules from 'modules' and
> so on. Configuration subdir contains various configuration files in
> .py and .xml
>
> I don't want visitors to traverse and get list of all those subdirs.
> Those sub-dirs actually should no way be traversable online.
> Though I can prevent it using apache .htaccess and access directives
> in apache config.
>
> But many hosting server, apache config can't be edited (or maybe some
> situation). Then how can i block traversing the directory (what sort
> of implementation)
> Referring to the CodeIgniter PHP Framework, they place index.php in every
> dir. That doesn't seem a good idea, and if a person calls the pages
> providing the right path, they are able to execute files in the
> framework, though since those configs and other files don't return
> anything, there is no result.

If the ISP gives you some directory space which isn't part of the
exposed document tree, then simply move those subdirectories from the
document tree outside to the additional space you have. Then refer to
the files from there.

If you can't do that because the document tree is all you have, then
one remaining hack is to rename all the files in the subdirectories to
begin with '.ht' prefix. This would generally work as default Apache
configuration is to forbid access to any files starting with '.ht'
prefix.

Graham

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Detecting memory leaks on apache, mod_python

2007-12-21 Thread Graham Dumpleton
On Dec 21, 7:42 pm, Ilias Lazaridis <[EMAIL PROTECTED]> wrote:
> On Dec 19, 5:40 am, Ilias Lazaridis <[EMAIL PROTECTED]> wrote:
>
>
>
> > On Dec 17, 8:41 am, Ilias Lazaridis <[EMAIL PROTECTED]> wrote:
>
> > > How to detect memory leaks of python programms, which run in an
> > > environment like this:
>
> > >  * Suse Linux 9.3
> > >  * Apache
> > >  * mod_python
>
> > > The problem occoured after some updates on the infrastructure. It's
> > > most possibly caused by trac and it's dependencies, but several
> > > components of the OS where updated, too.
>
> > > Any nice tools which play with the above constellation?
>
> > > Thank's for any hints!
>
> > No tool available to detect memory leaks for python applications?
>
> So, anyone who hit's on this thread via a search will think
>
> a) that there's really no memory leak detection for python
> b) that this community is not very helpful

Comments like (b) will not help your chances one bit in respect of
getting an answer from anyone.

Maybe you should read:

  http://www.catb.org/~esr/faqs/smart-questions.html

Two things to note in here. First, choose your forum appropriately and
secondly show some courtesy rather than making accusations against the
community if no one answers.

If you want to see perhaps how you might be viewed by the open source
community when you make such a comment, perhaps also watch:

  http://video.google.com/videoplay?docid=-4216011961522818645

Now, since you think this is a Trac problem, why don't you go ask on
the Trac mailing list.

  http://groups.google.com/group/trac-users?lnk=li

Even a search of that forum will most likely yield previous
discussions about growing memory use of Trac as a result of things
like Python wrappers for Subversion or certain database adapters. It
may be a case of using a different version, or in some cases
configuration of your hosting environment, if using Apache, to recycle
Apache child processes after a set number of requests so as to restore
process sizes back to a low level.

So, do some research first in the correct places and then perhaps ask
any additional questions in the correct place also.

Graham
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: More than one interpreter per process?

2007-12-18 Thread Graham Dumpleton
On Dec 19, 3:07 pm, Roger Binns <[EMAIL PROTECTED]> wrote:
> Graham Dumpleton wrote:
> > When using mod_wsgi there is no problem with C extension modules which
> > use simplified GIL API provided that one configures mod_wsgi to
> > delegate that specific application to run in the context of the first
> > interpreter instance created by Python.
>
> Graham, I've asked you before but never quite got a straight answer.

Maybe because it isn't that simple. :-)

> What *exactly* should extension authors change their code to in order to
> be fully compatible?  For example how should the C function below be
> changed:
>
> void somefunc(void)
> {
>   PyGILState_STATE gilstate=PyGILState_Ensure();
>
>   abc();
>
>   Py_BEGIN_ALLOW_THREADS
>  def();
>   Py_END_ALLOW_THREADS
>
>   ghi();
>
>   PyGILState_Release(gilstate);
>
> }

What you do depends on what the overall C extension module does. It
isn't really possible to say how the above may need to be changed as
there is a great deal of context which is missing as far as knowing
how that function comes to be called. Presented with that function in
isolation I can only say that using simplified GIL API is probably the
only way of doing it and therefore can only be used safely against the
first interpreter created by Python.

If the direction of calling for a C extension module is always Python
code into C code and that is far as it goes, then none of this is an
issue as you only need to use Py_BEGIN_ALLOW_THREADS and
Py_END_ALLOW_THREADS.

The problem case is where C code needs to call back into Python code
and you are not using the simplified GIL API, in order to be able to
support multiple sub interpreters. The easiest thing to do here is to
cache a thread state object for the interpreter instance when you
first obtain the handle for the object which allows you to interact
with the C extension module internals. Later, when a callback from C
to Python code needs to occur, you look up the cached thread state
object and use that as the argument to PyEval_AcquireThread().

As example see:

  http://svn.dscpl.com.au/ose/trunk/software/python/opydispatch.cc

The thread state object is cached when the handle to an instance is
first created. Any callbacks which are registered remember the
interpreter pointer, and that is then used as the key to look up the
cached thread state.

This code was done a long time ago. It is possible that it needs to be
revised based on what has been learnt about simplified GIL API.

This way of doing things will also not work where it is possible that
sub interpreters are created and then later destroyed prior to process
exit because of the fact that the interpreter pointer is cached. But
then, enough C extension modules have this problem that recycling sub
interpreters in a process isn't practical anyway.

Note that the indicated file uses a global cache. The agent.hh/
opyagent.cc files at that same location implement a more complicated
caching system based on individual objects.

The software which this is from is probably a more extreme example of
what is required, but your example was too simple to draw any
conclusions from.

Graham
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: More than one interpreter per process?

2007-12-18 Thread Graham Dumpleton
On Dec 18, 8:24 pm, Roger Binns <[EMAIL PROTECTED]> wrote:
> sturlamolden wrote:
> > If one can have more than one interpreter in a single process,
>
> You can.  Have a look at mod_python and mod_wsgi which does exactly
> this.  But extension modules that use the simplified GIL api don't work
> with them (well, if at all).

When using mod_wsgi there is no problem with C extension modules which
use simplified GIL API provided that one configures mod_wsgi to
delegate that specific application to run in the context of the first
interpreter instance created by Python.

In theory the same should be the case for mod_python but there is
currently a bug in the way that mod_python works such that some C
extension modules using simplified API for the GIL still don't work
even when made to run in first interpreter.

Graham
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: More than one interpreter per process?

2007-12-18 Thread Graham Dumpleton
On Dec 19, 2:37 am, sturlamolden <[EMAIL PROTECTED]> wrote:
> On 18 Des, 10:24, Roger Binns <[EMAIL PROTECTED]> wrote:
>
> > You can.  Have a look at mod_python and mod_wsgi which does exactly
> > this.  But extension modules that use the simplified GIL api don't work
> > with them (well, if at all).
>
> mod_python uses Py_NewInterpreter() to create sub-
> interpreters. They all share the same GIL. The GIL is declared static
> in ceval.c, and shared for the whole process. But ok, if
> PyEval_AcquireLock() would take a pointer to a 'sub-GIL', sub-
> interpreters could run concurrent on SMPs. But it would require a
> separate thread scheduler in each subprocess.

In current versions of Python it is possible for multiple sub
interpreters to access the same instance of a Python object which is
notionally independent of any particular interpreter. In other words,
sharing of objects exists between sub interpreters. If you remove the
global GIL and make it per sub interpreter then you would lose this
ability. This may have an impact on some third party C extension
modules, or in embedded systems, which are able to cache simple Python
data objects for use in multiple sub interpreters so that memory usage
is reduced.

Graham
-- 
http://mail.python.org/mailman/listinfo/python-list

