On Feb 12, 1:44 pm, John Heenan <johnmhee...@gmail.com> wrote:
> Where is the perspective?
>
> 1) Even with an 'ideal configuration that does not use MPM pre-
> forking, Apache still uses threads to service each request (for static
> files). This is still more inefficient than lightttpd and nginx that
> use an event model.

I am not disagreeing that for static file serving servers such as
nginx and lighttpd are more efficient in memory usage and performance.
I acknowledged in my post that for static file serving lighttpd
performs better.

Where appropriate and justified I even recommend that people use an
nginx front end to Apache specifically for hosting static files,
while proxying dynamic requests through to Apache for the Python web
application. The reason for this goes far beyond just whether an
event system uses less memory than a threaded system and can handle
more concurrent connections for static file requests and pass-through
proxying. Using a front end also brings other benefits in relation to
the handling of keep alive connections and slow HTTP clients. Use an
nginx front end the right way and you can actually improve Apache
performance and resource utilisation for a dynamic web application,
because you need fewer processes/threads in Apache to handle the same
number of dynamic requests: Apache no longer has to worry about
connections held open by keep alive, as that is bumped off to the
nginx front end. Use fewer processes/threads and obviously your
Apache memory usage comes down.
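
As a rough illustration (the hostname, paths and backend port here
are placeholders, not a recommendation), an nginx front end along
these lines serves static files itself and proxies everything else
through to Apache:

```nginx
# Hypothetical sketch: nginx handles static files and keep alive
# connections; dynamic requests are proxied to Apache/mod_wsgi
# listening on 127.0.0.1:8080.
server {
    listen 80;
    server_name example.com;

    # Keep alive connections are held open by nginx, not Apache.
    keepalive_timeout 65;

    # Static files served directly by nginx.
    location /static/ {
        root /var/www/example;
    }

    # Everything else goes through to the Apache backend.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```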

> 2) No one is going to take anyone seriously when they imply Apache
> bloatware can be configured to take a lower memory footprint than
> Lighttpd for the same job.

Your use of the term bloatware just reinforces the impression that
you do not know how to set up and configure Apache, nor how to use it
in conjunction with other tools where appropriate when hosting Python
web applications. And I never said that one could end up with a lower
memory footprint using Apache; those are your words. I am only taking
issue with your claim that Apache is overly bloated, and instead put
forward that Apache memory usage can be trimmed down if you take the
time to set it up properly.

There are many things one can do to cut Apache down to an absolute
minimum memory usage for the type of application being run. The
majority of people do not even attempt to do that and use the default
generic configuration. Most of the others still don't do all that
could be done. So, it is very few in the end who properly optimise
their Apache setup. In the end it usually doesn't even matter that
most don't, because the dreams of many as to how much traffic their
web site will get far exceed reality, and even a poorly configured
Apache installation will easily handle the traffic they do get. It is
only those who persist with prefork and a PHP-biased configuration
and try to run even a small Python web application embedded within
the same server who particularly have problems, and that is where the
myth of bloatedness in Apache, when talking about Python web
applications, often arises. This hasn't been helped one bit by past
problems with mod_python, which gave people the wrong idea about
using Python with Apache in general.
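
To make that concrete, a trimmed-down worker MPM configuration might
look something like the following (the numbers here are illustrative
only, to be tuned to the actual site):

```apache
# Hypothetical sketch: worker MPM cut down for a small Python site.
# A single multithreaded child process with a modest thread pool,
# instead of default settings sized for a busy prefork/PHP server.
<IfModule mpm_worker_module>
    StartServers           1
    ServerLimit            1
    ThreadsPerChild       15
    MaxClients            15
    MinSpareThreads        5
    MaxSpareThreads       15
</IfModule>

# Also load only the Apache modules actually needed; every extra
# module adds to the per-process memory footprint.
```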

Anyway, we aren't talking about using lighttpd and Apache for the
same job. Lighttpd is not capable of doing integrated Python web
application hosting. All it can effectively do is host static files
and proxy using HTTP or FASTCGI/SCGI-like wire protocols.

In talking about Apache/mod_wsgi I am talking about a more integrated
system for Python web application hosting, or at least one that takes
into consideration all processes. If you compare lighttpd by itself,
doing only static file serving and proxying, while ignoring the
separate process which holds your fat web2py application, then of
course lighttpd by itself is going to have a smaller footprint. So,
stop ignoring the fat Python web2py application which is still part
of the overall system, as that is what predominates and is what you
should be worrying about. Choose an incorrect configuration where you
unnecessarily create multiple copies of that fat Python web
application and sure, you will get bloat. This applies just as much
to FASTCGI as to Apache itself. Some FASTCGI default configurations
allow up to a maximum of 100 FASTCGI processes in total. Hit a slow
web application with enough traffic and you end up with lots of fat
processes with FASTCGI as well.
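
For example, a lighttpd FASTCGI section along these lines (the paths
and the process cap are placeholders) is where that limit lives; set
"max-procs" generously on a slow application and each spawned process
carries a full copy of the application:

```
# Hypothetical sketch: lighttpd FASTCGI backend for web2py, with an
# explicit cap on how many fat application processes may be spawned.
fastcgi.server = ( "/handler_web2py.fcgi" =>
    ( "web2py" =>
        ( "socket"    => "/tmp/web2py.sock",
          "max-procs" => 2    # each process holds the whole app
        )
    )
)
```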

Either way, what you need to put in context is that the amount of
memory your fat Python web application uses far outweighs the memory
overhead a specific web server has at the base level for handling
requests, whether for static file requests or dynamic requests. Sure,
Apache has more overhead. Configure it correctly though and it isn't
as great as claims of bloatedness would make out. Even then, the
additional base memory overhead is just a small percentage of the
overall memory used by the fat Python web application, so much so
that it is usually inconsequential.

> 3) How Python and web2py use threads to process information
> transferred through a socket has nothing to do with the web server.
> There is just a single socket or 'pipe'. Essentially the web server
> acts as a pretty dumb pipe. The web server should not be a big issue.
> It needs to just do its job quickly and efficiently and then get out
> of the way.

Yes it does. The other thing you have to remember is that when
running web2py, or any other Python web application for that matter,
the bulk of the requests are for dynamic web pages and not for static
files. Even where you run it under something like FASTCGI rather than
an integrated/embedded system, it doesn't matter one bit that your
front end web server can handle 5000 concurrent connections, because
the number of concurrent connections the backend Python web
application can handle is limited by the size of the thread pool it
uses, as well as by the performance implications of the Python GIL.
Further bottlenecks will exist in any database access you have.

In other words, just because you have a large river flowing in one end
doesn't mean you can shove it all through the garden hose at the other
end.
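
A minimal Python sketch of that bottleneck (the pool size and request
count are arbitrary numbers for illustration): however many requests
the front end accepts, only as many run in the application at once as
there are worker threads.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

POOL_SIZE = 5       # size of the backend's thread pool
NUM_REQUESTS = 50   # concurrent connections accepted by the front end

in_flight = 0
peak = 0
lock = threading.Lock()

def handle_request(n):
    # Track how many requests the backend actually runs concurrently.
    global in_flight, peak
    with lock:
        in_flight += 1
        peak = max(peak, in_flight)
    time.sleep(0.01)   # simulate dynamic page generation
    with lock:
        in_flight -= 1
    return n

with ThreadPoolExecutor(max_workers=POOL_SIZE) as pool:
    results = list(pool.map(handle_request, range(NUM_REQUESTS)))

# Peak concurrency never exceeds the pool size, no matter how many
# requests were queued up by the front end.
print(peak)
```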

Such a model can even cause problems when the front end uses an event
driven system model. Take for example the nginx/mod_wsgi
implementation. It tries to put a blocking WSGI implementation
directly on top of the event driven system model of nginx. Sure, it
may seem to work, but the model is somewhat flawed because of the
problems that can come up if it is deployed in a multiprocess
configuration, as explained in:

  http://blog.dscpl.com.au/2009/05/blocking-requests-and-nginx-version-of.html

> 4) FastCGI is not WGSI. In web2py the internal FastCGI server upgrades
> the FastCGI socket information to WGSI to use existing WGSI
> infrastructure but this is irrelevant. The code is short and simple.
> This is all irrelevant to the web server.

Just as mod_wsgi is not WSGI but an implementation of that interface.
You appear, though, to know little or nothing about how mod_wsgi
works with Apache.
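
For what it's worth, a minimal mod_wsgi daemon mode setup (the paths
and process group name here are placeholders) looks something like:

```apache
# Hypothetical sketch: run web2py in its own mod_wsgi daemon process
# group, separate from the Apache child processes that serve static
# files and proxy, so the fat application exists in one process only.
WSGIDaemonProcess web2py processes=1 threads=15
WSGIProcessGroup web2py
WSGIScriptAlias / /var/www/web2py/wsgihandler.py
```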

> 5) Using the internal web server with web2py is not recommended. The
> question remains what is the best choice for an external web server.
> The answer is certainly not bloatware like Apache.

Overall, all I can say is that you don't seem to really understand
what options exist for hosting Python WSGI applications with Apache,
and that you are also failing to see the bigger picture. It is like
the people who get excited over benchmarks from event driven Python
web servers such as Tornado. Not only were their benchmarks flawed,
because they effectively compared a simple event driven hello world
program with a blocking full Django stack, they also completely
ignored the reality that the web server is never going to be the
bottleneck. The same sort of thing happens when one talks about
memory usage, with the actual memory used by the web server itself
generally being insignificant compared to the overall memory used by
the Python web application. There are much better things one can do
to improve web application performance than chasing a dream that a
different web server or architecture will somehow magically solve all
your problems.

Graham

> John Heenan
>
> On Feb 12, 12:16 pm, Graham Dumpleton <graham.dumple...@gmail.com>
> wrote:
>
>
>
> > On Feb 12, 1:04 pm, John Heenan <johnmhee...@gmail.com> wrote:
>
> > > Hello Graham, whoever you are.
>
> > > You sound highly confused, clueless aboout how to present objective
> > > data and a fairly typical bombastic nerd of the type that clogs up and
> > > plagues forums.
>
> > > Get a life
>
> > I think you will find that I have a lot more credibility over this
> > issue than you might because of the work I have done in the past which
> > relates specifically to Python and WSGI hosting mechanisms, including
> > the many posts in various forums explaining where people get it wrong
> > in setting up Apache.
>
> > In future you might want to do your home work and perhaps look into
> > why I might say what I have before you dismiss it off hand.
>
> > Graham
>
> > > John Heenan
>
> > > On Feb 12, 11:32 am, Graham Dumpleton <graham.dumple...@gmail.com>
> > > wrote:
>
> > > > On Feb 12, 9:59 am, John Heenan <johnmhee...@gmail.com> wrote:
>
> > > > > How about web2py in a VPS using less than 40MB RAM?
>
> > > > > You can reduce web2py memory usage by using a newer generation web
> > > > > server with web2py instead of the internal web server with web2py.
>
> > > > Not really.
>
> > > > > Apache gets trashed in tests by newer generation web servers such as
> > > > > lightttpd and nginx.
>
> > > > Only for static file serving.
>
> > > > > Apache also uses far more memory.
>
> > > > For hosting a dynamic Python web application it doesn't have to. The
> > > > problem is that the majority of people have no clue about how to
> > > > configure Apache properly and will leave it as the default settings.
> > > > Worse, they load up PHP as well which forces use of prefork MPM which
> > > > compounds the problems.
>
> > > > > The reason is simple. Apache services each request with a thread.
> > > > > Nginx amd lightttpd service each request with an event model.
>
> > > > A WSGI application like web2py however isn't event based and requires
> > > > the threaded model. You are therefore still required to run web2py in
> > > > a threaded system, or at least a system which uses a thread pool on
> > > > top of an underlying thread system. Your arguments are thus moot, as
> > > > as soon as you have to do that, you end up with the same memory usage
> > > > profile issues as with Apache's threaded model.
>
> > > > > I only use lightttpd for static pages and to remap URLs.
>
> > > > > This is my memory usage with lighthttpd and web2py from command 'ps
> > > > > aux'.
>
> > > > > resident memory units are in KB
> > > > > virtual memory units are 1024 byte units
>
> > > > > lighttpd: resident memory 3660, virtual memory 59568
> > > > > python for web2py: resident memory 32816, virtual memory 225824
>
> > > > So, 32MB for web2py.
>
> > > > Now configure Apache with a comparable configuration, presumed single
> > > > process which is multithreaded and guess what, it will be pretty close
> > > > to 32MB still.
>
> > > > If you are stupid enough to leave Apache with prefork MPM because of
> > > > PHP and use embedded mode with mod_python or mod_wsgi, then of course
> > > > you will get up to 100 processes each of 32MB, because that is what
> > > > the PHP biased configuration will give.
>
> > > > Even in that situation you could used mod_wsgi daemon mode and shift
> > > > web2py to its own process, which means again that all it takes is
> > > > 32MB. The memory of Apache server child process handling static and
> > > > proxying will still be an issue if using prefork, but if you ditch PHP
> > > > and change to worker MPM you can get away with a single or maybe two
> > > > such processes and drastically cut back memory usage.
>
> > > > For some background on these issues read:
>
> > > >  
> > > > http://blog.dscpl.com.au/2009/03/load-spikes-and-excessive-memory-usa...
>
> > > > Anyway, if you aren't up to configuring Apache properly, by all means
> > > > use lighttpd or nginx.
>
> > > > Graham
>
> > > > > This is the memory usage of a python console WITHOUT any imports:
> > > > > resident memory 3580, virtual memory 24316
>
> > > > > John Heenan
>
> > > > > On Feb 11, 10:30 pm, raven <ravenspo...@yahoo.com> wrote:
>
> > > > > > It seems that everyone is running with Apache and gobs of memory
> > > > > > available.

-- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To post to this group, send email to web...@googlegroups.com.
To unsubscribe from this group, send email to 
web2py+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/web2py?hl=en.
