Re: dynamic vs. mostly static data

2000-11-08 Thread Marinos J. Yannikos

> Also, moving all static content, mostly images, off to another server
> helps tremendously.

True, we had an extra thttpd for static content at one point while we were
short on memory. Something else that seems to work well, although I can't
really explain it, is to disable keepalive support. For some reason, the
number of concurrent processes (for a single server setup) went from 70-80
to approx. 20(!), without a noticeable drop in performance or page
impressions.
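
For reference, the change amounts to one line in httpd.conf (the commented
alternatives are illustrative values, not our actual settings):

    KeepAlive Off
    # or, to keep it on but limit how long a process stays tied up:
    # KeepAlive On
    # KeepAliveTimeout 5
    # MaxKeepAliveRequests 100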

My guess is that with this configuration, since some httpd's are busy
generating dynamic pages, those available for static content are usually
(i.e. with a higher probability) those that just served static content and
finished quickly, so the number of httpd's stays near the number of
concurrent dynamic page accesses plus the handful needed for in-flight
static requests. Roughly speaking, the number of busy processes is the
request rate times the time each connection is held open, and keepalive
inflates that hold time from a fraction of a second to the full keepalive
timeout whenever the client is slow. That would explain why there are so
many httpd's with keepalives on. Does this make sense?

Regards,
-mjy
--
Marinos J. Yannikos
Preisvergleich Internet Services AG, Linke Wienzeile 4/2/5, A-1060 Wien
Tel/Fax: (+431) 5811609-52/-55





Re: dynamic vs. mostly static data

2000-11-07 Thread Marinos J. Yannikos

> If you have a caching proxy server running in front of your mod_perl
> server (like mod_proxy or Squid), you can just set Expires headers in your
> pages and this will be handled for you by the proxy.

True, both methods have advantages and disadvantages. The advantages of
using mod_rewrite and static pages are:
- less configuration / maintenance overhead
- somewhat fewer resources needed
- no extra care needed with headers
- static pages can be kept around as long as needed and removed at any time;
individual pages can be deleted and regenerated on their own

The obvious disadvantage is that it can be tricky to devise and maintain a
URL-to-file encoding scheme that handles all the possible arguments for
dynamic pages.
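
To make the mod_rewrite approach concrete, here is a sketch of the kind of
ruleset we mean (the /cache and /perl/render paths are made up):

    RewriteEngine On
    # if a pre-generated static page exists, serve it directly ...
    RewriteCond   %{DOCUMENT_ROOT}/cache%{REQUEST_URI} -f
    RewriteRule   ^(.+)$ /cache$1 [L]
    # ... otherwise pass the request to the mod_perl handler,
    # which also writes the static file for next time
    RewriteRule   ^(.+)$ /perl/render$1 [PT,L]

The back-end handler writes the finished page under the same encoded name
before returning it, so the next hit for that URL is served statically.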

-mjy
--
Marinos J. Yannikos
Preisvergleich Internet Services AG, Linke Wienzeile 4/2/5, A-1060 Wien
Tel/Fax: (+431) 5811609-52/-55





Re: dynamic vs. mostly static data

2000-11-07 Thread Marinos J. Yannikos

> Only if you don't already have a proxy front-end.  Most large sites will
> need one anyway.

After playing around for a while with mod_proxy on a second server, I'm not
so convinced; we have been doing quite well without such a setup for some
time now, despite up to 70-80 httpd processes (with mod_perl) during busy
hours. Since we try to keep our website very responsive, the additional
latency we observed with a mod_proxy-based front-end was noticeable (though
still in the tenths-of-a-second range).

Perhaps with some more tweaking we could have avoided that (yes, we did turn
off KeepAlive for the mod_perl back-end!), but together with the other
drawbacks (remote IP tracking is difficult or impossible in some cases, lots
of <IfModule mod_proxy.c> sections in the config files, etc.), it seemed to
be too much of a hassle while we still have enough memory for all the
httpd processes.
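
For anyone who wants to experiment anyway, the front-end part boiled down to
sections like this (hostname, port and paths are only examples, not our
setup):

    <IfModule mod_proxy.c>
        # forward dynamic requests to the mod_perl back-end
        ProxyPass        /perl/ http://127.0.0.1:8080/perl/
        ProxyPassReverse /perl/ http://127.0.0.1:8080/perl/
    </IfModule>

The remote-IP problem comes from the back-end only ever seeing the proxy's
address; as far as I know there are add-on modules (e.g.
mod_proxy_add_forward) that pass the real client IP along in an
X-Forwarded-For header, but that is one more piece to maintain.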

Regards,
-mjy
--
Marinos J. Yannikos
Preisvergleich Internet Services AG, Linke Wienzeile 4/2/5, A-1060 Wien
Tel/Fax: (+431) 5811609-52/-55





Re: dynamic vs. mostly static data

2000-11-06 Thread Marinos J. Yannikos

> How can I 'cache' this data so that all Apache children can
> access it quickly? Is there a way to automatically update
> this cache periodically (say every 10 minutes)?

If possible with your data, it'd probably be a good idea to generate static
pages on-the-fly using mod_rewrite as in the related guide:
http://www.engelschall.com/pw/apache/rewriteguide/#ToC33.
You'll have to come up with a scheme for encoding all query parameters in
the URL, and you can then just remove old pages periodically. It works fine here
for a different application (thumbnails in arbitrary sizes generated
on-the-fly).

If that is out of the question, you can still use a simple filesystem-based
caching mechanism (by mapping query args to unique, safe filenames) or
something like IPC::Cache to cache the query results.
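
In case it helps, here is a rough sketch of the filesystem variant
(Digest::MD5 and Storable are real modules, but the directory, TTL and
function name are hypothetical):

    use Digest::MD5 qw(md5_hex);
    use Storable qw(store retrieve);

    my $cache_dir = '/var/cache/query';   # made-up location
    my $ttl       = 600;                  # regenerate after 10 minutes

    sub cached_query {
        my ($query_string, $fetch) = @_;  # $fetch must return a reference
        # map arbitrary query args to a unique, filesystem-safe name
        my $file = "$cache_dir/" . md5_hex($query_string);
        if (-f $file && time - (stat _)[9] < $ttl) {
            return retrieve($file);       # cached result is still fresh
        }
        my $result = $fetch->();
        store($result, "$file.$$");       # write privately, then rename
        rename("$file.$$", $file);        #  atomically into place
        return $result;
    }

Called e.g. as cached_query($ENV{QUERY_STRING}, sub {
$dbh->selectall_arrayref($sql) }); the rename avoids another child reading a
half-written cache file.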

-mjy
--
Marinos J. Yannikos
Preisvergleich Internet Services AG, Linke Wienzeile 4/2/5, A-1060 Wien
Tel/Fax: (+431) 5811609-52/-55





Zombies with dbiproxy

2000-10-27 Thread Marinos J. Yannikos

Has anyone used dbiproxy extensively? I seem to be getting hundreds of
zombie Perl processes (until the proc table is full) after a while. I only
replaced my data source string with an equivalent dbi:Proxy:... string and
started dbiproxy. Is this a known bug, or is there something I should have
done differently?

I was trying to reduce the number of active postgres client processes
(Apache::DBI created too many) while still avoiding the postgres connect()
overhead. It turned out that the latter was much less severe than the
performance problems caused by ~100 active postgres client processes, so I
stopped using Apache::DBI. Any other suggestions?
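
I can't say whether this is what bites dbiproxy, but zombies in general just
mean that nobody wait()s for exited children; if you fork anything yourself,
the standard reaping idiom is:

    use POSIX ':sys_wait_h';

    # reap all exited children without blocking (generic idiom,
    # not a dbiproxy-specific fix)
    $SIG{CHLD} = sub {
        1 while waitpid(-1, WNOHANG) > 0;
    };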

Regards,
 Marinos
--
Marinos J. Yannikos
Preisvergleich Internet Services AG, Linke Wienzeile 4/2/5, A-1060 Wien
Tel/Fax: (+431) 5811609-52/-55
== Geizhals price comparison - now also for Germany:
http://geizhals.cc/de/