Nathan Rixham wrote:
Lester Caine wrote:
Nathan Rixham wrote:
In your application itself, caching can be introduced at every level.
You've already got filesystem I/O caches provided by the operating
system; a well-tuned db server cache can make a big difference as well;
then on to opcode caches in languages like PHP, since it's interpreted
rather than compiled; then on to code optimizations such as using
static class caches behind getByID methods and similar; and finally
down to micro-optimizations, static code analysis and dead code
elimination, replacing (4*10) with (40), inlining static class members
/ constants, and other language-specific nuances.
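
Roughly, a static class cache behind a getByID method looks like this
(sketch only - the User class and table/column names are just for
illustration):

<?php
// Minimal sketch of a per-request static class cache: repeat lookups
// for the same id never hit the database a second time.
class User
{
    /** @var array<int, User> */
    private static $cache = array();

    private $id;
    private $name;

    private function __construct($id, $name)
    {
        $this->id   = $id;
        $this->name = $name;
    }

    public static function getByID(PDO $db, $id)
    {
        if (isset(self::$cache[$id])) {
            return self::$cache[$id];          // cache hit: no query
        }
        $stmt = $db->prepare('SELECT id, name FROM users WHERE id = ?');
        $stmt->execute(array($id));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        if ($row === false) {
            return null;                       // no such user
        }
        return self::$cache[$id] = new User($row['id'], $row['name']);
    }
}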

Actually THAT probably sums things up nicely. An approach suitable for
MySQL WILL probably be wrong for Postgres or Firebird. Certainly the
optimised SQL I use for my own applications is much improved if I
simply ignore compatibility with the databases other developers are
using. Libraries like Doctrine and even PDO probably can't match the
best approach a database-specific one may take. Firebird will keep the
'results' of previous searches in its cache and return result sets a
lot faster, such as being ready to return the next page of results
BEFORE the web page asks for it ;) But a database-agnostic approach is
not as efficient.

Yes, but you snipped a key paragraph there, because all the forms of
caching and optimization listed above, including db cache optimization,
will only improve the app's performance by small percentages, whereas
moving to a publishing model and producing static output will improve
performance several times over:

[[[
As a web developer, the most important interface to introduce caching
on is HTTP - a "static" or "published" HTML document which leverages
HTTP caching using Last-Modified / ETag headers will give circa 100
times better performance (or more) than a document dynamically
generated on each request. Not only that, but transparent proxies on
the network can come into effect and reduce the load on the server to
zero, and further, web browsers will serve repeat requests for the
document straight from RAM, again leading to zero load. When you
include those two factors it doesn't take long to see that the
performance difference over the network is so good it's nigh on
unmeasurable.
]]]
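
In PHP terms, validation caching on a published file is only a few
lines - a rough sketch (the file name and max-age are just examples):

<?php
// Serve a pre-rendered HTML file with HTTP validation caching.
$file  = 'cache/page.html';          // published output, example path
$mtime = filemtime($file);
$etag  = '"' . md5($file . $mtime) . '"';

header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $mtime) . ' GMT');
header('ETag: ' . $etag);
header('Cache-Control: public, max-age=60');

// If the client's copy is still current, answer 304 and send no body.
if ((isset($_SERVER['HTTP_IF_NONE_MATCH'])
        && trim($_SERVER['HTTP_IF_NONE_MATCH']) === $etag)
    || (isset($_SERVER['HTTP_IF_MODIFIED_SINCE'])
        && strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) >= $mtime)) {
    header('HTTP/1.1 304 Not Modified');
    exit;
}

readfile($file);                     // full response only when stale

Every 304 is a response the app never had to build, and once proxies
and the browser cache join in, most repeat requests never reach the
server at all.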

Feel free to ignore this yourself, but please don't promote a bit of
SQL and db server optimization as being the most important factor in
optimizing PHP applications; it is important, but the net result is
minimal in comparison to leveraging HTTP caching and static publishing
of components or entire documents.

For fixed pages this is the best way of handling the information. And
handling those fixed pages is ... from my point of view ... not a
problem, since they can be cached at that level, or even stored locally
in the browser cache. In fact I've just been hitting reload over and
over while processing a few updates, just to actually see the results.
But for the majority of my work, the data to be displayed is rebuilt
with every browser hit, and in that case generating dynamic pages fast
becomes the bottleneck.
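
That said, even a page rebuilt from live data can sometimes answer a
conditional request cheaply, by validating against the data's own last
change. Sketch only - the DSN, table and renderPage() names below are
invented:

<?php
// Assumes an open PDO connection (Firebird DSN shown only as example).
$db = new PDO('firebird:dbname=/data/app.fdb', 'sysdba', 'masterkey');

// When did the underlying data last change? (hypothetical table)
$mtime = strtotime($db->query(
    'SELECT MAX(updated_at) FROM audit_log')->fetchColumn());

header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $mtime) . ' GMT');

if (isset($_SERVER['HTTP_IF_MODIFIED_SINCE'])
        && strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) >= $mtime) {
    header('HTTP/1.1 304 Not Modified');   // skip the expensive rebuild
    exit;
}

echo renderPage($db);   // the expensive dynamic build (hypothetical)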

--
Lester Caine - G8HFL
-----------------------------
Contact - http://lsces.co.uk/wiki/?page=contact
L.S.Caine Electronic Services - http://lsces.co.uk
EnquirySolve - http://enquirysolve.com/
Model Engineers Digital Workshop - http://medw.co.uk/
Firebird - http://www.firebirdsql.org/index.php

--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php
