On 5/1/06, Luke, David <[EMAIL PROTECTED]> wrote:
Mary,

Your program should not "wait" for a web user to click a link or a
Submit button -- ever. No CPU cycles and no storage are reserved once
you finish building the web page.

Your whole program should draw a web page and exit.
The user is then not wasting your time (and money) while he reads what
you wrote and decides to click something.

When he does click something, it will start up the same or a different
program on your server from scratch, and that will create another web
page, perhaps showing the next row.
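
A minimal sketch of that draw-a-page-and-exit model, assuming plain
CGI.pm (the page content here is invented):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use CGI;

    # Each request starts a fresh process: build one page, print it, exit.
    my $q = CGI->new;
    print $q->header('text/html'),
          $q->start_html('One row'),
          $q->p('Here is the row you asked for.'),
          $q->end_html;
    # Nothing survives past this point -- no variables, no handles, no state.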



And in order to fetch the next row, we need some method of saving
state. I'm pretty sure that's what the OP meant by "getting the array
to persist." Unless we know what row the user is reading, we don't
know what row to send next. That means either saving session info
locally and passing a session id as a hidden field, or passing the
session info itself in a hidden field to resume the session without
caching on the server.
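
A sketch of the hidden-field approach, assuming CGI.pm
(generate_session_id is a hypothetical helper, not a real module call):

    use CGI;
    my $q = CGI->new;

    # Resume the session if the browser sent an id back; else start one.
    my $sid = $q->param('sid') || generate_session_id();  # hypothetical

    print $q->start_form(-method => 'POST'),
          $q->hidden(-name => 'sid', -value => $sid, -override => 1),
          $q->submit(-name => 'next', -value => 'Next row'),
          $q->end_form;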

It is unnecessary to bring down an array of items from the database if
you are going to use only one. Your program should be retrieving and
displaying only one row, or enough extra to handle navigation.

You should pass some sort of information along with that row so that
when the user clicks for the next one, a program can retrieve and
display it. Several people have offered session schemes for persisting
your retrieval data during repeated calls. All are involved and some
could open vulnerabilities to hacking.
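
A sketch of that, assuming DBI with a MySQL-style LIMIT (table,
columns, and connection details are all invented):

    use DBI;

    my $dbh = DBI->connect('dbi:mysql:mydb', $db_user, $db_pass,
                           { RaiseError => 1 });

    # One row for display, plus one more so we know whether to offer
    # a "Next" link -- not the whole result set.
    my $rows = $dbh->selectall_arrayref(
        'SELECT id, title, body FROM items WHERE id >= ? ORDER BY id LIMIT 2',
        { Slice => {} }, $current_id,
    );
    my ($current, $peek) = @$rows;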


Welcome to the web. I certainly wouldn't advocate sending the entire
query result to the client and then reading it back in. Nor would I
advocate eval'ing Data::Dumper output that has been to the client at
any point. That said, I've done both on internal apps, and with proper
taint checking it shouldn't be any worse than accepting input to
construct a database query.

Storable or DBM are certainly safer options. They also have higher
overhead, although not as much as DBI. Caveat Emptor; YMMV; etc., etc.
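
A sketch of the Storable route, assuming a per-session file ($sid and
the path are invented):

    use Storable qw(store retrieve);

    my $cache = "/tmp/app-rows-$sid.sto";   # hypothetical per-session file

    # Save the fetched rows once...
    store(\@rows, $cache);

    # ...and on a later request, read them back instead of re-querying.
    my $rows = -e $cache ? retrieve($cache) : undef;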

The database you use determines the exact syntax necessary for
retrieving the "Next" row, when presented with information from the
"Current" row. That information will have to go roundtrip out to the
user's browser and then back in to the server for a NEW invocation of
your program. Again, do not retrieve all of the rows unless you will
pass them all to the user WITHOUT another trip to the server.
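
One concrete shape for that "Next" query, assuming an integer primary
key (LIMIT syntax varies by database; names are invented):

    # The "Current" row's id came back from the browser -- e.g. in a
    # hidden field -- on this new invocation of the program.
    my $current_id = int($q->param('id') || 0);   # force integer: untaint

    my $next = $dbh->selectrow_hashref(
        'SELECT id, title, body FROM items WHERE id > ? ORDER BY id LIMIT 1',
        undef, $current_id,
    );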


While that's often the way it works, it's not the only way, or even
the best way. Database connections are expensive, and there are many
situations where it makes more sense to fetch the data in chunks,
cache it locally, and then feed it to the user as needed. Database
access is often the bottleneck in CGI scripts, especially if you have
connections that don't close properly. And what database closes all of
its connections properly all the time? In fact, I'll go a little
farther than that and say that it rarely makes sense to fetch a single
row from a database if you know you're going to need more. Database
connections are fixed overhead. And in most cases, database queries
approach being fixed overhead. That is, it will take the same amount
of time to connect to the database each time you connect (more or
less). And in many cases it will take about the same amount of time to
fetch 50 rows as it will to fetch one. If the time it takes to connect
is dc and the time it takes to run the query and return results is dq,
then the
time it takes to fetch a single row is

   dc + dq

The time it takes to fetch 50 rows is

  ~ (dc + dq)

But probably < (dc + dq + 1 sec). The time it takes to fetch 50 rows
one at a time, though, is by definition

  50 * (dc + dq)
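
To put rough (invented) numbers on that: with dc = 200ms and dq = 50ms,

  one row, one trip:     dc + dq        = 250ms
  50 rows, one trip:     ~(dc + dq)     = ~250ms
  50 rows, 50 trips:     50 * (dc + dq) = 12.5s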

If your database is faster than your webserver, maybe that makes
sense, but in general, the two techniques that, more than any others,
can increase the efficiency of your dynamic content displays are
prefetching and caching. Serving dynamic content is like chess: you
have to respond to what your user is doing, but you can't win unless
you anticipate what he's going to do. How proactive you have to be vs.
how reactive you have the luxury to be will depend mostly on your
hardware, and on whether you're using mod_perl, FastCGI, etc. The
faster you can turn around database and HTTP requests as they come in,
the less important caching and prefetching become.
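
A minimal prefetch-and-cache sketch along those lines, assuming
Storable and a per-session file (all names are invented):

    use DBI;
    use Storable qw(store retrieve);

    my $cache = "/tmp/rows-$sid.sto";   # hypothetical per-session cache
    my $rows;

    if (-e $cache) {
        # Cache hit: no database connection at all on this request.
        $rows = retrieve($cache);
    } else {
        # Cache miss: pay dc + dq once for a whole chunk of rows.
        my $dbh = DBI->connect('dbi:mysql:mydb', $db_user, $db_pass,
                               { RaiseError => 1 });
        $rows = $dbh->selectall_arrayref(
            'SELECT id, title, body FROM items ORDER BY id LIMIT 50',
            { Slice => {} },
        );
        store($rows, $cache);
    }

    # Serve whichever row the user is on; $n round-trips as a hidden field.
    my $row = $rows->[$n];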

HTH,

-- jay
--------------------------------------------------
This email and attachment(s): [  ] blogable; [ x ] ask first; [  ]
private and confidential

daggerquill [at] gmail [dot] com
http://www.tuaw.com  http://www.dpguru.com  http://www.engatiki.org

values of β will give rise to dom!
