Exactly, it's like disk inodes. Instead of one large query, where the
database has to transmit the whole result set over the network to the
webserver (assuming a multi-tier architecture) and hold it in memory, you
break it down into smaller queries: the webserver keeps asking for the
next chunk as each one is delivered to the client via whatever method you
are using (HTTP, FTP, etc.).
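
Something like this rough, untested sketch. The table and column names
(file_segments, file_id, seg_no, data) and the connection details are
just made up for illustration:

<?php
// Sketch only. Assumes a table along the lines of:
//   CREATE TABLE file_segments (
//     file_id INT, seg_no INT, data MEDIUMBLOB,
//     PRIMARY KEY (file_id, seg_no)
//   );
$db = new mysqli('localhost', 'user', 'pass', 'files');  // example creds

$file_id = (int)$_GET['id'];
$seg_no  = 0;

header('Content-Type: application/octet-stream');

// One small query per segment; only one 64k chunk is ever held in the
// webserver's memory at a time.
$stmt = $db->prepare(
    'SELECT data FROM file_segments WHERE file_id = ? AND seg_no = ?');
$stmt->bind_param('ii', $file_id, $seg_no);  // bound by reference,
$stmt->bind_result($chunk);                  // so updating $seg_no works

while (true) {
    $stmt->execute();
    $stmt->store_result();        // buffer this one small row
    $ok = $stmt->fetch();
    $stmt->free_result();
    if (!$ok) {
        break;                    // no more segments, file is done
    }
    echo $chunk;                  // push this chunk to the client
    flush();
    $seg_no++;
}
$stmt->close();
?>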

It also works around max_allowed_packet limits and similar problems.
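
You can even check the server's limit up front and make sure your chunk
size stays safely under it (again just a sketch, same example connection):

<?php
// Pick a chunk size that is safely below the server's
// max_allowed_packet, so a single INSERT or SELECT of one segment
// can never hit the packet limit.
$db  = new mysqli('localhost', 'user', 'pass', 'files');
$row = $db->query("SHOW VARIABLES LIKE 'max_allowed_packet'")
          ->fetch_row();
$max_packet = (int)$row[1];

$chunk_size = (int)min(64 * 1024, $max_packet / 2);
printf("using %d byte chunks (server limit %d)\n",
       $chunk_size, $max_packet);
?>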

FTP uploads are especially interesting: the FTP server inserts one record
at a time (64 KB each) for the duration of the upload, which could take
hours depending on file size and transfer speed. HTTP uploads, by
contrast, are typically processed only after the upload is fully complete
(e.g. in PHP): the upload finishes and the webserver inserts hundreds or
thousands of rows as fast as it can, in a matter of seconds.

In both cases memory overhead stays at a minimum, since no more than one
chunk is ever held in memory at a time.
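
The HTTP/PHP side of that might look roughly like this (same hypothetical
file_segments table as above, 64k per row; 'upload' is an example form
field name):

<?php
// Rough sketch of the post-upload split: read the uploaded file 64k at
// a time and insert one row per chunk. Only one chunk is in memory at
// any point.
$db = new mysqli('localhost', 'user', 'pass', 'files');  // example creds

$file_id = 42;                                           // example id
$fp      = fopen($_FILES['upload']['tmp_name'], 'rb');
$seg_no  = 0;

$stmt = $db->prepare(
    'INSERT INTO file_segments (file_id, seg_no, data) VALUES (?, ?, ?)');
$stmt->bind_param('iis', $file_id, $seg_no, $chunk);  // by reference

while (!feof($fp)) {
    $chunk = fread($fp, 64 * 1024);
    if ($chunk === '' || $chunk === false) {
        break;
    }
    $stmt->execute();   // one small insert per 64k chunk
    $seg_no++;
}
$stmt->close();
fclose($fp);
?>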


On Wed, 7 Jan 2004, Steve Folly wrote:

>
> On 7 Jan 2004, at 21:51, [EMAIL PROTECTED] wrote:
>
> >
> > This article discusses it briefly:
> > http://php.dreamwerx.net/forums/viewtopic.php?t=6
>
> That's an interesting article, thanks. It's a similar table design to
> what I had in mind (hmmm... how different can these things be! :)
>
> I like the idea of splitting binary data into segments so as to reduce
> the load on the server. I assume this works because MySQL doesn't send
> all rows over the connection when the query completes, just the ones
> you ask for? (In this case, a segment at a time?)
>
>
>
> Steve.
>

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
