I wonder whether using REPLACE instead of UPDATE would work around
this issue, or at least make it less noticeable.
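
Something like this, maybe (just a sketch against the db_test table from
the original post; I haven't tested whether REPLACE actually lays the row
down in one contiguous block):

  -- read the current value and build the new value on the client side ...
  SELECT vara FROM db_test WHERE ID = 1;

  -- ... then rewrite the whole row in one statement, instead of
  -- growing the blob in place with UPDATE:
  REPLACE INTO db_test (ID, vara) VALUES (1, 'the full new value');
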
On Tuesday, July 30, 2002, at 10:27 AM, Daniel Brockhaus wrote:
> Hi there,
>
> here's something everyone using variable length records (varchar, text,
> blob) should know:
At 15:56 30.07.02 +0100, you wrote:
> > Yes. Of course. But that's just another way to work around the problem,
> > you know? I mean, who wants to have to take a database down for an hour
> > at least once a week?
>
>You have to take it down? I run optimize table on every table at
>midnight every night.
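
A nightly job for that can be as small as this (db_test and the "test"
database name are just taken from this thread; the crontab line is a
sketch, not the poster's actual setup):

  # hypothetical crontab entry: rebuild/defragment the table at midnight
  0 0 * * * mysql test -e "OPTIMIZE TABLE db_test"

Note that OPTIMIZE TABLE still locks the table while it rebuilds it, so
"no downtime" here really means a lock short enough not to matter.
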
At 09:43 30.07.02 -0500, you wrote:
>At 16:27 +0200 7/30/02, Daniel Brockhaus wrote:
>[...]
>>
>>Whoa. Each record has been split into 1000 (one thousand!) blocks.
>>Reading one of these records would require 1000 reads from your hard disk.
>>That's about 14 seconds to read a record of 16K length.
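
(That total presumably assumes something like 14 ms per random disk read,
which was typical for drives of that era: 1000 reads x 14 ms is about
14 seconds. Only the total is stated above; the per-read time is an
assumption.)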
At 16:27 +0200 7/30/02, Daniel Brockhaus wrote:
>Hi there,
>
>here's something everyone using variable length records (varchar,
>text, blob) should know:
>
>
>Create a table containing at least one blob:
>
>> create table db_test (
>> ID int not null,
>> vara blob,
>> primary key (ID)
>> );
Hi there,
here's something everyone using variable length records (varchar, text,
blob) should know:
Create a table containing at least one blob:
> create table db_test (
> ID int not null,
> vara blob,
> primary key (ID)
> );
Insert two records:
> insert db_test values