Ted Dunning wrote:
This factor of 1500 in speed seems pretty significant and is the motivation
for not supporting random read/write.
This doesn't mean that random-access updates should never be done, but it
does mean that scaling a design built around random access will be more
difficult than scaling a design based on sequential reads and writes.
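A back-of-the-envelope sketch of where a ratio on that order can come
from. The constants are illustrative assumptions (roughly 10 ms per
random seek, roughly 100 MB/s sequential throughput), not the numbers
computed earlier in this thread, and the class name is made up:

public class SeekVsScan {
    // Assumed device characteristics, for illustration only.
    static final double SEEK_SECONDS = 0.010;       // ~10 ms per random seek
    static final double SEQ_BYTES_PER_SEC = 100e6;  // ~100 MB/s sequential

    public static void main(String[] args) {
        long fileBytes = 100L * 1024 * 1024 * 1024; // a 100 GB data file
        long updates = 50_000;                      // pending small updates

        double randomCost = updates * SEEK_SECONDS;          // one seek each
        double rewriteCost = fileBytes / SEQ_BYTES_PER_SEC;  // one full scan

        System.out.printf("random: %.0f s, sequential rewrite: %.0f s%n",
                randomCost, rewriteCost);
        // One seek buys about 1 MB of sequential I/O here, so when a
        // random update touches only a few hundred bytes the per-byte
        // gap lands in the low thousands -- the same order as the
        // factor quoted above.
    }
}

With these assumed numbers, the full rewrite wins once more than roughly
fileBytes / 1 MB updates (about 100,000 here) are pending, which is the
usual argument for batching updates into the next sequential pass.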
On 4/3/08 12:07 PM, "Andrzej Bialecki" <[EMAIL PROTECTED]> wrote:
In general, if updates are relatively frequent and small compared to the
size of the data, then this could be useful.
Hehe ... yes, good calculations :) What I had in mind, though, when saying
"relatively frequent" was rather a situation where updates are usually
small and arrive at unpredictable intervals (e.g. picked up by a queue
listener) and then need to set flags on a few records. Running a
sequential update in the face of such minor changes usually doesn't pay
off, and queueing the changes until it does start to pay off is sometimes
not possible (it takes too long to fill the batch).
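The usual compromise for exactly this case is to bound the batch by time
as well as by size, so a trickle of updates gets flushed after a latency
budget even if the batch never fills. A minimal sketch; the class name
and the thresholds (BATCH_SIZE, MAX_WAIT_MS) are hypothetical, not from
any project discussed in this thread:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class FlagUpdateBatcher {
    // Hypothetical thresholds: flush at 1000 records or after 5 s,
    // whichever comes first.
    private static final int BATCH_SIZE = 1000;
    private static final long MAX_WAIT_MS = 5_000;

    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // Called by the queue listener for each record whose flags changed.
    public void submit(String recordId) {
        queue.add(recordId);
    }

    // Drains updates into batches; flushes on size or on the deadline,
    // so a slow trickle of updates is not held until the batch fills.
    public void runLoop() throws InterruptedException {
        List<String> batch = new ArrayList<>();
        long deadline = System.currentTimeMillis() + MAX_WAIT_MS;
        while (true) {
            long wait = deadline - System.currentTimeMillis();
            String id = queue.poll(Math.max(wait, 0), TimeUnit.MILLISECONDS);
            if (id != null) {
                batch.add(id);
            }
            boolean expired = System.currentTimeMillis() >= deadline;
            if (batch.size() >= BATCH_SIZE || (expired && !batch.isEmpty())) {
                flush(batch);
                batch = new ArrayList<>();
                deadline = System.currentTimeMillis() + MAX_WAIT_MS;
            } else if (expired) {
                deadline = System.currentTimeMillis() + MAX_WAIT_MS;
            }
        }
    }

    private void flush(List<String> batch) {
        // Stand-in: a real system would fold these flags into the next
        // sequential rewrite of the data file.
        System.out.println("flushing " + batch.size() + " flag updates");
    }
}

flush() here just prints; whether a time budget like 5 seconds is
acceptable, or still "takes too long to fill the batch", is precisely the
trade-off being discussed above.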
--
Best regards,
Andrzej Bialecki <><
Information Retrieval, Semantic Web :: Embedded Unix, System Integration
http://www.sigram.com  Contact: info at sigram dot com