On Thursday 25 May 2006 09:25, [EMAIL PROTECTED] wrote:
> well after 5 years of data, just because it shares basically the same
> algorithm for search, which is to use index files that are either
> sorted and binary searched, or arranged as hash buckets or btrees, and
> the algorithm technology stayed the same for the last 20 years (and isn't
> likely going to change either).

Ah, but it has got much better.

E.g., modern RDBMSs often do not store the "columns" of a "row" together in 
a file-analogue unit. That way, more often than not, an entire "column" can 
be cached in RAM and searched "full text" with e.g. the Boyer-Moore 
algorithm at very little cost. Sometimes it is "cheaper" to just pack the 
first 4 or 8 bytes of any "column" that gets searched regularly by an x* 
wildcard (meaning the beginning of a term is searched for) into a memory 
array and scan that array sequentially, avoiding the costs of sorting 
(or tree organizing), and then look up the position in the "real data" in a 
second array. That is often the most successful strategy, especially with 
variable-length "columns".

Have a look behind the curtains of Postgres and marvel: the "genetic query 
optimizer" constantly rearranges query plans (join orders, in particular) to 
improve query performance, and the storage layer is organized for 
multi-level concurrent access. The same goes for every other modern RDBMS.

Yes, ultimately, at the end of the day, there will be some sort of flat-file 
equivalent for permanent storage even in properly efficient and 
concurrency-safe database systems - but only in the same way as every car 
has wheels (for now). But a wheel doesn't make a car.

MD2 in this analogy is a wheel pretending to be a car, and users should not 
be surprised if they get their bottoms dirty without getting anywhere, 
riding the wheel instead of the car. But oh, the wonders of simplicity - 
look, you don't need a driver's license for your wheel!

Horst
_______________________________________________
Gpcg_talk mailing list
[email protected]
http://ozdocit.org/cgi-bin/mailman/listinfo/gpcg_talk