Hi all,

I have been following the recent discussion and did some research, and I
think we should really go with the idea of randomising the input data (if it
is not already completely presorted), to ensure that we do not get quadratic
complexity.

One easy way to do that would be to take a sample of the data set and pick a
pivot out of it. An even better way would be to take multiple samples spread
across the data set, select a value from each of them, and then derive a
combined pivot from those values (their median, perhaps).
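
Just to illustrate what I have in mind (only a rough sketch, not actual
PostgreSQL code; choose_pivot, NSAMPLES and cmp_int are made-up names),
picking the median of a handful of values taken at evenly spaced positions
could look roughly like this:

/*
 * Rough sketch only: pick a quicksort pivot as the median of NSAMPLES
 * elements taken at evenly spaced positions in a[lo..hi].
 */
#include <stdio.h>
#include <stdlib.h>

#define NSAMPLES 5

static int
cmp_int(const void *a, const void *b)
{
    int ia = *(const int *) a;
    int ib = *(const int *) b;

    return (ia > ib) - (ia < ib);
}

static int
choose_pivot(const int *a, size_t lo, size_t hi)
{
    int     samples[NSAMPLES];
    size_t  n = hi - lo + 1;
    size_t  step = (n > NSAMPLES) ? n / NSAMPLES : 1;

    for (int i = 0; i < NSAMPLES; i++)
        samples[i] = a[lo + (i * step) % n];

    /* sorting a constant-size sample is O(1); its middle element is the median */
    qsort(samples, NSAMPLES, sizeof(int), cmp_int);
    return samples[NSAMPLES / 2];
}

int
main(void)
{
    int     data[] = {9, 1, 8, 2, 7, 3, 6, 4, 5, 0};

    printf("pivot = %d\n", choose_pivot(data, 0, 9));
    return 0;
}

The point is that a constant number of samples keeps the pivot choice cheap
while making presorted or adversarial inputs much less likely to hit the
quadratic worst case.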

In any case, I really think that if we do not go with the above ideas, we
should somehow factor in the degree of randomness of the input data when
making the decision between quicksort and external merge sort for a set of rows.
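
To make "degree of randomness" a bit more concrete (again just a sketch with
a made-up name, estimate_disorder, not tied to any existing code), one cheap
estimate is the fraction of sampled adjacent pairs that are out of order:

#include <stddef.h>

/*
 * Rough sketch only: estimate how disordered an array is by checking a
 * sample of adjacent pairs.  Returns the fraction of checked pairs that
 * are out of order: ~0.0 for presorted input, around 0.5 for random input.
 */
double
estimate_disorder(const int *a, size_t n, size_t nsamples)
{
    size_t  out_of_order = 0;
    size_t  checked = 0;
    size_t  step = (nsamples > 0 && n > nsamples) ? n / nsamples : 1;

    for (size_t i = 0; i + 1 < n; i += step)
    {
        if (a[i] > a[i + 1])
            out_of_order++;
        checked++;
    }

    return checked ? (double) out_of_order / checked : 0.0;
}

Something like this, run over a sample or prefix of the input, could then
feed the quicksort-vs-merge-sort decision, though the exact threshold would
of course need benchmarking.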

This shouldn't be too complex to implement, and it should give us O(n log n)
behaviour even for pathological data sets, without hurting the ordinary data
sets we see in everyday transactions. I believe those data sets would also
benefit from the above optimisation.

Thoughts/Comments?

Regards,
Atri

Sent from my iPad
