On Sun, Jun 30, 2013 at 8:30 AM, Atri Sharma <atri.j...@gmail.com> wrote:
> I have been reading the recent discussion and was researching a bit, and I
> think that we should really go with the idea of randomising the input data
> (if it is not completely presorted), to ensure that we do not get quadratic
> complexity.

That doesn't ensure any such thing.  It just makes it less likely.
But we have existing guards that also make that unlikely, so I'm not
sure what we'd be gaining.
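
To make that concrete: even picking the pivot uniformly at random only
gets you O(n lg n) in expectation.  A minimal sketch of what
"randomising" would mean here (plain C, illustrative names, not our
qsort code):

#include <stdlib.h>

/*
 * Pick a pivot index uniformly at random.  This yields O(n lg n)
 * expected time, but any individual run can still degrade to O(n^2)
 * if the draws are consistently bad: randomization changes the
 * probability of the worst case, it does not remove it.
 */
static size_t
random_pivot_index(size_t n)
{
    return (size_t) rand() % n;
}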

> One easy way to do that could be to take a sample of the data set and take
> a pivot out of it. A still better way could be to take multiple samples
> spread over the data set, select a value from each of them, and then take
> a cumulative pivot (the median, maybe).

We pretty much do that already.
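
Specifically, our qsort is essentially the Bentley & McIlroy
"Engineering a Sort Function" scheme: the pivot is the median of three
elements, and on larger inputs the median of three such medians, with
the sample points spread across the array.  Roughly like this (ints
only, illustrative names, not the actual code):

#include <stddef.h>

/* Median of three values, Bentley & McIlroy style. */
static int
med3(int a, int b, int c)
{
    return a < b ?
        (b < c ? b : (a < c ? c : a)) :
        (b > c ? b : (a > c ? c : a));
}

/*
 * Choose a pivot: plain median-of-three for small arrays; for larger
 * ones, the median of three medians ("ninther") taken from samples
 * spread across the whole array.
 */
static int
choose_pivot(const int *v, size_t n)
{
    size_t mid = n / 2;

    if (n < 40)
        return med3(v[0], v[mid], v[n - 1]);

    {
        size_t d = n / 8;
        int lo = med3(v[0], v[d], v[2 * d]);
        int m  = med3(v[mid - d], v[mid], v[mid + d]);
        int hi = med3(v[n - 1 - 2 * d], v[n - 1 - d], v[n - 1]);

        return med3(lo, m, hi);
    }
}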

> This shouldn't be too complex, and should give us a fixed n log n
> complexity even for wild data sets, without affecting the normal data sets
> that are present in everyday transactions. I even believe that those data
> sets will also benefit from the above optimisation.

The only method of selecting a pivot for quicksort that obtains O(n lg
n) run time with 100% certainty is to have a magical oracle inside the
computer that tells you, in fixed time and with perfect accuracy, which
pivot you should select.

If you want to get a useful response to your emails, consider
including a statement of what you think the problem is and why you
think your proposed changes will help.  Consider offering a test case
that performs badly and an analysis of the reason why.
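
For instance, a small harness along these lines (a sketch; the input
generator is a placeholder for whatever pattern you believe
misbehaves, and the names are made up) would let others reproduce
your timings:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static int
cmp_int(const void *a, const void *b)
{
    int x = *(const int *) a;
    int y = *(const int *) b;

    return (x > y) - (x < y);
}

int
main(void)
{
    size_t  n = 1000000;
    int    *v = malloc(n * sizeof(int));
    clock_t t0, t1;
    size_t  i;

    if (v == NULL)
        return 1;

    /* Placeholder: substitute the supposedly bad input here,
     * e.g. mostly-sorted data or data with heavy duplicates. */
    for (i = 0; i < n; i++)
        v[i] = (int) (i % 16);

    t0 = clock();
    qsort(v, n, sizeof(int), cmp_int);
    t1 = clock();

    printf("sorted %zu ints in %.3f s\n",
           n, (double) (t1 - t0) / CLOCKS_PER_SEC);
    free(v);
    return 0;
}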

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

