So maybe someone should explain to Elliot *why* his own benchmarks are not trustworthy, rather than just repeating "use perf or timeit". Actually, there are two things: (a) when something new comes along, it *always* needs to prove beyond a shadow of a doubt that it is actually an improvement and not a timing artifact or a trick; (b) you can't time sorting 10 values *once* and get a useful result -- you have to do it many times. And you have to make sure that creating a list of 10 random values isn't counted as part of your test; that's tricky, since random() isn't all that fast, but it has to be done.
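For the record, a minimal sketch of that methodology with the stdlib timeit module (the list size matches the example above; the repeat and iteration counts are arbitrary illustrative choices, not anything from the thread):

    import random
    import timeit

    # Pre-generate the random data once, so the cost of random() is not
    # measured as part of the sort itself.
    data = [random.random() for _ in range(10)]

    # sorted() copies its input internally, so every iteration sorts fresh,
    # unsorted data; timing data.sort() directly would sort an already-sorted
    # list on every run after the first.
    timings = timeit.repeat(
        "sorted(data)",
        globals={"data": data},
        number=1_000_000,  # a single 10-element sort is far too fast to time
        repeat=5,
    )
    print(min(timings))  # the minimum of the runs is the least noisy estimate

The copy made by sorted() adds a constant overhead, but it's the same constant for both implementations being compared, so it doesn't bias the comparison.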
Although Elliot had it coming when he used needlessly offensive language in his first post.

On Mon, Oct 10, 2016 at 5:09 PM, Steven D'Aprano <st...@pearwood.info> wrote:
> On Mon, Oct 10, 2016 at 09:16:32PM +0000, Elliot Gorokhovsky wrote:
>
>> Anyway, benchmarking technique aside, the point is that it works well
>> for small lists (i.e. doesn't affect performance).
>
> You've been shown that there is something suspicious about your
> benchmarking technique, something that suggests that the timing results
> aren't trustworthy. Until you convince us that your timing results are
> reliable and trustworthy, you shouldn't be drawing *any* conclusions
> about your fastsort versus the standard sort.
>
> --
> Steve

--
--Guido van Rossum (python.org/~guido)