On Wed, Mar 7, 2012 at 2:36 PM, Vinay Sajip <vinay_sa...@yahoo.co.uk> wrote:
> Armin Ronacher <armin.ronacher <at> active-4.com> writes:
>
>> What are you trying to argue?  That the overall Django test suite does
>> not do a lot of string processing, or does less of it with native strings?
>>
>> I'm surprised you see a difference at all over the whole Django
>> test suite, and I wonder why you get a slowdown at all for the ported
>> Django on 2.7.
>
> The point of the figures is to show there is *no* difference (statistically
> speaking) between the three sets of samples. Of course, any individual run or
> set of runs could be higher or lower due to other things happening on the
> machine (not that I was running any background tasks), so the idea of the
> simple statistical analysis is to determine whether these samples could all
> have come from the same population. According to ministat, they could have
> (with a 95% confidence level).
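
(As an aside, for anyone wanting to reproduce that kind of check: ministat is
essentially a two-sample Student's t-test, and a rough Python equivalent, shown
here with made-up timing numbers, would be something like this:)

# Rough Python analogue of the ministat comparison described above: a
# two-sample Student's t-test asking whether two sets of timings could
# plausibly have come from the same population.  The numbers are made up.
from scipy import stats

runs_unicode = [142.1, 140.8, 143.5, 141.9, 142.7]  # hypothetical seconds
runs_native = [141.6, 142.9, 140.5, 143.1, 142.0]   # hypothetical seconds

t_stat, p_value = stats.ttest_ind(runs_unicode, runs_native)
# At the 95% confidence level, p >= 0.05 gives no grounds to claim the
# two sets differ -- i.e. "no difference, statistically speaking".
print("t = %.3f, p = %.3f" % (t_stat, p_value))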

But what you are running is not really benchmarking anything. As far as I
know, the Django test suite mostly exercises DB creation and deletion
(though the balance might differ between CPython and PyPy). How about
running *actual* Django benchmarks instead of the test suite?
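
To make that concrete, a minimal sketch of the kind of harness that would
produce comparable numbers might look like the following (the workload
function is just a placeholder; a real run would exercise Django itself,
e.g. template rendering or ORM queries):

# Minimal sketch of a benchmark harness (not djangobench itself): time a
# workload repeatedly and print one wall-clock figure per line, which can
# then be fed to ministat or to a t-test like the one above.
import time

def workload():
    # Placeholder only -- a real benchmark would exercise Django here,
    # e.g. template rendering or ORM queries, not a toy loop.
    return sum(i * i for i in range(100000))

def bench(func, repeat=10):
    timings = []
    for _ in range(repeat):
        start = time.time()
        func()
        timings.append(time.time() - start)
    return timings

if __name__ == "__main__":
    for t in bench(workload):
        print("%.6f" % t)

Running that once per interpreter/configuration and comparing the resulting
numbers would measure something closer to string-handling throughput than
the test suite does.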

Not that proving anything is necessary, but if you are going to try to prove
something, do it right.

Cheers,
fijal