#18687: Randomly failing test: test_performance_scalability
---------------------------------+------------------------------------
     Reporter:  aaugustin        |                     Owner:  nobody
         Type:  Bug              |                    Status:  new
    Component:  Core (Other)     |                   Version:  master
     Severity:  Release blocker  |                Resolution:
     Keywords:                   |              Triage Stage:  Accepted
    Has patch:  0                |       Needs documentation:  0
  Needs tests:  0                |  Patch needs improvement:   0
Easy pickings:  0                |                     UI/UX:  0
---------------------------------+------------------------------------
Comment (by akaariai):

We should probably test something other than the proportion of total
runtime. I think the underlying problem is that we are testing on a
machine that is running a lot of other tests at the same time. This can
cause spikes in the measured times, and those spikes lead to false
positives. The problem is not imprecise measurement; it is that the
runtime of the same code is not guaranteed to stay the same between
consecutive runs.

So maybe doing 9 significantly shorter runs of each iteration count in
alternating fashion, and then comparing the medians, would work better?
Linear scaling should give an exponent of about 1.0, so asserting that it
stays below 1.1 leaves some headroom. In code, something like:

{{{
import math

n1, n2 = 1000, 10000
runs_1000 = []
runs_10000 = []
# Alternate the short and long runs so that a load spike on the test
# machine hits both sample sets roughly equally.
for _ in range(9):
    runs_1000.append(elapsed('pbkdf2("password", "salt", iterations=%d)' % n1))
    runs_10000.append(elapsed('pbkdf2("password", "salt", iterations=%d)' % n2))
runs_1000.sort()
runs_10000.sort()
# With 9 sorted samples, index 4 is the median, which is far less
# sensitive to occasional spikes than a single measurement.
t1 = runs_1000[4]
t2 = runs_10000[4]
# If runtime scales as iterations ** k, then k = log(t2 / t1) / log(n2 / n1).
measured_scale_exponent = math.log(t2 / t1, n2 / n1)
self.assertLess(measured_scale_exponent, 1.1)
}}}

I haven't actually tested the above.
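For reference, the snippet assumes an elapsed() helper that times a single
statement. The name comes from the snippet itself, but the timeit-based
implementation below is an assumption, not the helper actually defined in
the test module. A minimal sketch:

{{{
import timeit

def elapsed(stmt):
    # Hypothetical helper: measure one execution of stmt. Assumes pbkdf2
    # is importable from django.utils.crypto.
    return timeit.timeit(stmt,
                         setup='from django.utils.crypto import pbkdf2',
                         number=1)
}}}

With number=1 each call is timed individually, which is what the
alternating loop needs in order to collect 9 independent samples per
iteration count.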