== Quote from retard (r...@tard.com.invalid)'s article
> Wed, 09 Jun 2010 01:13:43 -0400, Nick Sabalausky wrote:
>
> > "retard" <r...@tard.com.invalid> wrote in message
> > news:hun6ok$13s...@digitalmars.com...
> >> Tue, 08 Jun 2010 16:14:51 -0500, Andrei Alexandrescu wrote:
> >>
> >>> On 06/08/2010 04:05 PM, Walter Bright wrote:
> >>>> Andrei Alexandrescu wrote:
> >>>>> On 06/08/2010 01:27 PM, "Jérôme M. Berger" wrote:
> >>>>>> Please define "reasonable performance"...
> >>>>>
> >>>>> Within 15% of hand-optimized code specialized for the types at hand.
> >>>>
> >>>> I would have said O(n) or O(log n), as opposed to, say, O(n*n).
> >>>>
> >>>> General rules for performance improvements:
> >>>>
> >>>> 1. nobody notices a 10% improvement
> >>>>
> >>>> 2. users will start noticing speedups when they exceed 2x
> >>>>
> >>>> 3. a 10x speedup is a game changer
> >>>
> >>> max of n elements is O(n).
> >>
> >> This probably means that D 2 won't be very efficient on multicore until
> >> the authors learn some basic parallel programming skills. Now where did
> >> you get your PhD - I'm collecting a list of toy universities people
> >> should avoid.
> >
> > You used to have meaningful things to say. Now you're just trolling.
> Max of n unordered elements can be solved in O(log log n) time assuming
> you have enough cores and constant time memory access. Happy now?
Technically true, but when's the last time you needed to find the max of n unordered elements, where n was large enough to justify the overhead of spinning up even two threads? Even if you use some kind of thread pool, you still pay the synchronization overhead of splitting the work and merging the partial results.
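To make the overhead concrete, here's a minimal sketch (in Python, since the thread above has no code; `parallel_max` and its parameters are hypothetical names, not anything from D's standard library). It splits the input into chunks, takes the max of each chunk on a worker, then merges the partial results — every chunk hand-off and the final merge is exactly the synchronization cost being discussed, which for ordinary n dwarfs a single linear scan:

```python
# Hypothetical sketch of a fork/join parallel max.
# Note: in CPython the GIL prevents real CPU parallelism here; the
# point is only to show where the coordination overhead lives.
from concurrent.futures import ThreadPoolExecutor


def parallel_max(xs, workers=2):
    """Max of xs via `workers` chunks. O(n) work, plus per-chunk
    scheduling/synchronization and a final O(workers) merge."""
    if len(xs) < 2:
        return xs[0]
    chunk = (len(xs) + workers - 1) // workers  # ceiling division
    parts = [xs[i:i + chunk] for i in range(0, len(xs), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # One synchronization point per chunk to collect partial maxima...
        partial = list(pool.map(max, parts))
    # ...plus a final sequential merge of the partial results.
    return max(partial)
```

Unless each element is expensive to compare or n is enormous, a plain `max(xs)` beats this: the thread startup, chunking, and result collection are pure overhead on top of the same O(n) comparisons.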