On Tuesday, 31 December 2013 at 19:53:29 UTC, Ola Fosheim Grøstad wrote:
> On Tuesday, 31 December 2013 at 17:52:56 UTC, Chris Cain wrote:
>> 1. The compiler writer will actually do this analysis and write the optimization (my bets are that DMD will likely not do many of the things you suggest).

> I think many optimizations become more valuable when you start doing whole-program analysis.

You're correct, but I think the value only comes if it's actually done, which was my point.

>> 2. The person writing the code is writing code that is allocating several times in a deeply nested loop.

> The premise of efficient high level/generic programming is that the optimizer will undo naive code.

Sure. My point was that it's a very specific situation in which the optimization would actually work effectively enough to be significant enough to outweigh the advantages of using a library solution. If there were no tradeoffs to using a compiler-supported `new`, then even a tiny smidge of an optimization here and there would be perfectly reasonable. Unfortunately, that's not the case. The only times where I think the proposed optimization is significant enough to overcome the tradeoff are precisely the type of situation I described.

Note I'm _not_ arguing that performing optimizations is irrelevant. Like you said, "The premise of efficient high level/generic programming is that the optimizer will undo naive code." But that is _not_ the only facet that needs to be considered here. If it were, you'd be correct, and we should recommend only using `new`. But since there are distinct advantages to a library solution and distinct disadvantages to the compiler solution, the fact that you _could_, with effort, make small optimizations on occasion just isn't enough to overturn the other tradeoffs you're making.

>> 3. Despite the person making the obvious critical error of allocating several times in a deeply nested loop, he must not have made any other significant errors, or those other errors must also be covered by optimizations.

> I disagree that inefficiencies due to high-level programming are a mistake if the compiler has the opportunity to get rid of them. I wish D would target high-level programming in the global scope and low-level programming in limited local scopes. I think few applications need hand optimization globally, except perhaps raytracers and compilers.

You seem to be misunderstanding my point again. I'm _not_ suggesting D not optimize as much as possible, and I'm not suggesting everyone "hand optimize" everything. Following my previous conditions, this condition is simply saying that there aren't any other significant problems that would minimize the effect of your proposed optimization.

So, _if_ the optimization is put in place, and _if_ the code in question is deeply nested enough to take a significant amount of time, so that your proposed optimization has a chance to actually be useful, then we have to ask the question "are there any other major problems that are also taking up significant time?" If the answer is "Yes, there are other major problems," then your proposed speed-up seems less likely to matter. That's where I was going.
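To make the scenario concrete, here is a minimal sketch (my own illustration, not code from the thread) of the nested-loop allocation pattern under discussion, and the manual rewrite that the proposed optimization would have to discover on its own. The `Point` type and both function names are hypothetical:

```d
// Hypothetical example: a per-iteration GC allocation inside a nested
// loop, versus the trivially hoisted version using a stack value.
struct Point { double x, y; }

double naive(double[][] rows)
{
    double sum = 0;
    foreach (row; rows)
        foreach (v; row)
        {
            auto p = new Point(v, v); // allocates on every inner iteration
            sum += p.x + p.y;
        }
    return sum;
}

double hoisted(double[][] rows)
{
    double sum = 0;
    Point p;                          // plain stack value; no allocation at all
    foreach (row; rows)
        foreach (v; row)
        {
            p = Point(v, v);
            sum += p.x + p.y;
        }
    return sum;
}
```

The point of the sketch is how small the manual fix is: when the hot spot really is this localized, hand-hoisting the allocation is cheap, which is why the compiler-side optimization buys little here.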

>> Manual optimization in this case isn't too unreasonable.

> I think manual optimization in most cases should be provided by the programmer as compiler hints and constraints.

In some cases, yes. An "inline" hint, for instance, makes a ton of sense. Are you suggesting that there should be a hint provided to `new`? Something like: `Something thing = new (@stackallocate) Something(arg1,arg2);`? If so, it seems like a really roundabout way to do it when you could just write `Something thing = stackAlloc.make!Something(arg1,arg2);`. I don't see hints to `new` as being an advantage at all. All it means is that to add additional "hint" allocators, you'd have to dive into the compiler (and the language spec) as opposed to easily writing your own as a library.
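To show why no compiler support is needed, here is a rough sketch of what such a library allocator could look like. `StackAllocator` and its `make` are hypothetical (the thread predates the finalized std.allocator design); the only point is that `stackAlloc.make!Something(arg1, arg2)` is expressible entirely in user code:

```d
// Hypothetical library stack allocator: a fixed in-struct buffer plus a
// `make` helper. Adding another "hint" allocator means writing another
// struct like this one, not changing the compiler.
import std.conv : emplace;

struct StackAllocator(size_t capacity)
{
    private void[capacity] buffer = void;
    private size_t used;

    void[] allocate(size_t n)
    {
        // round the start up for word alignment; fail when the region is full
        enum al = size_t.sizeof;
        immutable start = (used + al - 1) / al * al;
        if (start + n > capacity) return null;
        used = start + n;
        return buffer[start .. start + n];
    }

    T* make(T, Args...)(auto ref Args args)
    {
        auto mem = allocate(T.sizeof);
        return mem is null ? null : emplace!T(cast(T*) mem.ptr, args);
    }
}

struct Something { int a, b; }

void demo()
{
    StackAllocator!1024 stackAlloc;
    auto thing = stackAlloc.make!Something(1, 2); // the library call site
    assert(thing.a == 1 && thing.b == 2);
}
```

Note this toy never frees and ignores per-type alignment beyond word size; a real allocator would handle both, but the call-site syntax stays the same.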

>> Think of replacing library calls when it's noticed that it's an allocate function. It's pretty dirty and won't actually happen (nor do I suggest it should), but it is still _possible_.

> Yes, why not? As long as the programmer has the means to control it. Why not let the compiler choose allocation strategies based on profiling, for instance?

Uhh... You don't see the problem with the compiler tying itself to the implementation of a library allocate function? Presumably such a thing would _only_ be done for the default library allocator, since when the programmer says "use std.allocator.StackAllocator" he generally means it. And I find the whole idea of the compiler hard-coding "if the programmer uses std.allocator.DefaultAllocator.allocate, then instead of emitting that function call, do ..." to be more than a bit ugly. Possible, but horrific.
