On Thursday, 4 August 2016 at 04:41:43 UTC, Chris Wright wrote:
> In your example, you have a size_t or double factor for array growth. If you set it to -1, you would have an unpleasant time. You need a way to specify the range of valid values. In more complex algorithms, you need a way to evaluate several parameters together to determine if they are valid.


Yes, I mentioned that, but there is no ideal way to do it. I think it shouldn't be done in code, since that clutters the code. The parameter tuples are the groupings; the compiler can infer groupings from source-code closeness (if three parameters show up on the same line, it's safe to assume they go together). Ultimately, grouping isn't a big deal, because one can view the parameter space of tuples as flat.

> Since you're not always optimizing for the same thing (time vs memory; wasted memory vs number of reallocations; for approximation algorithms, how good an approximation is generated), you need a way to specify that as well.

Well, it depends. If performance means CPU cycles and memory, then those are relatively easy to measure, although absolute accuracy would be required.
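As a rough sketch, measuring those two quantities in D could look something like this, using std.datetime.stopwatch for timing and the GC statistics for heap usage; workload() is just a placeholder for whatever code is being tuned:

import std.datetime.stopwatch : StopWatch, AutoStart;
import core.memory : GC;
import std.stdio : writefln;

void workload()
{
    // stand-in for the code whose parameters are being tuned
    int[] a;
    foreach (i; 0 .. 100_000)
        a ~= i;
}

void main()
{
    auto before = GC.stats();            // GC heap usage before the run
    auto sw = StopWatch(AutoStart.yes);

    workload();

    sw.stop();
    auto after = GC.stats();

    writefln("time: %s usecs, heap growth: %s bytes",
             sw.peek.total!"usecs",
             after.usedSize - before.usedSize);
}

Neither number is perfectly accurate (a collection may run in between, and wall-clock time is noisy), which is where the accuracy caveat above comes in.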

> Putting that in the language would be horribly complex. It would be much simpler for everyone if it were in a library instead. I could probably code up a basic version in a couple hours. Which I'll do for US$150 -- below market rate, especially since I'd continue supporting it.

Or you could pay me $150 and I'd do it. But I have better things to do, and without compiler support or a preprocessor I'm not interested in the code complexity a library solution would have. Having to specify every parameter with some long sequence of characters just makes it messy.

I would have to disagree about the language complexity, though. At most, the hardest part would be figuring out what syntax to use for the symbols.

$(10,0,100,1)

could be used. Whenever the compiler comes across such a construct, it simply:

1. Creates an "anonymous" variable with value 10.
2. Emits something like

test.d#43(10,0,100,1), 0x043245

(file,line,default value,min,max,step,address)

One could allow for group indexing, or "closeness" by code: e.g., $(10,0,100,1,43) is in group 43. The 0x043245 above is the address of the variable.

That would be it on the compiler side. The optimizer then has enough information to work from.
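Roughly, each emitted record could be modelled as something like the following; the field names here are invented, just to show what the optimizer would have to work with:

// Hypothetical shape of one emitted record (names are illustrative).
struct OptRecord
{
    string file;     // e.g. "test.d"
    int    line;     // e.g. 43
    double def;      // default value, 10 in the example
    double min;      // lower bound, 0
    double max;      // upper bound, 100
    double step;     // search granularity, 1
    int    group;    // optional grouping index, e.g. 43
    void*  address;  // where the live value is stored, e.g. 0x043245
}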

While such a method may not give the optimizer enough information to work effectively, in theory it would be enough to find the optimal parameters in the long term.

One could probably get away with two or three lines of code per variable that do the same:

auto x = Opt_Var(10, 0, 100, 1, 43);

then use x in the code. The hard part might be getting the relative and absolute addresses set up properly for the optimizer.
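A minimal sketch of what such an Opt_Var wrapper might look like, assuming a global registry that an external optimizer reads and writes (all names here are invented):

// Hypothetical library-only version: each Opt_Var registers its bounds and
// the address of its live value in a global table the optimizer can walk.
struct OptParam
{
    double min, max, step;
    int group;
    double* address;   // where the live value sits
}

__gshared OptParam[] optRegistry;

struct Opt_Var
{
    double value;

    this(double def, double min, double max, double step, int group = 0)
    {
        value = def;
        optRegistry ~= OptParam(min, max, step, group, &value);
    }

    alias value this;   // lets x behave like a plain double in expressions
}

With that, auto x = Opt_Var(10, 0, 100, 1, 43); works as written above, though registering &value like this only holds up if x never moves, which is exactly the address problem just mentioned.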

It might work, but it's still a bit cluttered IMO.

If you feel like implementing it for the benefit of humanity, be my guest. Otherwise I'll end up trying it myself when I get around to it.


> Furthermore, your idea of changing compile-time constants without recompiling the program is unworkable. Constants are folded and inlined. That "* 2" might turn into "<< 1". A loop based on a constant can be unrolled.

This is why I said they were variables that are effectively constants. They obviously can't be true constants; we have to have an address that holds the value so the optimizer can change it.
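In D terms, the difference is roughly between an enum, which gets folded into the generated code, and a __gshared variable, which keeps an address that an external optimizer can rewrite at run time. The names below are just illustrative, borrowing the array-growth factor from the example above:

enum growthFactorConst = 2.0;          // manifest constant: folded/inlined,
                                       // "* 2" may become "<< 1", nothing left to change

__gshared double growthFactor = 2.0;   // lives at a fixed address; uses are real
                                       // loads, so the value can be changed externally

The tunable values have to stay as loads from memory rather than folded constants, which is the whole point of calling them variables.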

> You suggested that we can do a random walk on values to find optimal values. This will find a local minimum, which might not be the globally optimal value. Something like simulated annealing would work better in the general case. Simulated annealing is a fair bit more complex than a random walk; it's not going to be implemented inside druntime.

Well, there are many optimization methods; that isn't the point. The optimizer could be exposed for modification, and different methods could be used. My main point was to introduce the idea. I have never seen it mentioned in the literature before. The ability for a program to slowly optimize itself over its life cycle seems to be a novel concept.
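As a sketch of what a pluggable method could look like, here is one random-walk step over the registry from the Opt_Var sketch above (all of this is illustrative; simulated annealing or anything else would simply replace this function):

import std.random : uniform;
import std.algorithm : clamp;

// One random-walk step: nudge a randomly chosen parameter by +/- step and
// keep the change only if the measured cost (time, memory, ...) improves.
void randomWalkStep(double delegate() measureCost)
{
    if (optRegistry.length == 0) return;

    auto p = &optRegistry[uniform(0, optRegistry.length)];
    double old = *p.address;
    double costBefore = measureCost();

    double delta = uniform(0, 2) == 0 ? p.step : -p.step;
    *p.address = clamp(old + delta, p.min, p.max);

    if (measureCost() > costBefore)
        *p.address = old;   // got worse, revert
}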

The benefit of such a thing? Not sure; it may only improve performance by 1 or 2%. The real benefit? Who knows, but as time goes on it might become really useful, become part of the profiling process, and find better ways to optimize programs that give better results.




