In your example, you have a size_t or double factor for array growth. If 
you set it to -1, you would have an unpleasant time: a size_t silently 
wraps around to a huge positive value, and a negative double factor 
shrinks the array instead of growing it. You need a way to specify the 
range of valid values. In more complex algorithms, you need a way to 
evaluate several parameters together to determine whether they are 
valid.
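
To make that concrete, here's a rough sketch of what I mean (the names 
GrowthPolicy and valid are invented for illustration, not an existing 
API):

struct GrowthPolicy
{
    double factor = 1.5;   // multiplicative growth factor
    size_t pad = 0;        // fixed extra slots added per growth

    // Per-parameter range checks, plus a check across both parameters:
    // factor == 1.0 is only acceptable if pad still enlarges the array.
    bool valid() const
    {
        if (!(factor >= 1.0))           // also rejects NaN
            return false;
        if (factor == 1.0 && pad == 0)  // would never grow at all
            return false;
        return true;
    }
}

void main()
{
    assert(GrowthPolicy(2.0, 0).valid);
    assert(!GrowthPolicy(-1.0, 0).valid); // the -1 case from above
    assert(!GrowthPolicy(1.0, 0).valid);  // invalid only in combination
}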

Since you're not always optimizing for the same thing (time vs. memory; 
wasted memory vs. number of reallocations; for approximation algorithms, 
how good an approximation is acceptable), you need a way to specify the 
optimization goal as well.
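
Something as simple as an enum of goals plus a cost function would do 
for a first cut (again, invented names, just to show the shape):

enum Goal { minimizeTime, minimizeMemory, minimizeReallocations }

// The tuner would pick whichever parameter values minimize this.
double cost(Goal goal, double seconds, size_t bytesWasted, size_t reallocs)
{
    final switch (goal)
    {
        case Goal.minimizeTime:          return seconds;
        case Goal.minimizeMemory:        return bytesWasted;
        case Goal.minimizeReallocations: return reallocs;
    }
}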

Putting that in the language would be horribly complex. It would be much 
simpler for everyone if it were in a library instead. I could probably 
code up a basic version in a couple of hours. Which I'll do for US$150 -- 
below market rate, especially since I'd continue supporting it.

Furthermore, your idea of changing compile-time constants without 
recompiling the program is unworkable. Constants are folded and inlined, 
so there may be no single location in the binary left to patch. That 
"* 2" might turn into "<< 1". A loop with a constant trip count can be 
unrolled entirely.
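
A small example of what happens to such "constants" (assuming the usual 
optimizations are on; exact codegen varies by compiler):

enum growthFactor = 2; // compile-time constant

size_t grow(size_t n)
{
    // Folded at compile time; compilers typically emit this as n << 1,
    // so no literal 2 survives in the binary for you to patch.
    return n * growthFactor;
}

size_t sumFirst(size_t n)
{
    size_t total;
    // With a constant trip count, the optimizer is free to unroll this
    // loop completely; the constant dissolves into straight-line code.
    foreach (i; 0 .. growthFactor)
        total += n + i;
    return total;
}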

You suggested doing a random walk over parameter values to find the 
optimal ones. That will find a local minimum, which might not be the 
global optimum. Something like simulated annealing would work better in 
the general case, but it's a fair bit more complex than a random walk; 
it's not going to be implemented inside druntime.
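
Here's a toy sketch of the accept-worse-moves idea that distinguishes 
annealing from a plain walk (the cost delegate, step size, and cooling 
schedule are all placeholders; a real tuner would need far more care):

import std.math : exp;
import std.random : uniform;
import std.stdio : writeln;

double anneal(double delegate(double) cost, double start)
{
    double x = start, best = start;
    for (double temp = 1.0; temp > 1e-3; temp *= 0.95)
    {
        immutable candidate = x + uniform(-0.5, 0.5);
        immutable delta = cost(candidate) - cost(x);
        // The extra ingredient: occasionally accept a *worse* value,
        // with probability shrinking as temp cools, so the search can
        // climb out of a local minimum.
        if (delta < 0 || uniform(0.0, 1.0) < exp(-delta / temp))
            x = candidate;
        if (cost(x) < cost(best))
            best = x;
    }
    return best;
}

void main()
{
    // Toy cost with its global minimum at 3; a real tuner would run a
    // benchmark here instead.
    writeln(anneal(x => (x - 3.0) * (x - 3.0), 0.0));
}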
