Michel Fortin wrote:
On 2009-01-10 00:10:11 -0500, Andrei Alexandrescu <seewebsiteforem...@erdani.org> said:

The problem is identifying whether this would be faster than recomputing the return value.

I used memoizers for exp and log (the most frequently called functions in some code I wrote), and it made the original version feel like it was driving in reverse.

That's only true because, in your specific context, those functions were often called with the same input. In a program that rarely reuses the same inputs, your memoizing functions will just waste time and memory.
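For concreteness, here is a minimal sketch of such a memoizer in D, caching results in an associative array (memoExp is just an illustrative name, not an existing library function):

    import std.math : exp;

    // Memoizing wrapper around exp: results are cached in an
    // associative array keyed on the argument.
    real memoExp(real x)
    {
        static real[real] cache;      // persists across calls
        if (auto p = x in cache)      // `in` yields a pointer, or null
            return *p;
        return cache[x] = exp(x);     // compute once, remember, return
    }

Every call pays for a hash lookup on a real, so this only wins when the same x shows up again and again.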

So to determine whether a function is worth memoizing, you have to answer yes to these two questions:

1.    Is looking up the cached value faster than recomputing it?
2.    Is the function called often with the same inputs?

While the compiler could take an educated guess at question 1 from the code of a function, question 2 can only be answered by knowing the usage pattern of each function.
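Question 2 can at least be answered empirically. Here is the same sketch, instrumented to count hits and misses over a test run (again, all names are illustrative):

    import std.math : exp;
    import std.stdio : writefln;

    real[real] cache;
    ulong hits, misses;

    real memoExp(real x)
    {
        if (auto p = x in cache)
        {
            ++hits;
            return *p;
        }
        ++misses;                     // one miss per distinct input
        return cache[x] = exp(x);
    }

    static ~this()                    // module destructor: runs at exit
    {
        writefln("exp: %s hits, %s distinct inputs", hits, misses);
    }

    void main()
    {
        foreach (i; 0 .. 1000)
            memoExp(i % 10);          // 10 distinct inputs, many repeats
    }

A hit count near zero means the memoizer is pure overhead for that workload, and the miss count is exactly the "distinct argument sets" figure a profiler would have to collect.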

... so let's tell it what the usage patterns are!

1. Compile with profiling.

$ dmd -profile pure_program.d

2. Feed the profiled executable a big gob of test data.  Om nom nom.

$ for I in {0..99}; do ./pure_program < test_data_$I.dat; done

3. Recompile, giving the compiler the trace log, which tells it which functions were called frequently and where the time was spent.

$ dmd -trace=trace.log pure_program.d

Voilà; now the compiler can make a well-informed guess at compile time. All it really lacks is information on how many distinct argument sets any given pure function gets, but I imagine that could be added.

I think feeding the program test data with profiling enabled and then recompiling would be a very acceptable trade-off.

  -- Daniel
