Hi.

The standard advice is not to worry about memory usage and execution speed until profiling shows you where the problem is, and I respect Knuth greatly as a thinker.

Still, one may learn from others' experience and cultivate good habits early. Saying that one should not optimize prematurely is not saying that one should not try to avoid patterns that tend to be really bad, and I would rather learn from others what these are than learn only the hard way.

For the time being I am still at an early stage of development and have not yet tested things against the larger data sets I anticipate eventually using. It's cheap to make slightly different design decisions now, but much more painful further down the line, particularly given my context.

As I understand it, foreach can allocate in cases where a simple C-style for loop using an array index would not. I would like to learn more about when this becomes particularly expensive, and perhaps I could write this up on the wiki if people think that is a good idea.
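
For concreteness, this is the comparison I have in mind (a minimal sketch; for plain slices I gather the first form is lowered to something much like the second, so the interesting cases are presumably ranges and opApply):

    import std.stdio : writeln;

    void main()
    {
        int[] data = [1, 2, 3, 4];

        // foreach over a slice
        foreach (x; data)
            writeln(x);

        // the equivalent C-style loop with an explicit index
        for (size_t i = 0; i < data.length; ++i)
            writeln(data[i]);
    }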

What exactly does it allocate, how often, and how large is the allocation relative to the size of the underlying data (structs/classes/ranges)? Are there any cache effects to consider? Happy to go to the source code if you can give me some pointers.
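
To make the question concrete, here is a sketch of the kind of case I am wondering about. My understanding is that a struct range can live on the stack, while a class-based range has to be constructed with `new` and so costs at least one heap allocation per range object, but I may be missing other allocations (closures for opApply delegates, say):

    import std.stdio : writeln;

    // A struct range: as far as I can tell this lives on the stack
    // when used locally, so foreach over it should not allocate.
    struct Counter
    {
        int front;
        int limit;
        bool empty() const { return front >= limit; }
        void popFront() { ++front; }
    }

    // A class-based range: the object itself must be created with
    // `new`, so iterating costs at least that one heap allocation.
    class CounterClass
    {
        private int cur;
        private int limit;
        this(int limit) { this.limit = limit; }
        @property int front() const { return cur; }
        bool empty() const { return cur >= limit; }
        void popFront() { ++cur; }
    }

    void main()
    {
        foreach (x; Counter(0, 3))       // prints 0 1 2; no heap allocation, I believe
            writeln(x);

        foreach (x; new CounterClass(3)) // prints 0 1 2; one allocation for the object
            writeln(x);
    }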

Thanks in advance for any thoughts.

Laeeth.
