On Tue, Jun 15, 2021 at 02:20:11PM +0000, Paul Backus via Digitalmars-d-learn wrote:
[...]
> It's a time-space tradeoff. As you say, caching requires additional
> space to store the cached element. On the other hand, *not* caching
> means that you spend unnecessary time computing the next element in
> cases where the range is only partially consumed. For example:
>
> ```d
> import std.range: generate, take;
> import std.algorithm: each;
> import std.stdio: writeln;
>
> generate!someExpensiveFunction.take(3).each!writeln;
> ```
>
> Naively, you'd expect that `someExpensiveFunction` would be called 3
> times--but it is actually called 4 times, because `generate` does its
> work in its constructor and `popFront` instead of in `front`.
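Indeed. To make the extra call visible, here's a self-contained version of
your example where `someExpensiveFunction` simply counts its invocations
(the counter is made up for illustration, but the behaviour follows from
what you describe about `generate`):

```d
import std.range : generate, take;
import std.algorithm : each;
import std.stdio : writeln;

int calls;  // illustrative counter, not part of the original example

int someExpensiveFunction()
{
    ++calls;
    return calls;
}

void main()
{
    generate!someExpensiveFunction.take(3).each!writeln;
    // Prints "calls: 4": one call in generate's constructor,
    // plus one per popFront, even though only 3 elements are consumed.
    writeln("calls: ", calls);
}
```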
One way to address this is to make the computation lazy: the element is
only computed on demand, the first time .front is accessed, and once
computed it's cached until the next popFront (a rough sketch is below).
But of course, this is probably overkill for many ranges.

So the long answer is: it depends. :-)


T

-- 
It is not the employer who pays the wages. Employers only handle the
money. It is the customer who pays the wages. -- Henry Ford
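Here's the rough sketch I had in mind. It's untested and purely for
illustration: the `LazyCached`/`lazyCached` names and the use of
`std.typecons.Nullable` as the cache slot are made up, and this is not how
`std.range.generate` is actually implemented.

```d
import std.typecons : Nullable;

/// Sketch: an infinite range that calls `fun` only when .front is
/// actually read, and caches the result until the next popFront.
struct LazyCached(T)
{
    private T delegate() fun;
    private Nullable!T cache;

    enum bool empty = false;   // infinite, like std.range.generate

    @property T front()
    {
        if (cache.isNull)
            cache = fun();     // compute lazily, on first access only
        return cache.get;
    }

    void popFront()
    {
        cache.nullify();       // next element is computed on demand
    }
}

auto lazyCached(T)(T delegate() fun)
{
    return LazyCached!T(fun);
}

void main()
{
    import std.range : take;
    import std.algorithm : each;
    import std.stdio : writeln;

    int calls;
    int delegate() expensive = () { ++calls; return calls; };

    lazyCached(expensive).take(3).each!writeln;
    writeln("calls: ", calls);   // 3 this time, not 4
}
```

The price is the extra Nullable field plus an isNull check on every .front
access, which is exactly the time-space tradeoff you described; whether
that's worth it depends on how expensive the element computation is.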