> The kernel cannot know a costly function will be frequently called with the same arguments and will always return the same value given the same arguments (i.e., does not depend on anything but its arguments). A cache at the application-level is not reimplementing the caches at system-level.

You see, we are back to the subtleties between grand design and tactical design choices. It really depends on the purpose for which you allocate RAM. If it is for *direct* data caching, then it is both "selfish" (as I explained in a previous post) and inefficient, since the kernel makes more effective use of free RAM. But if, on the other hand, it is *indirect* caching, employing an algorithm more sophisticated than the kernel's idea of data caching, then it may be worthwhile. It all depends on the grand design, on the intricacies of how that chunk of allocated memory is used.
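
To make that concrete, here is a rough Python sketch of the kind of *indirect*, application-level cache the quoted paragraph describes: memoizing a costly pure function. The kernel's page cache only holds file data; it cannot cache computed results like this. (The function and the cache size here are made up for illustration.)

    from functools import lru_cache

    @lru_cache(maxsize=4096)   # bounded: no more RAM than we really need
    def costly(n: int) -> int:
        # Stand-in for an expensive pure computation: the result
        # depends only on the argument, so caching it is safe.
        return sum(i * i for i in range(n))

    costly(10_000_000)   # computed once, slowly
    costly(10_000_000)   # returned instantly from the application-level cache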

Then again, all this started with the idea of "allocating as much as there is free RAM", which has nothing to do with design excellence. It is simply brute force at the expense of the rest of the system.

> An implementation strategy that minimizes the space requirements ("no more RAM than you really need") will usually be slower than alternatives that require more space. As with the one-million-line examples I gave to heyjoe. Or with the examples on https://en.wikipedia.org/wiki/Space%E2%80%93time_tradeoff

I have seen that wiki page, and I already know the trade-offs it mentions. As for "no more RAM than you really need", please see my explanation above.
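
For completeness, the classic trade-off that page describes fits in a few lines of Python (a toy example of mine, not one from the discussion): spend RAM on a precomputed table to avoid recomputation, or spend CPU to avoid the RAM.

    import math

    # Space for time: precompute a 360-entry table once, then O(1) lookups.
    SIN_TABLE = [math.sin(math.radians(d)) for d in range(360)]

    def sin_fast(degrees: int) -> float:
        return SIN_TABLE[degrees % 360]

    # Time for space: no table, recompute on every call.
    def sin_slow(degrees: int) -> float:
        return math.sin(math.radians(degrees % 360))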
