On Nov 24, 6:42 pm, Steven D'Aprano <[EMAIL PROTECTED] cybersource.com.au> wrote:

> This has nothing, absolutely NOTHING, to do with memoization. Memoization
> trades off memory for time, allowing slow functions to return results
> faster at the cost of using more memory. The OP wants to save memory, not
> use more of it.

Not to beat a dead horse, but memoization can significantly reduce
memory usage, given a large data set with redundant elements (as the
OP seems to suggest [e.g., calculating the deltas of trigrams in a
natural language]). For example, if 1/3 of the elements in the data
set are duplicates, the un-memoized version stores a separate copy of
each duplicate, whereas the memoized version stores each unique
element only once. Every instance of an element which is already in
the cache costs only a cache lookup (speed), rather than the creation
of a new object (memory). So the trade-off is actually speed for
memory rather than the other way around. Of course, it all depends on
the nature of the data; but for large, redundant data sets,
memoization is definitely a win when it comes to memory optimization.
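To make the point concrete, here is a minimal sketch (the function and
names are my own, not anything from the OP's code) of caching as a form
of interning: equal trigrams are mapped to a single shared object, so
duplicates cost only a dict lookup instead of a fresh copy:

```python
def make_interning_cache():
    """Return a function that maps equal elements to one shared object."""
    cache = {}
    def intern_element(element):
        # setdefault returns the previously stored object if an equal
        # one exists; only genuinely new elements are added to the cache.
        return cache.setdefault(element, element)
    return intern_element

intern = make_interning_cache()
a = intern(("the", "quick", "fox"))
b = intern(("the", "quick", "fox"))
assert a is b  # both names refer to the same stored object
```

With heavily redundant data, the cache holds one object per unique
element, while the naive approach would hold one object per occurrence.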

Regards,
Jordan
-- 
http://mail.python.org/mailman/listinfo/python-list
