On Fri, Dec 4, 2015 at 2:44 PM, Bill Winslow <buns...@gmail.com> wrote:
> This is a question I posed to reddit, with no real resolution:
> https://www.reddit.com/r/learnpython/comments/3v75g4/using_functoolslru_cache_only_on_some_arguments/
>
> The summary for people here is the following:
>
> Here's a pattern I'm using for my code:
>
>     def deterministic_recursive_calculation(input, partial_state=None):
>         condition = do_some_calculations(input)
>         if condition:
>             return deterministic_recursive_calculation(reduced_input, some_state)
>
> Basically, in calculating the results of the subproblem, the subproblem can
> be calculated quicker by including/sharing some partial results from the
> superproblem. (Calling the subproblem without the partial state still gives
> the same result, but takes substantially longer.)
>
> I want to memoize this function for obvious reasons, but I need the
> lru_cache to ignore the partial_state argument, since its value does not
> affect the output, only the computation expense.
>
> Is there any reasonable way to do this?
What form does the partial_state take? Would it be reasonable to design it
with __eq__ and __hash__ methods so that every partial state (or a wrapper
around it) compares equal?

--
https://mail.python.org/mailman/listinfo/python-list
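A minimal sketch of that suggestion, assuming the partial state is an arbitrary object. All names here (IgnoredState, calculation) are hypothetical, and a toy factorial stands in for the real recursive computation:

```python
from functools import lru_cache

class IgnoredState:
    """Wrapper whose instances all compare equal, so lru_cache
    keys on the other arguments only (hypothetical helper)."""
    def __init__(self, state=None):
        self.state = state

    def __eq__(self, other):
        return isinstance(other, IgnoredState)

    def __hash__(self):
        return 0  # constant hash: every wrapper produces the same cache key part

@lru_cache(maxsize=None)
def calculation(n, partial_state):
    # Toy stand-in for the real recursion: the partial state could
    # speed things up, but never changes the result.
    if n <= 1:
        return 1
    return n * calculation(n - 1, IgnoredState(partial_state.state))

calculation(5, IgnoredState())            # computed: 120
calculation(5, IgnoredState({'x': 1}))    # cache hit -- the state is ignored
```

Two caveats: lru_cache builds its key from the arguments actually passed, so giving partial_state a default value would make `calculation(5)` and `calculation(5, IgnoredState())` distinct cache entries; and positional vs. keyword calls are keyed differently, so the call style should be consistent.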