I'm trying to cache the construction and destruction of an expensive
object coming out of a generator function. I can force a new object to
be allocated only when the current one is still in use somewhere else
(based upon its reference count). Here is the pseudo-code of what I am
trying to accomplish.

def GetExpensiveObjects():
    obj = ExpensiveObject()
    baseline = sys.getrefcount(obj)
    while UpdateExpensiveObject(obj):
        yield obj
        if sys.getrefcount(obj) > baseline + 1:
            obj = ExpensiveObject()
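For what it's worth, here is a minimal runnable check of the refcount
idea (ExpensiveObject is just a toy stand-in). Note this relies on
CPython's reference counting, and sys.getrefcount reports one extra
count for its own argument, so I compare deltas rather than absolute
values:

```python
import sys

class ExpensiveObject:
    """Toy stand-in for the real expensive object."""
    pass

obj = ExpensiveObject()
baseline = sys.getrefcount(obj)  # local name + getrefcount's own argument

held = [obj]   # simulate a consumer keeping a reference (the list() case)
assert sys.getrefcount(obj) == baseline + 1  # extra reference detected

held.clear()   # consumer lets go of the object
assert sys.getrefcount(obj) == baseline      # safe to reuse obj again
```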

Some users may consume this generator by reading all objects into a
list, "objs = list(GetExpensiveObjects())". Others may iterate over
the generator and operate on each object completely independently:

for obj in GetExpensiveObjects():
    DoSomething(obj)

I would like to support both use cases. Unfortunately, the simple
approach of changing the following:
obj = ExpensiveObject()
while UpdateExpensiveObject(obj):
    yield obj

To the following:
obj = ExpensiveObject()
while UpdateExpensiveObject(obj):
    yield obj
    obj = ExpensiveObject()

is 30%-50% more expensive. So is the getrefcount approach acceptable?
I would prefer to avoid a proper weakref-based object cache, as it
would add a lot more complexity.

Thanks
Brian

--
http://mail.python.org/mailman/listinfo/python-list