Thinking about it, I do somehow remember the iterator idea; maybe this is something we should really look at. An extension keeping an extra cursor for each iterator could work, with two methods:

1) get_iterator(time_range, event_template, result_type, transaction_size), which returns an iterator object with next() and previous()
2) release_iterator(iterator_id)

This approach could be very useful for clients like Sezen and Unity, where we wouldn't need to execute big queries all at once, but could instead fetch results in chunks and upon request. Don't bash me for trying.
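To make the proposal concrete, here is a minimal sketch of what the extension could look like. The method names get_iterator() and release_iterator() and their parameters come from the proposal above; everything else (the EventIterator class, the chunking logic, and the plain list standing in for the real query engine) is my assumption, not existing Zeitgeist API:

```python
import itertools


class EventIterator:
    """Server-side cursor returning at most transaction_size events per call."""

    def __init__(self, events, transaction_size):
        self._events = events
        self._size = transaction_size
        self._pos = 0  # index of the next unread event

    def next(self):
        """Return the next chunk of events and advance the cursor."""
        chunk = self._events[self._pos:self._pos + self._size]
        self._pos += len(chunk)
        return chunk

    def previous(self):
        """Move the cursor back one chunk and return it."""
        self._pos = max(self._pos - self._size, 0)
        return self._events[self._pos:self._pos + self._size]


class IteratorExtension:
    """Hypothetical extension holding one cursor per handed-out iterator id."""

    def __init__(self, engine_events):
        # A plain list stands in for the real query engine here,
        # just to keep the sketch self-contained.
        self._events = engine_events
        self._iterators = {}
        self._next_id = itertools.count()

    def get_iterator(self, time_range, event_template, result_type,
                     transaction_size):
        # The real implementation would evaluate the query lazily in
        # SQLite; this sketch filters by timestamp only and ignores
        # event_template and result_type.
        matching = [e for e in self._events
                    if time_range[0] <= e["timestamp"] <= time_range[1]]
        iterator_id = next(self._next_id)
        self._iterators[iterator_id] = EventIterator(matching,
                                                     transaction_size)
        return iterator_id

    def release_iterator(self, iterator_id):
        # Dropping the cursor frees the server-side result set.
        self._iterators.pop(iterator_id, None)
```

The point is that the daemon only ever materializes transaction_size events per DBus reply, so a client like Sezen can page through ~11 thousand results without the daemon building one huge reply buffer.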
--
Large requests increase memory usage considerably
https://bugs.launchpad.net/bugs/624310

You received this bug notification because you are a member of Zeitgeist Framework Team, which is subscribed to Zeitgeist Framework.

Status in Zeitgeist Framework: Confirmed
Status in Zeitgeist Extensions: New

Bug description:
I'm seeing with standalone Sezen that, after running it, memory usage of the zeitgeist-daemon process goes up from ~13MB to ~40MB. This is understandable: when Sezen starts, it does one big query asking for everything grouped by most recent subjects, which in my case returns ~11 thousand events, so the extra 30MB can be explained by the memory allocated for the DBus reply. Still, my question is whether Zeitgeist should be at the mercy of applications, since nothing prevents them from spiking the memory usage of the core process. (I have already seen zeitgeist using 80-100MB of memory on my system a couple of times.) Perhaps there's a way to tell python dbus to free its buffers?

_______________________________________________
Mailing list: https://launchpad.net/~zeitgeist
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~zeitgeist
More help   : https://help.launchpad.net/ListHelp

