On Sun, 27 Apr 2008 20:39:25 -0400
Kyle Schaffrick <[EMAIL PROTECTED]> wrote:
>
> On Sun, 27 Apr 2008 20:23:24 -0400
> Michael Bayer <[EMAIL PROTECTED]> wrote:
> >
> > But yes, it's probably how that section should be done anyway, so
> > that ShardedQuery uses the iterative framework provided by
> > iterate_instances() (this method would need to be used instead of
> > instances()).
> 
> I actually thought the same thing, so as a first step I'm trying my
> hand at a patch to change the existing "just concatenate them"
> behavior to use iterators.  I'm running the test suite now :)

See attachment for the above; I'm interested in feedback/further
improvements!

I also wrote a higher-order "iter_merging" function (which will make
more sense in reference to the attached patch) that will merge any
number of iterators by an arbitrary ordering function.
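
For illustration, a simplified sketch of the idea (not the exact code,
and the names here are just placeholders): each input iterator is
assumed to already be sorted consistently with a comes_before(a, b)
predicate, and the merge repeatedly yields whichever head element
should come first.

    def iter_merging(iters, comes_before):
        """Merge several sorted iterators into one sorted stream.

        `comes_before(a, b)` should return True when `a` belongs ahead
        of `b`; each input iterator is assumed to already be ordered
        consistently with that predicate.
        """
        # Prime one "head" element per iterator, dropping empty ones.
        heads = []
        for it in iters:
            it = iter(it)
            try:
                heads.append([it.next(), it])
            except StopIteration:
                pass

        while heads:
            # Pick the head element that should be yielded next.
            best = 0
            for i in range(1, len(heads)):
                if comes_before(heads[i][0], heads[best][0]):
                    best = i
            value, it = heads[best]
            yield value
            try:
                heads[best][0] = it.next()
            except StopIteration:
                # That iterator is exhausted; stop considering it.
                del heads[best]

So merging two shards' rows by ascending id would look something like
iter_merging([rows_a, rows_b], lambda a, b: a.id < b.id).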

Since, as I mentioned, this is an interesting project for learning some
SA internals, I'd like my next step to be seeing if I can write
something that builds a callable/closure which, when passed into
iter_merging, will produce the least surprising ordering of results
w/r/t what was requested in the original ShardedQuery.
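
Roughly what I have in mind is something like the following (completely
untested; the names are made up, and the hard part, recovering the
(attribute, descending) pairs from the original query's order_by
criteria, is exactly the SA-internals bit I still need to figure out):

    def ordering_from_spec(order_spec):
        """Build a `comes_before` callable for iter_merging.

        `order_spec` is a list of (attribute_name, descending) pairs
        that would have to be recovered from the ShardedQuery's
        order_by criteria (not shown here).
        """
        def comes_before(a, b):
            for attr, descending in order_spec:
                x, y = getattr(a, attr), getattr(b, attr)
                if x == y:
                    # Tie on this attribute; compare the next one.
                    continue
                if descending:
                    return x > y
                return x < y
            # Equal on every attribute; keep the existing order.
            return False
        return comes_before

With that, the else branch in the attached patch could eventually return
something like util.iter_merging(partials, ordering_from_spec(spec))
instead of util.iter_concatenating(partials).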


Use a generator to concatenate per-shard results in ShardedQuery.

Adds a function `iter_concatenating` that accepts a list of iterators and
yields every object produced by each of them, in order. Alters ShardedQuery to
use this function instead of loading each shard's potentially large result set
into RAM before concatenating them.

diff --git a/lib/sqlalchemy/orm/shard.py b/lib/sqlalchemy/orm/shard.py
--- a/lib/sqlalchemy/orm/shard.py
+++ b/lib/sqlalchemy/orm/shard.py
@@ -93,15 +93,13 @@
             finally:
                 result.close()
         else:
-            partial = []
+            partials = []
             for shard_id in self.query_chooser(self):
                 result = self.session.connection(mapper=self.mapper, shard_id=shard_id).execute(context.statement, **self._params)
-                try:
-                    partial = partial + list(self.instances(result, querycontext=context))
-                finally:
-                    result.close()
+                partials.append(self.iterate_instances(result, querycontext=context))
+
             # if some kind of in memory 'sorting' were done, this is where it would happen
-            return iter(partial)
+            return util.iter_concatenating(partials)
 
     def get(self, ident, **kwargs):
         if self._shard_id is not None:
diff --git a/lib/sqlalchemy/util.py b/lib/sqlalchemy/util.py
--- a/lib/sqlalchemy/util.py
+++ b/lib/sqlalchemy/util.py
@@ -229,6 +229,21 @@
                 yield y
         else:
             yield elem
+
+def iter_concatenating(iters):
+    """Concatenate iterators using a generator.
+
+    Yields every item produced by each iterator in the given list, in
+    the order the iterators were given.
+    """
+    for iterator in iters:
+        try:
+            while True:
+                yield iterator.next()
+
+        except StopIteration:
+            # This iterator is exhausted; move on to the next one.
+            pass
 
 class ArgSingleton(type):
     instances = weakref.WeakValueDictionary()
