I continue to hope that you're correct.  I'm already somewhat stumped
though.

   I think I see, at the simplest level, how to get the aggregate
bucket data into the b-tree: in AggInsert, change the data passed to
BtreeInsert to be the aggregate bucket itself rather than a pointer to
it, and adjust the size accordingly; then, correspondingly, in the
handling of OP_AggFocus, change the call to BtreeData to read back the
entire aggregate bucket instead of just the pointer.  Conceptually
pretty straightforward.  (I think.  Please correct me if I'm wrong.)
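
   To make sure we're talking about the same thing, here's a toy,
standalone sketch of the change I have in mind.  The AggElem type and
the btreeInsertStub() routine are simplified stand-ins I made up for
illustration; the real structures and the real BtreeInsert signature
in the source differ:

#include <stdio.h>

/* Simplified stand-in for an aggregate bucket; the real AggElem in
** vdbe.c has a different layout. */
typedef struct AggElem {
  int nKey;              /* size of the GROUP BY key, in bytes */
  char zKey[32];         /* the GROUP BY key itself */
  double accum;          /* one accumulator column, e.g. a running SUM */
} AggElem;

/* Stub standing in for BtreeInsert(); it just reports what would be
** written into the b-tree. */
static int btreeInsertStub(const void *pKey, int nKey,
                           const void *pData, int nData){
  printf("insert: key \"%.*s\" (%d bytes), data %d bytes\n",
         nKey, (const char*)pKey, nKey, nData);
  return 0;
}

int main(void){
  AggElem elem = { 5, "dept1", 42.0 };
  AggElem *pElem = &elem;

  /* Today: only the pointer is stored, so every bucket must stay
  ** resident in memory for the life of the query. */
  btreeInsertStub(elem.zKey, elem.nKey, &pElem, (int)sizeof pElem);

  /* Proposed: the bucket contents themselves are stored, so the
  ** b-tree can page them to disk; OP_AggFocus would then read the
  ** whole bucket back via BtreeData. */
  btreeInsertStub(elem.zKey, elem.nKey, &elem, (int)sizeof elem);
  return 0;
}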

   Here's where I get stuck: calling BtreeInsert from AggInsert no
longer makes sense, because at that point the data in the aggregate
bucket hasn't yet been accumulated.  So I think, conceptually, I need
another opcode, executed before OP_Next, that writes the current
aggregate bucket's data out to the b-tree (see the sketch below).
That would seem to require changes to the code generator, where I
haven't yet ventured, so I'm hoping you can confirm that I'm on the
right track before I head off down the garden path on my own. :)
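
   Here's a similarly toy sketch of what I imagine the new opcode's
handler doing.  OP_AggFlush is just a name I invented, and again the
types are simplified stand-ins rather than the real vdbe.c structures:

#include <stdio.h>

/* Same simplified stand-ins as in the earlier sketch. */
typedef struct AggElem {
  int nKey;
  char zKey[32];
  double accum;
} AggElem;

typedef struct Agg {
  AggElem current;       /* bucket being accumulated for the current row */
  int hasCurrent;        /* true if `current` holds unflushed data */
} Agg;

static int btreeInsertStub(const void *pKey, int nKey,
                           const void *pData, int nData){
  printf("flush: key \"%.*s\", %d data bytes\n",
         nKey, (const char*)pKey, nData);
  return 0;
}

/* What the handler for a hypothetical OP_AggFlush would do: write the
** fully accumulated current bucket into the b-tree just before
** OP_Next advances to the next input row. */
static int aggFlush(Agg *pAgg){
  if( !pAgg->hasCurrent ) return 0;   /* nothing accumulated yet */
  pAgg->hasCurrent = 0;
  return btreeInsertStub(pAgg->current.zKey, pAgg->current.nKey,
                         &pAgg->current, (int)sizeof pAgg->current);
}

int main(void){
  Agg agg = { { 5, "dept1", 42.0 }, 1 };
  /* The code generator would emit something roughly like:
  **   OP_AggFocus ... accumulator ops ... OP_AggFlush, OP_Next */
  aggFlush(&agg);
  return 0;
}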

   Just to make sure I'm not missing anything obvious: OP_Next seems
to be multi-purpose, so I don't want to just stick something in there.
If there were a way to tell from within OP_Next that an aggregate
operation is underway, so that I could write the bucket data to the
b-tree there, I'd be happy with that; but I'm not sure there is one,
and the inelegance of that approach irks me anyway.

   Thanks
   -Tom

> -----Original Message-----
> From: D. Richard Hipp [mailto:[EMAIL PROTECTED] 
> Sent: Thursday, March 24, 2005 4:26 PM
> To: sqlite-users@sqlite.org
> Subject: RE: [sqlite] Memory usage for queries containing a 
> GROUP BY clause
> 
> On Thu, 2005-03-24 at 16:08 -0500, Thomas Briggs wrote:
> >    Am I wrong in interpreting your comment to mean that this
> > should be feasible within the current architecture, and more
> > importantly, feasible for someone like me who looked at the
> > SQLite source code for the first time yesterday? :)
> > 
> 
> I know of no reason why you should not be able to tackle this
> problem yourself.
> 
> 
