I continue to hope that you're correct. I'm already somewhat stumped,
though.

I think I see, at the simplest level, how to get the aggregate bucket
data into the b-tree: in AggInsert, change the data passed to
BtreeInsert to be the aggregate bucket itself, not the pointer, and
change the …
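The change being described — copying the aggregate bucket's contents into the btree payload instead of storing a pointer to it — can be sketched with toy code. This is only an illustration of the pointer-vs-value tradeoff; `AggBucket`, `ToyCell`, and `toy_insert` are hypothetical names, not SQLite's actual AggInsert/BtreeInsert code:

```c
/* Toy sketch (NOT SQLite's real code) of storing an aggregate
 * bucket by value in a btree payload rather than by pointer. */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef struct AggBucket {
    double sum;    /* running SUM() for one group */
    int    count;  /* running COUNT() for one group */
} AggBucket;

/* Stand-in for a btree cell: it owns a private copy of its payload. */
typedef struct ToyCell {
    void  *payload;
    size_t nPayload;
} ToyCell;

/* Copy nData bytes of payload into the cell, the way an insert copies
 * data onto a page.  Storing the bucket itself means passing
 * (bucket, sizeof(*bucket)); storing only a pointer would mean
 * passing (&bucket, sizeof(bucket)). */
static void toy_insert(ToyCell *cell, const void *data, size_t nData) {
    cell->payload = malloc(nData);
    memcpy(cell->payload, data, nData);
    cell->nPayload = nData;
}

/* With by-value storage, the cell's copy stays valid even after the
 * original in-memory bucket is freed. */
static double stored_sum(const ToyCell *cell) {
    AggBucket b;
    memcpy(&b, cell->payload, sizeof(b));
    return b.sum;
}
```

The point of the by-value variant is that the in-memory bucket no longer has to outlive the btree entry, which is what would let the aggregate's working set be paged rather than pinned in memory.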
On Thu, 2005-03-24 at 16:08 -0500, Thomas Briggs wrote:
> Am I wrong in interpreting your comment to mean that this should be
> feasible within the current architecture, and more importantly, feasible
> for someone like me who looked at the SQLite source code for the first
> time yesterday? :)
>
> > You are welcomed to experiment with changes that will store the
> > entire result set row in the btree rather than just a pointer.
> > If you can produce some performance improvements, we'll likely
> > check in your changes.
Well, I'm using the command-line tool that comes with SQLite, and
there is no ORDER BY clause in my query, so the good news and the
bad news are both that it certainly seems like something that SQLite is
doing, uhh... sub-optimally, shall we say. :)

I'm working my way through the VDBE, …
On Thu, 2005-03-24 at 13:59 -0500, Thomas Briggs wrote:
> I feel like I'm missing something, but that didn't seem to help. I
> can see in the code why it should be behaving differently (many thanks
> for the hint on where to look, BTW), but the memory usage is unchanged.
>
> I modified sqliteInt.h to define SQLITE_OMIT_MEMORYDB, then verified
> that it is …
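For reference, the experiment being described is a one-line compile-time edit. As I understand it, SQLITE_OMIT_MEMORYDB is one of SQLite's omit flags and disables the in-memory (":memory:") database backend; the exact placement inside sqliteInt.h below is illustrative, not the author's actual diff:

```c
/* In sqliteInt.h (placement is illustrative): defining this omit
** flag at build time disables SQLite's in-memory database backend,
** so temporary storage falls back to disk-based files. */
#ifndef SQLITE_OMIT_MEMORYDB
# define SQLITE_OMIT_MEMORYDB 1
#endif
```

The same effect is normally achieved by passing the flag on the compiler command line (e.g. `-DSQLITE_OMIT_MEMORYDB`) rather than editing the header.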
On Thu, 2005-03-24 at 10:57 -0500, Thomas Briggs wrote:
> After posting my question, I found the discussion of how aggregate
> operations are performed in the VDBE Tutorial; that implies that memory
> usage will correspond with the number of unique keys encountered by the
> query, but I appreciate having it stated explicitly.
>
> How difficult would it be, …
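The point that memory usage tracks the number of unique keys, not the number of rows, can be seen in a toy hash aggregator. This is hypothetical code, not SQLite's implementation: one bucket is allocated per distinct GROUP BY key, however many rows feed into it.

```c
/* Toy hash aggregator (NOT SQLite's code) illustrating why an
 * in-memory aggregate's footprint is proportional to the number of
 * distinct keys: allocation happens only on a key's first row. */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define NBUCKET 256

typedef struct Bucket {
    int key;              /* the GROUP BY key */
    int count;            /* rows seen for this key */
    struct Bucket *next;  /* hash-chain link */
} Bucket;

typedef struct Agg {
    Bucket *hash[NBUCKET];
    int nBucket;          /* distinct keys seen so far */
} Agg;

/* Find the bucket for key, creating it on first sight.  Memory is
 * allocated here only when a new distinct key appears. */
static Bucket *agg_find(Agg *p, int key) {
    unsigned h = (unsigned)key % NBUCKET;
    for (Bucket *b = p->hash[h]; b; b = b->next) {
        if (b->key == key) return b;
    }
    Bucket *b = calloc(1, sizeof(*b));
    b->key = key;
    b->next = p->hash[h];
    p->hash[h] = b;
    p->nBucket++;
    return b;
}

/* Process one input row. */
static void agg_step(Agg *p, int key) {
    agg_find(p, key)->count++;
}
```

Feeding this a million rows with ten distinct keys allocates ten buckets; feeding it a million rows with a million distinct keys allocates a million, which matches the behavior described in the thread.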
On Thu, 2005-03-24 at 10:09 -0500, Thomas Briggs wrote:
> I have a 1GB database containing a single table. Simple queries
> against this table (SELECT COUNT(*), etc.) run without using more than a
> few MBs of memory; the amount used seems to correspond directly with the
> size of the page …