On Sat, Aug 20, 2005 at 06:36:19PM -0400, D. Richard Hipp wrote:
> On Sun, 2005-08-21 at 03:21 +0530, Ligesh wrote:
> >  I am running a very simple sql query with only one instance. The
> > query is something like:
> > 
> >  select * from table where parent_name = 'parent' order by name limit
> > 10
> > 
> >  The query takes around .3 seconds when the total number of matches
> > is around 7,000, but jumps to 11 seconds at 13,000 matches. There is
> > some heavy swap activity at that point, and the whole machine's
> > responsiveness goes down once this query is run.
> > 
> >  The table is indexed on parent_name, so I think it shouldn't even
> > have to load the whole table into memory. The row size isn't huge
> > either, at only around 1KB per row.
> > 
> 
> In order to implement the ORDER BY clause, SQLite reads the
> entire result set into memory and sorts it there.  When your
> result set gets very large (13000 rows) and each row uses in
> excess of 1KB of memory, this is apparently causing your
> machine to thrash.
> -- 
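
 If I read that right, an index that covers both the filter column and
the sort column should let SQLite walk the rows already in 'name'
order and stop at the LIMIT, instead of sorting the whole result set
in memory. A rough sketch, with made-up table and column names:

  CREATE INDEX idx_parent_name ON mytable(parent_name, name);

  -- with such an index the same query can be answered in index
  -- order, stopping after ten rows:
  SELECT * FROM mytable
   WHERE parent_name = 'parent'
   ORDER BY name
   LIMIT 10;

 I haven't tried this on my data yet, so the above is only a guess.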

 As an aside, I had thought that row size would not matter much for
query speed, especially for columns that are not used in the 'where'
clause or in the sort. Is that true for a standard SQL database? Or
should I start moving the serialized columns to another table?
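
 Concretely, the split I am considering looks roughly like this; the
names are only placeholders, and 'data' stands for the big serialized
value:

  -- main table keeps only the small, indexed columns
  CREATE TABLE items (
      id          INTEGER PRIMARY KEY,
      parent_name TEXT,
      name        TEXT
  );
  CREATE INDEX idx_items_parent ON items(parent_name, name);

  -- the big serialized values live in a side table, keyed by the
  -- same id as the main table
  CREATE TABLE item_data (
      id   INTEGER PRIMARY KEY,
      data BLOB
  );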

 
 The idea would be to store the serialized values in a separate table,
along the lines of the sketch above. When the main rows are loaded,
the serialized values would be filled in transparently at the database
driver level, so the top-level logic stays untouched and only my
database layer needs to get a little smarter.
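
 At the driver level that would amount to two queries per page of
results, something like this (again, the names are only illustrative):

  -- step 1: the existing query runs against the small table only
  SELECT id, parent_name, name
    FROM items
   WHERE parent_name = 'parent'
   ORDER BY name
   LIMIT 10;

  -- step 2: the driver fetches the serialized values for just the
  -- ten ids returned by step 1, binding them as parameters
  SELECT id, data
    FROM item_data
   WHERE id IN (?, ?, ?, ?, ?, ?, ?, ?, ?, ?);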

 Thanks.

--
:: Ligesh :: http://ligesh.com 

