Hello, Jérôme
Nice to hear you've finally joined us in this really interesting discussion :)
>
> To Max Vlasov:
>
> > in sorted order to the sqlite base took another 5 minutes, so about 10
> > minutes in total. The first 5 minutes were possible since we exchange
> > only offsets, not data,
> > and the other 5 minutes
> I'm really surprised you're not seeing something from an increased
> cache size. It has always made a very noticeable difference in my own
> manipulations, but I think the largest table I've ever worked with
> "only" had 6M rows.
>
Jay, increasing the cache size really helps, but only up to some
Hello!
See tests and some interpretation:
http://geomapx.blogspot.com/2009/11/degradation-of-indexing-speed.html
Best regards, Alexey Pechnikov.
http://pechnikov.tel/
___
sqlite-users mailing list
sqlite-users@sqlite.org
Jay A. Kreibich wrote:
You said you're creating two databases. Are you doing those one at a
time or ATTACHing the other databases?
Doing them one at a time, indeed.
On Mon, Feb 15, 2010 at 08:09:26PM +0100, Jérôme Magnin scratched on the wall:
> > The database is 2GB
> > but the least amount of memory I have is 4GB. I also use a 32kb page
> > size,
> > larger cache etc.
> I have of course tried to increase cache size and database page size (up
> to 32K)
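For anyone trying to reproduce these settings: both are plain pragmas, and `cache_size` is counted in pages, not bytes (a negative value means KiB instead). A minimal sketch with Python's built-in sqlite3 module; the in-memory database and the exact values are illustrative, not taken from the thread:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # illustrative; use a file path in practice

# page_size must be set before any content is written to the database file
conn.execute("PRAGMA page_size = 32768")     # 32 KB pages, as in the post
conn.execute("PRAGMA cache_size = 1000000")  # counted in pages, not bytes

page_size = conn.execute("PRAGMA page_size").fetchone()[0]
cache_size = conn.execute("PRAGMA cache_size").fetchone()[0]
print(page_size, cache_size)
```

With these two values the page cache may grow toward `page_size * cache_size` bytes, which is why the numbers in this thread get large quickly.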
Sorry folks, it took me time (and a brand new brain) to figure out how I
could post a reply that would appear on the mailing list (I'm new to
that kind of sport).
To Roger Binns:
> As another data point, my data set is 10M records and the 6 indices I need
> are created in under two minutes.
> His original question was about the todo list found at the wiki.
> Not sure if any of the core developers will answer, but I would
> be interested as well...
I too have been long interested in when this might be addressed, since we see
similar performance drop-offs with large numbers of rows.
On 14.02.2010 18:53, Max Vlasov wrote:
> This is appx. a 500MB cache, why not try with a 2,000,000 cache size? :-)
Hmm, I managed to increase it only to a 1,000,000 (x1024) size; larger values
eventually bring an "Out of memory" error, and this value (1G) allows up to
6,000,000 fast records for a 100-byte field per record index. Still good,
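For clarity, the figures above work out as follows: `cache_size` counts pages, so approximate cache memory is `cache_size × page_size`. A quick check, assuming the 1024-byte page size implied by the "(x1024)" remark:

```python
# cache_size is measured in pages; approximate RAM used = pages * page_size
pages = 1_000_000
page_size = 1024                     # bytes; assumed from the "(x1024)" remark
cache_bytes = pages * page_size
print(cache_bytes // 2**20, "MiB")   # about 976 MiB, i.e. roughly 1 GB
```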
> Marcus,
>
> although increasing cache size is a good method, it may sometimes give
> unpredictable results (in terms of performance).
hm... why?
Marcus,
although increasing the cache size is a good method, it may sometimes give
unpredictable results (in terms of performance).
I looked at the vdbe code (EXPLAIN CREATE INDEX ...) of the index creation,
and it seems there is no special sorting algorithm (CMIIW please).
Excluding all "make
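Max's inspection can be repeated without actually building anything: EXPLAIN returns the VDBE program for a statement instead of executing it. A small sketch with Python's sqlite3 (table and index names invented); note that SQLite versions released after this 2010 thread build indexes through a dedicated external-merge sorter, so on a current build you will likely see sorter opcodes in this listing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, txt TEXT)")

# EXPLAIN yields the VDBE program for the statement instead of executing it;
# each row is (addr, opcode, p1..p5, comment)
rows = conn.execute("EXPLAIN CREATE INDEX idx_txt ON t(txt)").fetchall()
for addr, opcode, *_ in rows:
    print(addr, opcode)
```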
Just for my curiosity:
Have you tried to increase the cache as already suggested ?
I ran into a similar problem while playing with an artificial
test database with appx. 10 million records and creating an index.
Without drastically increasing the cache size, SQLite appears
not to be able to create an
Jerome,
It's an interesting challenge, thanks for the post.
I tried to research more and did some tests.
My test database contains a table with 10,000,000 records of text 100
chars in length:
CREATE TABLE [TestTable] (
[Id] INTEGER PRIMARY KEY AUTOINCREMENT,
[Text] TEXT
)
I suppose your
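The schema above can be populated at reduced scale like this (a sketch: 10,000 rows of random 100-character text instead of the 10,000,000 used in Max's test, so it runs quickly):

```python
import random
import sqlite3
import string

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE TestTable (
    [Id] INTEGER PRIMARY KEY AUTOINCREMENT,
    [Text] TEXT
)""")

rng = random.Random(42)  # fixed seed so the data set is reproducible
rows = (("".join(rng.choices(string.ascii_lowercase, k=100)),)
        for _ in range(10_000))
with conn:  # one transaction: per-row commits would dominate the run time
    conn.executemany("INSERT INTO TestTable([Text]) VALUES (?)", rows)

count, = conn.execute("SELECT COUNT(*) FROM TestTable").fetchone()
print(count)
```

The step being timed in the thread would then be the index build itself, e.g. `CREATE INDEX idx_text ON TestTable([Text])` (index name invented).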
On Sat, Feb 13, 2010 at 11:21:25AM -0800, Roger Binns scratched on the wall:
> Jérôme Magnin wrote:
> > SQLite provides decent indexing times for such tables with up to 1M
> > rows,
>
> As another data point, my data set is 10M records and
Jérôme Magnin wrote:
> SQLite provides decent indexing times for such tables with up to 1M
> rows,
As another data point, my data set is 10M records and the 6 indices I need
are created in under two minutes. (I also analyze.) The database is 2GB
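Roger's procedure (bulk load, build the indices afterwards, then ANALYZE) can be sketched at toy scale with Python's sqlite3; all table, column, and index names here are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
with conn:  # single transaction for the bulk load
    conn.executemany(
        "INSERT INTO records(name, city) VALUES (?, ?)",
        [(f"name{i}", f"city{i % 100}") for i in range(1_000)],
    )

# Indices are built after the load; ANALYZE then stores per-index
# statistics in sqlite_stat1 for the query planner to use.
conn.execute("CREATE INDEX idx_name ON records(name)")
conn.execute("CREATE INDEX idx_city ON records(city)")
conn.execute("ANALYZE")

stat_rows, = conn.execute("SELECT COUNT(*) FROM sqlite_stat1").fetchone()
print(stat_rows)
```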
Hi,
This post is a question directed to D. Richard Hipp :
I have been using SQLite for 3 years in a record linkage software
package I have developed. My organization recently had to use the
package to perform daily linkage of large administrative governmental
registers (up to 20 million