-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Drescher
Sent: Tuesday, July 13, 2010 12:37 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Sqlite Insert Speed Optimization
Thanks for all the feedback! It helped a lot.
1. I'm going to try and see what happens if I leave the END TRANSACTION until 5 inserts have been done.
2. I'm going to increase cache_size from 8192 to 16384.
Extra info:
1. This program saved data to a Clarion file before, and in sqlite it's about
This is what I meant also when I said 500 was too small.
John
On 14 July 2010 11:56, Werner Smit wrote:
3. I'm saving to a network drive.
Is this a one-off data import into a new, clean sqlite db? If so, have you
considered writing to a db file on a local drive and then copying
the whole file to the network drive afterwards?
Paul.
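For what it's worth, Paul's suggestion needs nothing fancier than closing the
database and copying the file. If the program has to do the copy itself, newer
SQLite (3.6.11 and later, so not Werner's 3.5.6) has an online backup API; a
sketch, with both paths made up for illustration:

  #include "sqlite3.h"

  /* Build the db locally first, then push a copy to the share.
     Both paths are hypothetical. */
  static int copy_to_share(void) {
      sqlite3 *src = NULL, *dst = NULL;
      sqlite3_open("C:\\temp\\import.db", &src);
      sqlite3_open("\\\\server\\share\\import.db", &dst);

      sqlite3_backup *b = sqlite3_backup_init(dst, "main", src, "main");
      if (b) {
          sqlite3_backup_step(b, -1);   /* -1 = copy all pages in one pass */
          sqlite3_backup_finish(b);
      }
      int rc = sqlite3_errcode(dst);
      sqlite3_close(src);
      sqlite3_close(dst);
      return rc;
  }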
On 14 Jul 2010, at 11:55am, Werner Smit wrote:
PS. When I started with sqlite it took 500 minutes to save the 1 million
records.
I've got it down to just under 200 minutes with current settings.
Clarion does it in between 100 and 200 minutes.

Do you have any indexes defined? It can be faster to create the indexes
after the bulk insert than to have SQLite update them on every row.
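A sketch of that trick; the table and index names here are hypothetical:

  /* Drop secondary indexes before a bulk load and rebuild them once at
     the end, instead of paying for an index update on every INSERT. */
  static void load_then_index(sqlite3 *db) {
      sqlite3_exec(db, "DROP INDEX IF EXISTS idx_records_key;", NULL, NULL, NULL);
      /* ... run the bulk insert here ... */
      sqlite3_exec(db, "CREATE INDEX idx_records_key ON records(key);", NULL, NULL, NULL);
  }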
Can you answer #3 though?
Does anybody know how to make the journal file go to a different location than
the database? Apparently it's not treated as a temporary file. Perhaps it
should be??
Michael D. Black
Senior Scientist
Northrop Grumman Mission Systems
According to my math your final database size should be on the order of 100Meg?
That means at 200 minutes and 1,000,000 records:
83 inserts per second
8333 bytes per second
Both of these values are terrible.
#1 What kind of network connection do you have? 100BaseT?
#2 What kind of server are you writing to?
Black, Michael (IS) wrote:
Does anybody know how to make the journal file go to a different
location than the database? Apparently it's not treated as a temporary
file. Perhaps it should be??
Seems like you'd have to communicate the journal location to other
processes, meaning you'd need some way for every other connection to find it.
If you don't know why it's slow you'll be shooting in the dark. And doing
compression on a local network link isn't likely to buy you much. Might even
hurt.
In other words, is it latency or bandwidth? Given 8K/sec I'm guessing it's
latency, unless you're running a 64KBit line. Are you THAT slow?
Werner Smit wrote:
After taking out count(*), adding a few pragmas, and saving 6000
records rather than 500 at a time, I've got it down to 34 minutes.
If I build it on a local drive it takes 28 minutes (with chunks of 500).

Why not do an apples-to-apples test and commit the same number of
records per transaction in both cases?
On Wed, Jul 14, 2010 at 12:31 PM, Black, Michael (IS)
michael.bla...@ngc.com wrote:
If you could set the journal location BEFORE you open the database that
wouldn't be such a bad thing. Giving us the ability to do this would allow
for flexibility when needed, with appropriate warnings.
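As far as I know there's no supported way to point the rollback journal
somewhere else; it has to sit next to the database so that any process can
find it for crash recovery. The closest thing, in versions newer than
Werner's 3.5.6, is PRAGMA journal_mode, e.g. keeping the journal in RAM,
with the obvious caveat:

  /* journal_mode=MEMORY keeps the rollback journal out of the filesystem
     entirely (so off the network share too).  If the application crashes
     mid-transaction the database is likely left corrupted. */
  sqlite3_exec(db, "PRAGMA journal_mode = MEMORY;", NULL, NULL, NULL);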
On Wed, Jul 14, 2010 at 1:34 PM, Black, Michael (IS) michael.bla...@ngc.com
wrote:
Was that a facetious remark???
Rather than "here's a function/pragma that allows you to put the journal
file where you want -- but BE CAREFUL BECAUSE..."
Writing your own VFS is not for the casual user...
On 14 Jul 2010, at 5:02pm, Black, Michael (IS) wrote:
Does anybody know how to make the journal file go to a different location
than the database? Apparently it's not treated as a temporary file.
Perhaps it should be??

It's essential not to treat the journal file as a temporary file: it is what
SQLite uses to roll the database back after a crash or power failure.
Hi there.
I've been playing around with sqlite. Very impressed so far.
Using 3.5.6 on Windows, developing with Clarion.
My question(s):
If I want to use INSERT OR REPLACE to populate my database of around
1 million records, and I want to do it as fast as possible, what are all
the tricks I can use?
I also wrap my statements (about 500 inserts at a time) in a begin/end
transaction.
After these 500 I take a few seconds to read more data, so sqlite should
have time to do any housekeeping it might need.
Wrap more into a transaction. 500 is too small of a percentage of a million.
John
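A sketch of the batching pattern under discussion, using the SQLite C API;
the table, columns, and payload are made up for illustration:

  #include "sqlite3.h"

  /* Insert `count` rows, committing every `batch` rows.  One prepared
     statement is reused for every row. */
  static int bulk_insert(sqlite3 *db, int count, int batch) {
      sqlite3_stmt *stmt = NULL;
      int rc = sqlite3_prepare_v2(db,
          "INSERT OR REPLACE INTO records(id, payload) VALUES(?, ?)",
          -1, &stmt, NULL);
      if (rc != SQLITE_OK) return rc;

      sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
      for (int i = 0; i < count; i++) {
          sqlite3_bind_int(stmt, 1, i);
          sqlite3_bind_text(stmt, 2, "payload goes here", -1, SQLITE_STATIC);
          sqlite3_step(stmt);
          sqlite3_reset(stmt);
          if ((i + 1) % batch == 0) {       /* commit every `batch` rows */
              sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
              sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
          }
      }
      sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
      sqlite3_finalize(stmt);
      return SQLITE_OK;
  }

With batch set to the full row count, the whole load becomes a single
transaction, which is what John is suggesting.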
After these 500 I take a few seconds to read more data, so sqlite should
have time to do any housekeeping it might need.
SQLite is not a Database Server. It has no background threads. So it
can't do any housekeeping until you call some sqlite3_* function.
Pavel
I was wondering if that's really so. Wouldn't the marginal speed improvement
be quite small? Is the percentage of the final rowcount really a criterion?

Each transaction costs at least 1 disk seek. With thousands of transactions
the result would be very slow.
John
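To put rough numbers on John's point (assuming a made-up but typical
~10 ms per seek on a rotational disk):

  1,000,000 rows / 500 rows per commit = 2,000 commits
  2,000 commits x ~10 ms per commit   ~= 20 seconds of seek time alone
  1,000,000 rows in 1 transaction      = 1 commit, ~10 ms

And a real commit is several writes plus a cache flush, each of which is far
slower than 10 ms over a network share, so in practice the gap is much bigger.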
On 13 Jul 2010, at 5:33pm, Werner Smit wrote:
I currently use the following pragmas (for speed):
temp_store = 2
page_size=4096
cache_size=8192
synchronous=off
Any others I could try?
Don't get too involved in the PRAGMAs until you have a good reason to. The
default values are pretty good.
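For reference, a sketch of applying exactly the settings Werner listed from
the C API. Note that page_size only takes effect on a database that hasn't
been created yet, and synchronous=off trades crash safety for speed:

  #include <stdio.h>
  #include "sqlite3.h"

  /* Apply the pragmas quoted above.  cache_size is in pages, not bytes. */
  static void apply_tuning(sqlite3 *db) {
      char *err = NULL;
      const char *sql =
          "PRAGMA temp_store = 2;"     /* 2 = keep temp structures in memory */
          "PRAGMA page_size = 4096;"   /* only effective before the db has content */
          "PRAGMA cache_size = 8192;"  /* 8192 pages * 4 KB = 32 MB of cache */
          "PRAGMA synchronous = OFF;"; /* no sync: fast, unsafe on power loss */
      if (sqlite3_exec(db, sql, NULL, NULL, &err) != SQLITE_OK) {
          fprintf(stderr, "pragma failed: %s\n", err);
          sqlite3_free(err);
      }
  }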
Griggs, Donald wrote:
Is the percentage of the final rowcount really a criterion?
The answer to that, according to my brief exploration, is somewhere
between "yes" and "very much yes", depending on various factors.
--
Eric A. Smith
The number of UNIX installations has grown to 10, with more expected.
Werner Smit wrote:
My question(s):
If I want to use INSERT OR REPLACE to populate my database of around
1 million records, and I want to do it as fast as possible, what are all
the tricks I can use?

Obey the first rule of optimization: don't do it unless you're sure you
need to.
Griggs, Donald wrote:
Is the percentage of the final rowcount really a criterion?

The answer to that, according to my brief exploration, is somewhere between
"yes" and "very much yes", depending on various factors.

Thanks, Eric. I guess I was wondering if the fastest records-per-transaction
value would depend on the page cache and be more or less independent of the
total records to be imported.
Griggs, Donald wrote:
I guess I was wondering if the fastest records-per-transaction value
would depend on the page cache and be more or less independent of the
total records to be imported.
I think the page cache is one of a great many variables.
So, the best records-per-transaction value for one setup won't necessarily
carry over to another.
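Given how many variables there are, the only reliable answer is to measure on
the actual setup. A throwaway harness along these lines, reusing the
hypothetical bulk_insert() and `records` table sketched earlier, would settle
it:

  #include <stdio.h>
  #include <time.h>
  #include "sqlite3.h"

  /* Time a 1,000,000-row load at several batch sizes.  time() is coarse,
     but these runs take minutes anyway. */
  static void find_batch_size(sqlite3 *db) {
      int sizes[] = { 500, 5000, 50000, 1000000 };
      for (int i = 0; i < 4; i++) {
          sqlite3_exec(db, "DELETE FROM records;", NULL, NULL, NULL);
          time_t t0 = time(NULL);
          bulk_insert(db, 1000000, sizes[i]);
          printf("batch %7d: %ld seconds\n",
                 sizes[i], (long)(time(NULL) - t0));
      }
  }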