> On modern PCs, SQLite's page cache does not have a large effect because
> of the file cache of the OS, but you should do, e.g.,
> PRAGMA cache_size = -1000000;
> for a ~1 GB cache (the default is only a few MB).
Hitting the page cache is much cheaper as it does not involve a system call.
Try opening [...]
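For concreteness, the setting recommended above looks like this (the value is illustrative; a negative argument is interpreted as KiB rather than pages):

    -- Negative values give the cache size in KiB; positive values, in pages.
    PRAGMA cache_size = -1000000;   -- roughly a 1 GB page cache
    PRAGMA cache_size;              -- read back the current setting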
Yuri wrote:
> I followed every piece of advice I could find:
> * Database is written into memory
> * The whole insertion happens in one transaction, in one giant BEGIN
> TRANSACTION; ... END TRANSACTION; block.
> * Foreign keys are deferred: PRAGMA defer_foreign_keys=ON;
> * Journal is disabled: PRAGMA journal_mode=OFF;
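Taken together, the settings Yuri lists amount to something like the following sketch (the table and column names are hypothetical, and the database would be opened as ":memory:" to keep it in RAM):

    PRAGMA journal_mode = OFF;        -- no rollback journal
    BEGIN TRANSACTION;
    PRAGMA defer_foreign_keys = ON;   -- FK checks postponed until COMMIT
    INSERT INTO t(id, val) VALUES (1, 'a');
    -- ... millions of similar INSERTs ...
    END TRANSACTION;                  -- deferred FK checks run here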
On 2015-08-28 04:15 AM, Yuri wrote:
> I am building a fairly large DB, with major tables having 800k..5M rows,
> with several relationships between tables.
>
> At ~30% into the insertion process it slows down rapidly. Records
> #171k..172k are still fast, and records #172k..173k are already ~10
> times slower. [...]
On 28 Aug 2015, at 3:45am, Yuri wrote:
> My computer has 24 GB of RAM, of which 4 GB is free. At the time of the
> problem the size of the process is 325 MB, and the size of the complete DB is
> ~250 MB. So this isn't it.
>
> What's worth mentioning though is that at the time of the event in question [...]
On 28 Aug 2015, at 3:15am, Yuri wrote:
> * Database is written into memory
> [...]
> How can I understand why the slowdown occurs? Especially, why is the
> slowdown so "sharp"?
Your computer has a certain amount of free memory. Once your database is
bigger than that size the computer has to keep moving parts of the database
into storage so it has room for the new data.
Thanks to everybody who made suggestions.
There was a legitimate constraint violation caused by a bug in the
importing program. The program was erroneously inserting a zero integer into
a field that is both the leading part of the primary key (possibly
causing its non-uniqueness) and a foreign key [...]
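In miniature, the failure mode would look something like this (the schema here is hypothetical):

    -- A stray zero in the leading PK column produces duplicate keys and,
    -- with deferred FKs, a dangling reference caught only at COMMIT.
    CREATE TABLE parent(id INTEGER PRIMARY KEY);
    CREATE TABLE child(
      parent_id INTEGER,              -- leading part of the PK, also an FK
      seq       INTEGER,
      PRIMARY KEY (parent_id, seq),
      FOREIGN KEY (parent_id) REFERENCES parent(id)
    );
    INSERT INTO child VALUES (0, 1);
    INSERT INTO child VALUES (0, 1);  -- fails: UNIQUE constraint failed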
How many indexes on your data?
Can you DROP your indexes for the insert, then CREATE INDEX them again
after all the content is in place?
--
D. Richard Hipp
drh at sqlite.org
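Schematically, the drop-then-recreate pattern being suggested is (index and table names are hypothetical):

    DROP INDEX IF EXISTS idx_t_val;    -- shed the index before bulk loading
    -- ... perform all the INSERTs ...
    CREATE INDEX idx_t_val ON t(val);  -- rebuild once, in a single pass
    ANALYZE;                           -- optionally refresh planner statistics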
I'm with Simon. If you're writing 5 million rows to memory, the OS has to
move unused memory out to disk.
I'd say write the data to disk (SSD if you can) and check whether the data
is written out at a consistent speed.
On Thu, Aug 27, 2015 at 10:29 PM, Simon Slavin wrote:
>
> On 28 Aug 2015, at 3: [...]
On 08/27/2015 19:45, Richard Hipp wrote:
> How many indexes on your data?
>
> Can you DROP your indexes for the insert, then CREATE INDEX them again
> after all the content is in place?
Only the indexes needed during the insert are in the DB. Only 2 indexes
exist, plus there are several "unique" constraints.
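Worth noting: each UNIQUE constraint is backed by an implicit index that SQLite must update on every INSERT, so it costs about as much as an explicit one. A quick way to see them (the table is hypothetical):

    CREATE TABLE t(a INTEGER, b TEXT UNIQUE);
    -- the automatic index shows up as e.g. "sqlite_autoindex_t_1"
    SELECT name FROM sqlite_master WHERE type = 'index';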
On 08/27/2015 19:29, Simon Slavin wrote:
> Your computer has a certain amount of free memory. Once your database is
> bigger than that size the computer has to keep moving parts of the database
> into storage so it has room for the new data.
>
> I will guess that if you got more RAM in your computer [...]
I am building a fairly large DB, with major tables having 800k..5M rows, with
several relationships between tables.
At ~30% into the insertion process it slows down rapidly. Records
#171k..172k are still fast, and records #172k..173k are already ~10
times slower (all records are more or less similar).