It was as simple as
PRAGMA cache_size = 1000000;
I want one *million* database disk pages.
Thx,
Doug
On Tue, Apr 26, 2011 at 2:44 PM, Douglas Eck <d...@google.com> wrote:
> Hi all,
> I have a table like this:
> CREATE TABLE entity (
> id BIGINT NOT NULL,
> [...]
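A minimal sketch of that fix using Python's stdlib sqlite3 module (the index name idx_entity_a_id is an assumption, not from the thread):

import sqlite3

con = sqlite3.connect("my.db")
# A positive cache_size is a page count: ask for one million pages so the
# index B-tree under construction stays in memory instead of thrashing disk.
con.execute("PRAGMA cache_size = 1000000")
con.execute("CREATE INDEX idx_entity_a_id ON entity (a_id)")
con.commit()
con.close()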
Hi all,
I have a table like this:
CREATE TABLE entity (
id BIGINT NOT NULL,
a_id VARCHAR(64),
b_id VARCHAR(64),
path VARCHAR,
<more fields>,
created_at TIMESTAMP,
PRIMARY KEY (id)
);
3.1M records.
When I create an index on a_id, it is very slow.
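To see the effect of the cache setting from the reply above, a sketch that times the slow step (Python sqlite3; the index name is hypothetical):

import sqlite3
import time

con = sqlite3.connect("my.db")
start = time.time()
con.execute("CREATE INDEX idx_entity_a_id ON entity (a_id)")  # the slow step
con.commit()
print("index build took %.1fs" % (time.time() - start))
con.close()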
I fixed the slowness by sorting the SQL statements in ascending key
order and dumping them to a new text file, then running sqlite3 on the
new text file. That *really* helped :-)
-Doug
On Mon, Mar 21, 2011 at 5:47 PM, Douglas Eck <d...@google.com> wrote:
> Hi all,
> I've got a reasonably large dataset to load via SQL INSERT statements [...]
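A rough sketch of that pre-sort, assuming one INSERT per line and that the integer primary key is the first value after VALUES (both assumptions; file names are hypothetical):

import re

def key_of(stmt):
    # Treat the first value in the VALUES list as the integer primary key.
    m = re.search(r"VALUES\s*\(\s*(\d+)", stmt, re.IGNORECASE)
    return int(m.group(1)) if m else 0

with open("inserts.sql") as f:
    stmts = [line for line in f if line.lstrip().upper().startswith("INSERT")]

# Ascending key order means the table's B-tree is appended to in order
# rather than updated with random page writes.
stmts.sort(key=key_of)

with open("inserts_sorted.sql", "w") as f:
    f.writelines(stmts)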
Hi all,
I've got a reasonably large dataset to load via SQL INSERT statements
(3.3M INSERTs spread across 4 or 5 tables) into an empty database.
The DB is created using SQLAlchemy. I then drop all indexes using SQL
statements. The inserts are performed by calling
sqlite3 my.db < <file of INSERT statements>
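A sketch of that load step in Python rather than the shell (file names hypothetical). For a one-shot load of an empty, rebuildable database, the usual levers are relaxed durability pragmas plus one big transaction:

import sqlite3

con = sqlite3.connect("my.db")
con.execute("PRAGMA journal_mode = OFF")  # no rollback journal for a throwaway load
con.execute("PRAGMA synchronous = OFF")   # skip fsync; acceptable only if the DB can be rebuilt
with open("inserts_sorted.sql") as f:
    # Wrap the whole file in one transaction; assumes the dump contains
    # no BEGIN/COMMIT of its own.
    con.executescript("BEGIN;\n" + f.read() + "\nCOMMIT;")
con.close()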