We tend to use the database append-only. We delete a small amount of
transient data as we go; this transient data maintains a bit of state
between appends. We're talking 300-500 rows of a table.
It's not read-only: we add a lot of data every 3-5 minutes, but it's all
driven from a single Perl process that captures XML data, processes it
as only Perl can, and then updates the database. Nothing else touches
the database between those intervals. No foreign keys at all.
The schema is quite long but rather simplistic. There are lots of
tables, all referenced by a big central table. Since a lot of our data
is common and referenced thousands of times by different rows, we needed
to factor it out to reduce the amount of data, or else the main database
would have more than tripled in size to over 100GB.
It's 1,200 lines long, so not really suitable for posting here, but the
main table is:
-- ----------------------------
-- Table structure for Disruptions
-- ----------------------------
DROP TABLE IF EXISTS "Disruptions";
CREATE TABLE "Disruptions" (
  "id" INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
  "version" INTEGER NOT NULL,
  "Disruption_id" INTEGER NOT NULL,
  "status" INTEGER NOT NULL,
  "severity" INTEGER NOT NULL,
  "levelOfInterest" INTEGER NOT NULL,
  "category" INTEGER NOT NULL,
  "subCategory" INTEGER NOT NULL,
  "startTime" TEXT NOT NULL,
  "endTime" TEXT NOT NULL,
  "location" INTEGER NOT NULL,
  "corridor" INTEGER NOT NULL,
  "comments" INTEGER NOT NULL,
  "currentUpdate" INTEGER NOT NULL,
  "remarkTime" TEXT NOT NULL,
  "lastModTime" TEXT NOT NULL,
  "CauseAreaPointX" REAL NOT NULL,
  "CauseAreaPointY" REAL NOT NULL,
  "Direction" TEXT
);
INSERT INTO "main".sqlite_sequence (name, seq) VALUES ('Disruptions', 0);
All the integer fields are really indexes into other tables. We made a
deliberate decision to move some of the more complex logic out of SQL
and into Perl, as it sat better there. That was no reflection on SQLite;
we would have made the same decision with Postgres, MySQL or Oracle.
Right tool for the right job, etc.
Rob
On 2 Oct 2016, at 22:47, Domingo Alvarez Duarte wrote:
Hello Rob !
How do you use those big databases ?
Append only ? Read only ? Constant read write ? Foreign keys ?
Could you show the schema to talk about it ?
Cheers !
On 02/10/16 17:44, Rob Willett wrote:
We have production databases of 30-50GB and have no issues managing
them. Mind you, we have more memory, though only 8GB, as that's all our
VM provider allows us.
After saying that, we have never seen any performance issues that
weren't due to our schemas or our bad design. Our working assumption
is that if there's a problem, it's our fault before we blame SQLite.
So far that working assumption has held true :)
We did some tests (just for the hell of it) up to 200GB with dummy data
and had no issues that I can recall.
I know that other people on this list have far, far bigger production
databases than us.
I'm not a SQLite expert but I look on machines with 2GB of main
memory as rather small for this sort of thing.
Rob
On 2 Oct 2016, at 21:35, Domingo Alvarez Duarte wrote:
Hello !
I'm still fighting to use sqlite with a 20GB database, and now I'm
trying to update a table with 980M rows, but sqlite is eating all my
memory (2GB) and making my computer thrash (too much swap).
I'm in doubt whether sqlite can really manage databases above 1GB.
Does anyone have experience with big databases?
Cheers !
The query:
--explain query plan
update bolsas_familia set favorecido_id=(select id from favorecidos
where nis=favorecido_id);
Query plan:
selectid|order|from|detail
0|0|0|SCAN TABLE bolsas_familia
0|0|0|EXECUTE CORRELATED SCALAR SUBQUERY 0
0|0|0|SEARCH TABLE favorecidos USING COVERING INDEX
favorecidos_nis_idx (nis=?)
The schema is:
CREATE TABLE bolsas_familia(
  id integer primary key,
  favorecido_id integer not null,
  valor_parcela numeric,
  mes_competencia integer
); -- 980M rows

CREATE TABLE "favorecidos"(
  id integer primary key,
  nis integer,
  nome varchar collate nocase_slna,
  municipio_id integer,
  programa_id integer not null,
  fonte_finalidade_id integer not null,
  first_ano_mes integer
); -- 14M rows

CREATE UNIQUE INDEX favorecidos_nis_idx ON favorecidos(nis);
Cheers !
_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users