On 2017/05/30 2:01 PM, Hick Gunter wrote:
If you stuff all 18MB of your data into a single INSERT, then SQLite will need 
to generate a single program that contains all 18MB of your data (plus code to 
build rows out of that). This will put a heavy strain on memory requirements 
and offset any speed you hope to gain.

The SOP is to put many (on the order of 1000) INSERT statements into one 
transaction to save disk I/O on commit.
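A minimal sketch of that batching pattern, using Python's sqlite3 module with a hypothetical table and made-up data (table name, columns, and batch size are illustrative assumptions, not from the original post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")  # hypothetical table

rows = [(i, "value-%d" % i) for i in range(10_000)]  # made-up sample data
BATCH = 1000  # roughly 1000 INSERTs per transaction, as suggested above

for start in range(0, len(rows), BATCH):
    # "with conn" wraps the batch in one transaction, so there is
    # one commit (and one disk sync) per 1000 rows instead of per row.
    with conn:
        conn.executemany(
            "INSERT INTO t (id, val) VALUES (?, ?)",
            rows[start:start + BATCH],
        )

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 10000
```

Each batch costs one journal sync instead of one per statement, which is where the speedup comes from.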

Correct, and let me just add: the /compressed/ size is 18MB of fairly compressible statements, so the real data may well be 180MB or more. That can take quite some time to build a query from.

Out of interest Sarge, did you try this on MySQL or Postgres too? What was the result?


_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users