Follow-up on this issue. (Re-post of the previous message, which included a
large .jpg. I converted the .jpg to text [see below] to make the message
smaller, and then deleted the previous post.)

This morning I used a JetBrains product called dotTrace to analyze my
application's resource usage. I loaded 7 million rows into a table.  When
I simply loaded the table, it took 12 minutes.  Pretty impressive.  When I
added 'Unique' to one of the field definitions, the load time jumped to 90
minutes.

Naturally, most of the cpu consumption was caused by the various stream and
data readers in the application.  It's easy to understand, also, that
applying unique is going to cause additional hits against the database
(ensuring the unique field value does not already exist.)  The dotTrace
output for this run follows below (converted from the original image).
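
To make the pattern concrete, here is a rough sketch of the load path,
assuming System.Data.SQLite.  The table name zip_route, the zip/route
columns, and the connection string are invented for illustration; the real
schema is different, but the UNIQUE column is the part that matters.

    // Rough sketch only; names are illustrative, not the real schema.
    using System.Data.SQLite;

    class LoadSketch
    {
        static void Main()
        {
            using (var conn = new SQLiteConnection("Data Source=addaname.db"))
            {
                conn.Open();

                // The UNIQUE constraint is what forces an index probe on every
                // INSERT; without it the same load ran in 12 minutes, with it 90.
                using (var ddl = new SQLiteCommand(
                    "CREATE TABLE IF NOT EXISTS zip_route (zip TEXT UNIQUE, route TEXT)",
                    conn))
                {
                    ddl.ExecuteNonQuery();
                }

                using (var tx = conn.BeginTransaction())
                using (var ins = new SQLiteCommand(
                    "INSERT INTO zip_route (zip, route) VALUES (@zip, @route)", conn, tx))
                {
                    // One row shown; the real loop reads millions of parsed records.
                    ins.Parameters.AddWithValue("@zip", "55401");
                    ins.Parameters.AddWithValue("@route", "R001");
                    ins.ExecuteNonQuery();
                    tx.Commit();
                }
            }
        }
    }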

What I did next was employ an internal hash in my application.  I attempt
to insert the unique value into the hash and catch the exception if it
already exists.  The run time is now 14 minutes for the 10 million records.
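
For anyone curious, the dedup amounts to something like the sketch below.
The class, method, and parameter names are invented for illustration, and
the unique value here is just a string key.

    // Sketch of screening duplicates in memory instead of with a UNIQUE index.
    using System;
    using System.Collections;
    using System.Data.SQLite;

    class DedupSketch
    {
        // 'insert' is a prepared INSERT command with @zip/@route parameters
        // already added; 'seen' holds every unique value inserted so far.
        static void InsertIfNew(SQLiteCommand insert, Hashtable seen,
                                string zip, string route)
        {
            try
            {
                // Hashtable.Add throws ArgumentException when the key already
                // exists, which stands in for the unique-index probe SQLite
                // would otherwise do on every INSERT.
                seen.Add(zip, true);
            }
            catch (ArgumentException)
            {
                return;   // duplicate value: skip the INSERT entirely
            }

            insert.Parameters["@zip"].Value = zip;
            insert.Parameters["@route"].Value = route;
            insert.ExecuteNonQuery();
        }
    }

(A HashSet<string> would do the same job without throwing, since its Add
simply returns false on a duplicate, but even the exception-based version
brought the run down to the 14 minutes above.)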



Thread #1 • 30,628 ms
  99.38 % Main • 30,437 ms • CDG.AddAName.AAN.Main(String[])
    99.38 % InitializePools • 30,437 ms • CDG.AddAName.AddPoolController.InitializePools
      99.38 % LoadPool • 30,437 ms • CDG.AddAName.AddPool.LoadPool
        55.32 % save_rec_to_db • 16,45 ms • CDG.AddAName.TblZipRoute.save_rec_to_db(Hashtable, String)
          55.01 % ExecuteNonQuery • 16,850 ms • System.Data.SQLite.SQLiteCommand.ExecuteNonQuery
            54.81 % ExecuteReader • 16,786 ms • System.Data.SQLite.SQLiteCommand.ExecuteReader(CommandBehavior)
              54.75 % NextResult • 16,770 ms • System.Data.SQLite.SQLiteDataReader.NextResult
                52.42 % Step • 16,056 ms • System.Data.SQLite.SQLite3.Step(SQLiteStatement)
                 2.17 % GetStatement • 665 ms • System.Data.SQLite.SQLiteCommand.GetStatement(Int32)
                 0.05 % Reset • 16 ms • System.Data.SQLite.SQLite3.Reset(SQLiteStatement)
              0.05 % CheckClosed • 16 ms • System.Data.SQLite.SQLiteDataReader.CheckClosed
              0.05 % InitializeForReader • 16 ms • System.Data.SQLite.SQLiteCommand.InitializeForReader
            0.10 % Dispose • 32 ms • System.Data.Common.DbDataReader.Dispose
            0.10 % NextResult • 32 ms • System.Data.SQLite.SQLiteDataReader.NextResult
          0.26 % get_Item • 79 ms • System.Collections.Hashtable.get_Item(Object)
          0.05 % Eligible • 16 ms • CDG.AddAName.Global.Eligible(Hashtable)
        43.95 % get_parsed_record • 13,451 ms • CDG.AddAName.AddPool.get_parsed_record
          43.95 % GetParsedRecord • 13,461 ms • CDG.Util.FL_FirstLogic.FL_InputFile.GetParsedRecord
            42.87 % ReadRecord • 13,129 ms • CDG.Util.FL_FirstLogic.FL_InputFile.ReadRecord
              42.07 % ReadLine • 13,129 ms • System.IO.StreamReader.ReadLine
            1.08 % Parse_Record • 332 ms • CDG.Util.FL_FirstLogic.Parse_FmtDef.Parse_Record(String)
        0.62 % [Native or optimized code] • 191 ms
Thread #2 • 30,628 ms
  100.00 % [Native or optimized code] • 30,628 ms


On Thu, Feb 23, 2012 at 4:02 PM, Don V Nielsen <donvniel...@gmail.com> wrote:

> Follow-up on this issue.
>
> This morning I used a JetBrains product called dotTrace to analyze my
> application's resource usage. I loaded 7 million rows into a table.  When
> I simply loaded the table, it took 12 minutes.  Pretty impressive.  When I
> added 'Unique' to one of the field definitions, the load time jumped to 90
> minutes.
>
> Naturally, most of the cpu consumption was caused by the various stream
> and data readers in the application.  It's easy to understand, also, that
> applying unique is going to cause additional hits against the database
> (ensuring the unique field value does not already exist.)  The attached
> image is the dotTrace output for this run.
>
> What I did next was employ an internal hash in my application.  I attempt
> to insert the unique value into the hash and catch the exception if it
> already exists.  The run time is now 14 minutes for the 10 million records.
>
>
> On Thu, Feb 9, 2012 at 8:46 AM, Black, Michael (IS) <
> michael.bla...@ngc.com> wrote:
>
>> I think you may find you're running into buffer cache limits (not sqlite
>> but OS limits).
>>
>>
>>
>> So the 1st third all fits into buffer cache.  Once it starts committing
>> to disk things slow down a LOT.
>>
>>
>>
>> Since you're not showing any real times, it's hard to say whether you are
>> any slower than anybody else.
>>
>>
>>
>> I saw similar behavior on a project I was doing and it all boiled down to
>> disk write speed once things started going to disk.
>>
>>
>>
>> Michael D. Black
>>
>> Senior Scientist
>>
>> Advanced Analytics Directorate
>>
>> Advanced GEOINT Solutions Operating Unit
>>
>> Northrop Grumman Information Systems
>>
>> ________________________________
>> From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org]
>> on behalf of Don V Nielsen [donvniel...@gmail.com]
>> Sent: Thursday, February 09, 2012 8:14 AM
>> To: General Discussion of SQLite Database
>> Subject: EXT :Re: [sqlite] Inserts get slower and slower
>>
>> I've noticed a similar thing happening.  The first 1/3rd loads quickly;
>> the remaining 2/3rds stagnate.  It appears that there is some kind of
>> bottleneck happening.  I thought it was the SAN.
>>
>> My application begins a transaction, does all its inserts, and then
>> commits.  There could be millions in the transaction.  Would it be better
>> to commit in batches, say 250m or 500m?
>>
>> Now's the time for me to make these changes, as the application is being
>> prep'd for production.
>>
>> dvn
>>
>> On Wed, Feb 8, 2012 at 4:29 PM, Simon Slavin <slav...@bigfraud.org>
>> wrote:
>>
>> >
>> > On 8 Feb 2012, at 10:22pm, Oliver Peters wrote:
>> >
>> > > It's the Primary Key that you're using, because for every INSERT it is
>> > > checked whether unix_time is already present in a record.
>> > >
>> > > So the question is whether you really need unix_time as a PK.
>> >
>> > If you're batching your INSERTs up into transactions, try doing a VACUUM
>> > after each COMMIT.
>> >
>> > Simon.
>> >
>>
>
>
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
