--- On Thu, 9/9/10, Black, Michael (IS) <michael.bla...@ngc.com> wrote:

> From: Black, Michael (IS) <michael.bla...@ngc.com>
> Subject: Re: [sqlite] In memory database and locking.
> To: "General Discussion of SQLite Database" <sqlite-users@sqlite.org>
> Date: Thursday, September 9, 2010, 9:16 AM
> I've never seen an application that
> would run faster in ANY database vs custom code. 
> Databases are for generic query problems...not the end-all
> to "store my data" when speed is a concern.
> 

I will see how SQLite works out; otherwise I will write my own hash code.


> I've pointed out a few times on this list, where people were
> concerned about speed, results like a 30X speedup from rolling
> your own.  I've done network data acquisition like this
> before, and I'll guarantee that you will never keep up with a
> data burst on a gigabit network.  I don't even think
> winpcap can do that, let alone a database.
> 
> I don't think you need to run your trigger every
> minute.  Just run it on every insert.  I think the
> delete will be notably faster than the insert, and you won't
> notice the difference vs running every 60 seconds.
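
Something like the following is roughly what I take this to mean; the
"packets" table, the "ts" column (seconds since the epoch), and the
trigger name are placeholder names of mine, not anything from this
thread:

#include <sqlite3.h>
#include <stdio.h>

/* Hypothetical sketch: install a trigger that prunes rows older than
 * 60 seconds each time a new row is inserted.  "packets" and "ts"
 * (seconds since the epoch) are assumed names. */
static int install_prune_trigger(sqlite3 *db)
{
    const char *sql =
        "CREATE TRIGGER IF NOT EXISTS prune_old AFTER INSERT ON packets "
        "BEGIN DELETE FROM packets WHERE ts < NEW.ts - 60; END;";
    char *err = NULL;
    int rc = sqlite3_exec(db, sql, NULL, NULL, &err);
    if (rc != SQLITE_OK) {
        fprintf(stderr, "trigger creation failed: %s\n", err);
        sqlite3_free(err);
    }
    return rc;
}
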
> 
> What you will want to do is only do a commit every so
> often...I think you stated you're doing a commit every
> packet which would be dog slow.
> 

I agree with you that committing on every insert would be too slow. I am going to
experiment with different commit intervals and see where I get the best performance.
Right now I have one process doing the inserts, and it does a delete every 5000 rows.
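
Roughly the shape of the batching I am experimenting with (a sketch
only; BATCH_SIZE and the already-bound prepared statement are
placeholders):

#include <sqlite3.h>

/* Rough sketch of batching: commit only once every BATCH_SIZE inserts
 * instead of once per packet.  BATCH_SIZE and the prepared "ins"
 * statement (bound by the caller) are placeholders for illustration. */
#define BATCH_SIZE 5000

static void insert_batched(sqlite3 *db, sqlite3_stmt *ins, long n_done)
{
    if (n_done % BATCH_SIZE == 0)                 /* start a new batch */
        sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);

    sqlite3_step(ins);           /* constraint handling omitted here */
    sqlite3_reset(ins);

    if ((n_done + 1) % BATCH_SIZE == 0)           /* batch full: flush */
        sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
}
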


> 
> -----Original Message-----
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org]
> On Behalf Of Hemant Shah
> Sent: Thursday, September 09, 2010 8:57 AM
> To: General Discussion of SQLite Database
> Subject: EXTERNAL:Re: [sqlite] EXTERNAL: In memory database
> and locking.
> 
> How do I set up a trigger to run every minute?
> 
> I thought about writing my own hash code, but figured SQLite or
> another in-memory database would work.  The in-memory database
> seems to keep up with the incoming traffic.
> 
> Hemant Shah
> 
> E-mail: hj...@yahoo.com
> 
> --- On Thu, 9/9/10, Black, Michael (IS) <michael.bla...@ngc.com>
> wrote:
> 
> From: Black, Michael (IS) <michael.bla...@ngc.com>
> Subject: Re: [sqlite] EXTERNAL: In memory database and
> locking.
> To: "General Discussion of SQLite Database" <sqlite-users@sqlite.org>
> Date: Thursday, September 9, 2010, 7:48 AM
> 
> Have you considered doing your cleanup in a trigger?  I assume
> you're already using transactions for your inserts.  I wouldn't
> think it would be much slower doing it on every insert, as you'd
> be deleting a much smaller set every time.
> 
> This is really a LOT faster if you just hash your info and
> then
> periodically walk the hash table to delete old stuff.  A
> database is
> never going to keep up with a gigabit interface.
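
If I do end up rolling my own, this is roughly the shape I have in
mind for the hash table; the bucket count, key format and 60-second
window are all placeholder choices of mine:

#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Rough sketch of the hash-table alternative: a fixed-size table of
 * chained entries keyed by a packet-key string, plus a sweep that
 * drops entries older than 60 seconds.  All names are illustrative. */

#define NBUCKETS 65536

struct entry {
    char key[64];          /* whatever uniquely identifies a packet  */
    time_t ts;             /* arrival time, seconds since the epoch  */
    struct entry *next;
};

static struct entry *buckets[NBUCKETS];

static unsigned hash(const char *s)
{
    unsigned h = 5381;
    while (*s) h = h * 33 + (unsigned char)*s++;
    return h % NBUCKETS;
}

/* Returns 1 if the key was already present (a retransmission),
 * otherwise inserts it and returns 0. */
static int seen_before(const char *key, time_t now)
{
    unsigned b = hash(key);
    for (struct entry *e = buckets[b]; e; e = e->next)
        if (strcmp(e->key, key) == 0)
            return 1;

    struct entry *e = calloc(1, sizeof *e);
    if (!e) return 0;                 /* out of memory: treat as unseen */
    strncpy(e->key, key, sizeof e->key - 1);
    e->ts = now;
    e->next = buckets[b];
    buckets[b] = e;
    return 0;
}

/* Periodically walk the table and free entries older than 60 seconds. */
static void expire_old(time_t now)
{
    for (unsigned b = 0; b < NBUCKETS; b++) {
        struct entry **pp = &buckets[b];
        while (*pp) {
            if (now - (*pp)->ts > 60) {
                struct entry *dead = *pp;
                *pp = dead->next;
                free(dead);
            } else {
                pp = &(*pp)->next;
            }
        }
    }
}
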
> 
> -----Original Message-----
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org]
> On Behalf Of Hemant Shah
> Sent: Wednesday, September 08, 2010 10:55 PM
> To: sqlite-users@sqlite.org
> Subject: EXTERNAL:[sqlite] In memory database and locking.
> 
> Folks,
> I am trying to write an application that reads packets from
> the network and inserts them into an SQLite database.  I have a
> unique key which is a combination of a couple of columns.  I
> want to find re-transmitted packets, so I rely on the fact that
> if an insert violates the unique key constraint then I have
> found a duplicate packet.  Also, I only want to compare against
> packets received within the last minute.  One of the columns is
> a timestamp.
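
To be concrete, the table looks roughly like this; the column names
below are only illustrative, not my actual schema:

#include <sqlite3.h>

/* Illustrative schema only: the real key columns differ.  The UNIQUE
 * constraint across the key columns is what makes a duplicate insert
 * fail with SQLITE_CONSTRAINT. */
static int create_schema(sqlite3 *db)
{
    const char *sql =
        "CREATE TABLE IF NOT EXISTS packets ("
        "  src_ip   TEXT,"
        "  dst_ip   TEXT,"
        "  src_port INTEGER,"
        "  dst_port INTEGER,"
        "  seq      INTEGER,"
        "  ts       INTEGER,"          /* seconds since the epoch */
        "  UNIQUE (src_ip, dst_ip, src_port, dst_port, seq)"
        ");";
    return sqlite3_exec(db, sql, NULL, NULL, NULL);
}
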
> I am using the C API and statically link SQLite 3.7.2 with my
> application.
> Here is what I am doing.  When I start my application, it
> creates the database and table and then forks two processes.
> One process reads packets from the network and inserts
> information about them into the database; if an insert fails,
> it has found a re-transmission, and it executes a select
> statement to get the information about the previous packet and
> prints information about both packets.
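
The insert path looks roughly like this (again with placeholder column
names, and with most error handling trimmed):

#include <sqlite3.h>
#include <stdio.h>

/* Sketch of the insert path: a failed insert (SQLITE_CONSTRAINT) means
 * a retransmission, so look up the earlier packet.  Statements would
 * normally be prepared once and reused; names are illustrative. */
static void record_packet(sqlite3 *db, const char *src, const char *dst,
                          int sport, int dport, long seq, long ts)
{
    sqlite3_stmt *ins = NULL, *sel = NULL;

    sqlite3_prepare_v2(db,
        "INSERT INTO packets VALUES (?,?,?,?,?,?)", -1, &ins, NULL);
    sqlite3_bind_text (ins, 1, src, -1, SQLITE_STATIC);
    sqlite3_bind_text (ins, 2, dst, -1, SQLITE_STATIC);
    sqlite3_bind_int  (ins, 3, sport);
    sqlite3_bind_int  (ins, 4, dport);
    sqlite3_bind_int64(ins, 5, seq);
    sqlite3_bind_int64(ins, 6, ts);

    if (sqlite3_step(ins) == SQLITE_CONSTRAINT) {
        /* Duplicate key: fetch the earlier packet's timestamp. */
        sqlite3_prepare_v2(db,
            "SELECT ts FROM packets WHERE src_ip=? AND dst_ip=? "
            "AND src_port=? AND dst_port=? AND seq=?", -1, &sel, NULL);
        sqlite3_bind_text (sel, 1, src, -1, SQLITE_STATIC);
        sqlite3_bind_text (sel, 2, dst, -1, SQLITE_STATIC);
        sqlite3_bind_int  (sel, 3, sport);
        sqlite3_bind_int  (sel, 4, dport);
        sqlite3_bind_int64(sel, 5, seq);
        if (sqlite3_step(sel) == SQLITE_ROW)
            printf("retransmission: first seen at %lld, again at %ld\n",
                   (long long)sqlite3_column_int64(sel, 0), ts);
        sqlite3_finalize(sel);
    }
    sqlite3_finalize(ins);
}
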
> The other process wakes up every 60 seconds and deletes all
> rows whose timestamp column is less than (current timestamp -
> 60).  The timestamp is the number of seconds since the epoch.
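
And the cleanup is roughly this (same caveats about names):

#include <sqlite3.h>
#include <time.h>

/* Sketch of the 60-second cleanup: delete rows whose ts column is
 * older than one minute.  Table and column names are illustrative. */
static int delete_old_rows(sqlite3 *db)
{
    sqlite3_stmt *del = NULL;
    int rc = sqlite3_prepare_v2(db,
        "DELETE FROM packets WHERE ts < ?", -1, &del, NULL);
    if (rc != SQLITE_OK)
        return rc;
    sqlite3_bind_int64(del, 1, (sqlite3_int64)time(NULL) - 60);
    rc = sqlite3_step(del);          /* SQLITE_DONE on success */
    sqlite3_finalize(del);
    return rc;
}
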
> The first process is constantly inserting rows into the
> database, so the other process cannot delete any rows.  When I
> use :memory: for the database I do not get any error, but it
> does not delete any rows, as the memory footprint of my program
> keeps increasing.  If I use a file for the database I get an
> error that the database is locked.
> Both of these processes are siblings and share the same
> database handle.  When I read the documentation I found that an
> in-memory database always uses an EXCLUSIVE lock.
> How do I solve this problem?
> Thanks.
> 
> 
> Hemant Shah
> 
> E-mail: hj...@yahoo.com
> 
> 
> 



Hemant Shah
E-mail: hj...@yahoo.com



      
