On 08/08/2011 06:34 PM, sreekumar...@gmail.com wrote:
From the point of view of performance, I expected similar performance, with tmpfs
being a little slower due to filesystem overhead. However, the operations on
tmpfs were much slower than expected.
Journal mode is WAL
--Original Message--
From: Roger Binns
Sender: sqlite-users-boun...@sqlite.org
To: General Discussion of SQLite Database
ReplyTo: General Discussion of SQLite Database
Subject: Re: [sqlite] In memory v/s tmpfs
Sent: Aug 9, 2011 2:42 PM
Environment: Windows XP, MinGW+MSYS
ICU configured with:
sh runConfigureICU MinGW --enable-static --enable-shared
make && make install
Produces several libraries (both static and shared).
Attempting to compile sqlite-amalgamation-3070701:
gcc -o sqlite3 sqlite3.c shell.c
Dear experts,
Can anyone please tell me if there is a limit to the number of tables that
can be held in a single data file? I am considering an application that
will require a table for every minute in a day, i.e. 3600+ tables in a
single database or data file.
Regards,
Jaco
Sounds to me like you're over-normalizing things.
You'll never want to do a query spanning multiple minutes in your described
setup?
http://www.sqlite.org/limits.html
The main limit that applies to your question is a maximum of 64 tables in a join.
But you apparently never plan on joining.
Jaco Breitenbach jjbreitenb...@gmail.com wrote:
Can anyone please tell me if there is a limit to the number of tables that
can be held in a single data file? I am considering an application that
will require a table for every minute in a day, i.e. 3600+ tables in a
single database or data
Hi Igor and Michael,
Yes, of course, 1440 minutes in a day. :-)
I am building an application that filters out duplicate input data by
generating an MD5 hash of each input, and implicitly comparing that against
a set of keys already stored in the SQLite database by doing an insert into
a
Have you ever considered using a NoSQL database? I think it would serve you
better.
2011/8/9 Jaco Breitenbach jjbreitenb...@gmail.com
Hi Igor and Michael,
Yes, of course, 1440 minutes in a day. :-)
I am building an application that filters out duplicate input data by
generating an MD5 hash of
Jaco Breitenbach jjbreitenb...@gmail.com wrote:
I am building an application that filters out duplicate input data by
generating an MD5 hash of each input, and implicitly comparing that against
a set of keys already stored in the SQLite database by doing an insert into
a unique-indexed table.
Yes, but each input record also contains a timestamp that can be used to
identify the relevant table.
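The insert-and-detect-conflict scheme Jaco describes can be sketched against a unique-keyed SQLite table with `INSERT OR IGNORE` (a minimal Python sketch; the `seen` table and column names are illustrative, not from the original application):

```python
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE seen (digest BLOB PRIMARY KEY)")

def is_duplicate(record: bytes) -> bool:
    """Insert the MD5 of the record; report whether it was already present."""
    digest = hashlib.md5(record).digest()
    cur = conn.execute("INSERT OR IGNORE INTO seen (digest) VALUES (?)",
                       (digest,))
    # rowcount is 0 when the insert was ignored, i.e. the key already existed.
    return cur.rowcount == 0

print(is_duplicate(b"payload-1"))  # False: first sighting
print(is_duplicate(b"payload-1"))  # True: filtered as a duplicate
```

The unique index does the comparison implicitly, which is exactly the "insert into a unique-indexed table" approach from the thread.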
On 9 August 2011 14:43, Igor Tandetnik itandet...@mvps.org wrote:
Jaco Breitenbach jjbreitenb...@gmail.com wrote:
I am building an application that filters out duplicate input data by
Hi Gabriel,
Is there such a database that is both free and non-GPL that you can
recommend?
Jaco
On 9 August 2011 14:38, gabriel.b...@gmail.com gabriel.b...@gmail.comwrote:
Have you ever considered using a NoSQL database? I think it would serve you
better.
2011/8/9 Jaco Breitenbach
MongoDB
http://www.mongodb.org/
Michael D. Black
Senior Scientist
NG Information Systems
Advanced Analytics Directorate
From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on
behalf of Jaco Breitenbach
I would suggest MongoDB.
Just use it from its binary packages and don't worry. As it says:
If you are using a vanilla MongoDB server from either source or binary
packages you have NO obligations. You can ignore the rest of this page.
http://www.mongodb.org/display/DOCS/Licensing
2011/8/9
Journal mode is WAL
I believe an in-memory database can't have journal mode WAL, so you are
comparing two completely different settings.
Pavel
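Pavel's point is easy to check from Python's stdlib `sqlite3` module: asking an in-memory database for WAL silently leaves it in its `memory` journal mode, so the tmpfs database (real WAL) and the `:memory:` database were never running under the same journaling settings.

```python
import sqlite3

mem = sqlite3.connect(":memory:")
# The pragma reports the mode actually in effect, not the one requested.
mode = mem.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # "memory", not "wal"
```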
On Tue, Aug 9, 2011 at 5:15 AM, sreekumar...@gmail.com wrote:
Journal mode is WAL
On Tue, Aug 9, 2011 at 9:27 AM, Jaco Breitenbach jjbreitenb...@gmail.comwrote:
Unfortunately the performance rate of the inserts
into the indexed tables decreases significantly as the number of records in
the tables increases. This seems to be because of a CPU bottleneck rather
than I/O
On 9 Aug 2011, at 2:27pm, Jaco Breitenbach wrote:
The problem that I'm facing, is that I would ultimately need to process
1,000,000,000 records a day, with history to be kept for up to 128 days. I
am currently creating a new data file per day, with hourly tables. However,
that will
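The per-day file / per-hour table partitioning Jaco mentions amounts to routing each record by its timestamp; a hypothetical sketch (file and table naming is illustrative, not from the original application):

```python
import datetime

def target_names(ts: datetime.datetime):
    # One database file per day, one table per hour within that file.
    return (f"events_{ts:%Y%m%d}.db", f"h{ts:%H}")

print(target_names(datetime.datetime(2011, 8, 9, 14, 27)))
# ('events_20110809.db', 'h14')
```

Dropping a whole day's file is then a single unlink, which sidesteps deleting 1/128th of a huge table.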
If non-GPL is a firm requirement, you might also look at CouchDB (
http://couchdb.org).
On Tue, Aug 9, 2011 at 07:02, gabriel.b...@gmail.com gabriel.b...@gmail.com
wrote:
I would suggest MongoDB.
Just use it from its binary packages and don't worry. As it says:
If you are using a
2011/8/9 Jaco Breitenbach jjbreitenb...@gmail.com:
I am building an application that filters out duplicate input data by
generating an MD5 hash of each input, and implicitly comparing that against
a set of keys already stored in the SQLite database by doing an insert into
a unique-indexed
Richard Hipp writes:
This is a locality of reference problem. The caching mechanisms (both in
SQLite and in the filesystem of your computer) begin to break down when the
size of the database exceeds available RAM. And when the cache stops
working well, you have to wait on physical I/O, which
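One related knob worth knowing about: SQLite's per-connection page cache can be enlarged with `PRAGMA cache_size`, where a negative value is a budget in KiB. This only postpones the breakdown Richard describes (it helps while the hot part of the index still fits in RAM); a minimal sketch using an in-memory stand-in for the real database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the real on-disk database
# Negative values are KiB: -262144 requests roughly 256 MiB of page cache.
conn.execute("PRAGMA cache_size = -262144")
size = conn.execute("PRAGMA cache_size").fetchone()[0]
print(size)  # -262144
```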
We have an application that does not scale well, especially at 8 and 16
threads, where we are getting about 10% at best to -50% at worst (2 and 4
threads are okay, approx. 40-50%).
We have narrowed the problem down to the SQLite code.
Looking for best practices from anyone that overcame scaling issues.
First
On 9 Aug 2011, at 9:46pm, Drew Kozicki wrote:
The application is a C++ app that accesses all the databases read-only.
Database sizes range from 1 MB to 10 GB.
Each thread does the following in a loop
1. Grab a record from an external system (not SQLite)
2. Runs several
On Tue, Aug 9, 2011 at 4:46 PM, Drew Kozicki drewkozi...@gmail.com wrote:
We have an application that does not scale well, especially at 8 and 16
threads, where we are getting about 10% at best to -50% at worst (2 and 4
threads are okay, approx. 40-50%).
Open a separate database connection for each thread.
Hello Drew,
Why multiple threads? What kind of performance do you get if you only
use a single thread?
Is it one thread per database perhaps?
C
Tuesday, August 9, 2011, 4:46:13 PM, you wrote:
DK We have an application that does not scale well especially 8 and 16
DK threads we are getting
2011/8/9 David Garfield garfi...@irving.iisd.sra.com:
Having said that, let me present a database for consideration: Any
filesystem. Split the hex of the MD5 into directory levels and make
what you need. Might be slower, particularly with some OSes, but the
tools are easy.
--David Garfield
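David's filesystem-as-database idea can be sketched directly: split the hex MD5 into directory levels so no single directory grows huge, and let the file's existence be the duplicate test. A minimal Python sketch (the two-level fan-out and the `store` helper are illustrative choices):

```python
import hashlib
import os
import tempfile

root = tempfile.mkdtemp()  # stand-in for the store's base directory

def store(record: bytes) -> bool:
    """Return True if the record was new, False if it was a duplicate."""
    hexdigest = hashlib.md5(record).hexdigest()
    # Fan out on the first hex characters, e.g. ab/cd/abcd1234...,
    # giving two levels of at most 256 entries each.
    d = os.path.join(root, hexdigest[:2], hexdigest[2:4])
    os.makedirs(d, exist_ok=True)
    path = os.path.join(d, hexdigest)
    if os.path.exists(path):
        return False
    with open(path, "wb") as f:
        f.write(record)
    return True

print(store(b"payload"))  # True: first sighting
print(store(b"payload"))  # False: duplicate
```

As David notes, this might be slower than SQLite on some OSes, but every standard file tool works on it.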