Monday, June 19, 2006, 07:37:22, Manzoor Ilahi Tamimy wrote:
> The Database Size is more than 500 MB.
> It contains one table and about 10 million records.
I had problems with even more records (roughly 25 million, > 1 GB of
data) and in the end I stopped my efforts to do it in pure sqlite, also
b
Manzoor Ilahi Tamimy wrote:
Hello All,
We are using SQLite for one of our projects.
The database size is more than 500 MB.
It contains one table and about 10 million records.
We are facing a problem with a select with a single join. The join is between a
big table and a small table. The small t
Hello All,
We are using SQLite for one of our projects.
The database size is more than 500 MB.
It contains one table and about 10 million records.
We are facing a problem with a select with a single join. The join is between a
big table and a small table. The small table contains records not more t
On Thu, 16 Mar 2006 [EMAIL PROTECTED] wrote:
>Sorry it took me some time to get back to this thread.
No problem. I missed your reply anyway:)
>
>- Original Message
>From: Christian Smith <[EMAIL PROTECTED]>
>
>> When your database does not fit in memory, yes, you're right, the OS may
Sorry it took me some time to get back to this thread.
- Original Message
From: Christian Smith <[EMAIL PROTECTED]>
> When your database does not fit in memory, yes, you're right, the OS may
> well get caching wrong, and in the worst way possible. Two things though:
> - SQLite does have
On Mon, 13 Mar 2006 [EMAIL PROTECTED] wrote:
> [snip aio API stuff]
>After finding out about this api, I found out that at least mysql and
>postgresql use it, so I am guessing that changing the sql engine to
>generate batches of read/writes is possible.
>
>My guess is that using this api will incr
- Original Message
> From: Joe Wilson <[EMAIL PROTECTED]>
> The disk read/write reordering optimization only works if the
> database file is contiguous on the disk and not fragmented.
> --- [EMAIL PROTECTED] wrote:
> > Basically, the db file is accessed with seek + read/write operations
On Fri, 10 Mar 2006 [EMAIL PROTECTED] wrote:
>- Original Message
>> From: Christian Smith <[EMAIL PROTECTED]>
>
>> But SQLite depends on the OS caching abilities for much of its
>> performance. Removing it is like saying SQLite is rubbish on Intel
>> processors after testing on an i486.
>
The disk read/write reordering optimization only works if the
database file is contiguous on the disk and not fragmented.
--- [EMAIL PROTECTED] wrote:
> Basically, the db file is accessed with seek + read/write operations.
> Given a set of such operations, it can be very beneficial to reorder th
It also helps to put the database file(s) on a different
physical hard drive than the hard drive of the operating
system and swap.
--- Carl Jacobs <[EMAIL PROTECTED]> wrote:
> Nicolas,
>
> From: <[EMAIL PROTECTED]>
> > On the other hand, I tried to make better use of the cache: if I run my 1M
Nicolas,
From: <[EMAIL PROTECTED]>
> On the other hand, I tried to make better use of the cache: if I run my 1M
inserts in 10 transactions of 100,000, things get a bit slower than 100
transactions of 10,000 inserts.
> I tried one transaction of 1,000,000 inserts and the test app hangs at
100% cpu
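The transaction-size tradeoff discussed above can be sketched with Python's built-in sqlite3 module. This is a toy harness, not the poster's actual test app; the table name and column are invented, and the row counts are scaled down to stay quick:

```python
import itertools
import sqlite3

def insert_batched(con, rows, batch_size):
    """Insert rows using one transaction per batch_size inserts."""
    cur = con.cursor()
    it = iter(rows)
    while True:
        batch = list(itertools.islice(it, batch_size))
        if not batch:
            break
        cur.execute("BEGIN")
        cur.executemany("INSERT INTO t(x) VALUES (?)", batch)
        cur.execute("COMMIT")

con = sqlite3.connect(":memory:")
con.isolation_level = None  # autocommit mode: we issue BEGIN/COMMIT ourselves
con.execute("CREATE TABLE t(x INTEGER)")
# e.g. 100,000 inserts as 10 transactions of 10,000 each
insert_batched(con, ((i,) for i in range(100_000)), 10_000)
print(con.execute("SELECT count(*) FROM t").fetchone()[0])  # → 100000
```

Varying the second argument of `insert_batched` reproduces the experiment in the quoted message: very small batches pay commit overhead per batch, while one giant transaction stresses the cache and journal.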
- Original Message
> From: Christian Smith <[EMAIL PROTECTED]>
> But SQLite depends on the OS caching abilities for much of its
> performance. Removing it is like saying SQLite is rubbish on Intel
> processors after testing on an i486.
yes and no: while it's nice to be able to rely someh
- Original Message
> From: [EMAIL PROTECTED]
> <[EMAIL PROTECTED]> wrote:
> >
> > One question though: are the file accesses "sorted", so that seeks are
> > minimised when performing a transaction (making the assumption that
> > the file is not fragmented on disk)?
> >
> I'm not sure wha
<[EMAIL PROTECTED]> wrote:
>
> One question though: are the file accesses "sorted", so that seeks are
> minimised when performing a transaction (making the assumption that
> the file is not fragmented on disk)?
>
I'm not sure what you are asking. Can you restate your question?
--
D. Richard Hipp
On Thu, 9 Mar 2006 [EMAIL PROTECTED] wrote:
>- Original Message
>> From: Christian Smith <[EMAIL PROTECTED]>
>
>> Is this a likely usage scenario? Will your application regularly
>> umount/mount the filesystem between transactions? While sounding
>> facetious, I'm not trying to. Your othe
I've had similar speed increases using the same technique
you've described for the last 10 revs of SQLite. I never
had any noticeable improvement when fiddling with the
SQLite cache parameters. I just assumed it was a Linux thing.
But maybe not.
> So what I tried next was to simply run my app af
- Original Message
> From: Christian Smith <[EMAIL PROTECTED]>
> Is this a likely usage scenario? Will your application regularly
> umount/mount the filesystem between transactions? While sounding
> facetious, I'm not trying to. Your otherwise excellent example is let down
> by a probably
On Thu, 9 Mar 2006, Christian Smith wrote:
>On Wed, 8 Mar 2006 [EMAIL PROTECTED] wrote:
>
>>
>>One question though: are the file accesses "sorted", so that seeks are
>>minimised when performing a transaction (making the assumption that the
>>file is not fragmented on disk)?
>
>
>The OS will sort IO
On Wed, Mar 08, 2006 at 03:09:15PM -0800, [EMAIL PROTECTED] wrote:
[...]
> One question though: are the file accesses "sorted", so that seeks are
> minimised when performing a transaction (making the assumption that the file
> is not fragmented on disk)?
I assume you tried your tests with "PRAGMA
On Wed, 8 Mar 2006 [EMAIL PROTECTED] wrote:
>- Original Message
>From: [EMAIL PROTECTED]
>To: sqlite-users@sqlite.org
>
>> SQLite inserts records in primary key order. (That is not
>> strictly true - but it is close enough to being true for the
>> purposes of what follows.) So when y
On Tue, 7 Mar 2006 [EMAIL PROTECTED] wrote:
>Hi all
>
>it seems that I am running into a problem with the way sqlite accesses the
>disk when inserting rows of data in databases that are large in number of
>records but not necessarily big on disk (I am talking millions of records
>in files that are in
- Original Message
From: [EMAIL PROTECTED]
To: sqlite-users@sqlite.org
> SQLite inserts records in primary key order. (That is not
> strictly true - but it is close enough to being true for the
> purposes of what follows.) So when you insert records that
> are already in fname order,
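The point about primary-key order suggests an easy client-side fix: sort the rows by their key before inserting, so each insert appends to the rightmost b-tree page instead of seeking all over the file. A minimal sketch; the `files`/`fname` schema is an assumption based on the quoted text, and the data is random toy data:

```python
import random
import sqlite3

# Rows arrive in arbitrary order; keys here are just random integers.
rows = [(random.randrange(10**9), "payload-%d" % i) for i in range(1000)]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE files(fname INTEGER PRIMARY KEY, info TEXT)")
with con:  # one transaction for the whole batch
    # Sorting by the primary key turns random b-tree inserts into appends.
    con.executemany("INSERT OR IGNORE INTO files(fname, info) VALUES (?, ?)",
                    sorted(rows))
n = con.execute("SELECT count(*) FROM files").fetchone()[0]
print(n <= 1000)  # → True (OR IGNORE drops any duplicate keys)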
<[EMAIL PROTECTED]> wrote:
> Hi all
>
> it seems that I am running into a problem with the way sqlite accesses the disk
> when inserting rows of data in databases that are large in number of records
> but not necessarily big on disk (I am talking millions of records in files that
> are in the order
Hi all
it seems that I am running into a problem with the way sqlite accesses the disk
when inserting rows of data in databases that are large in number of records
but not necessarily big on disk (I am talking millions of records in files that
are in the order of a few hundred MBytes).
I reduced t
> well, it could be true, but not in the queries i have posted. i "group
> by" column "a" and there is an index on column "a", so sqlite does not
> have to do anything to compute the key. it does not even have to go back to
Do not confuse the index key with the aggregator key. The two may be
the s
On Fri, 2005-04-08 at 11:53 -0700, Maksim Yevmenkin wrote:
> CREATE INDEX data_by_a ON data (a);
>
> > time sqlite3 db 'select n2 from data where a <= 18234721' > /dev/null
> 25.95u 0.71s 0:27.02 98.6%
>
If you make the index look like this:
CREATE INDEX data_by_a ON data(a, n2);
Then SQLite
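The suggestion above is a covering index: with n2 stored in the index, SQLite can answer the query from index pages alone and never seek into the table. A small sketch (table contents invented) showing the query plan picking it up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE data(a INTEGER, n2 FLOAT);
CREATE INDEX data_by_a ON data(a, n2);   -- index now covers the query
""")
con.executemany("INSERT INTO data VALUES (?, ?)",
                [(i, i * 0.5) for i in range(1000)])
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT n2 FROM data WHERE a <= 100").fetchall()
print(plan)  # the plan row should mention a COVERING INDEX
```

The exact plan text varies between SQLite versions, but on any reasonably recent build it reports the search as using a covering index, which is the whole point of adding n2 to the key.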
On Tue, 12 Apr 2005, Maksim Yevmenkin wrote:
>Dear SQLite users,
>
>consider this
>
> [snip]
>
>it only took 4+ seconds to read, parse, perform hash table lookup and
>sum the data. note that for 1417 unique keys it had to do a hash lookup
>and a hash insert.
>
>so, just with a plain ascii file i get fou
Maksim Yevmenkin wrote:
>
>>>so, just with a plain ascii file i get four times the speed i get with
>>>sqlite. note that my c program will scale linearly with the size of
>>>dataset (just like i see with sqlite).
>>>
>>>
>> With anything related to computers, there are always tradeoffs - mos
John.
> >i think i know what is going on here. the problem is that every time
> >i do an indexed scan sqlite has to
> >
> >1) fetch index pages
> >
> >2) fetch data pages that match "where" condition
> >
> >because both index and data are in the same file sqlite has to perform
> >an insane amount of
> > SELECT count(*) FROM (SELECT a,n1 FROM data WHERE a <= 18234721 GROUP BY a);
> >
> > > time sqlite3 db < test.sql
> > 30
> > 1024
> > 1417
> > 13.14u 1.06s 0:14.40 98.6%
>
> Have you tried doing the query like this:
>
> SELECT count(*) FROM (SELECT a,n1 FROM data WHERE a-18234721<=0 GRO
Maksim Yevmenkin wrote:
i think i know what is going on here. the problem is that every time
i do an indexed scan sqlite has to
1) fetch index pages
2) fetch data pages that match "where" condition
because both index and data are in the same file sqlite has to perform
an insane amount of seek() calls
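The two-step pattern described above (probe the index, then seek into the table for each matching row) happens exactly when the selected column is not part of the index. A sketch with invented data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE data(a INTEGER, n1 FLOAT);
CREATE INDEX data_by_a ON data(a);   -- index on a only; n1 lives in the table
""")
con.executemany("INSERT INTO data VALUES (?, ?)",
                [(i, float(i)) for i in range(1000)])
# Each index hit still costs a second lookup into the table to fetch n1,
# which on a large on-disk database means extra seeks per matching row.
rows = con.execute("SELECT n1 FROM data WHERE a <= 10").fetchall()
print(len(rows))  # → 11 (rows with a = 0 .. 10)
```

Selecting only indexed columns, or widening the index to cover n1 as suggested elsewhere in the thread, removes the second lookup.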
On Tue, 2005-04-12 at 16:17 -0700, Maksim Yevmenkin wrote:
> SELECT count(*) FROM (SELECT a,n1 FROM data WHERE a <= 18234721 GROUP BY a);
>
> > time sqlite3 db < test.sql
> 30
> 1024
> 1417
> 13.14u 1.06s 0:14.40 98.6%
>
Have you tried doing the query like this:
SELECT count(*) FROM (SEL
Thomas,
> > with sum(n1) added the query runs twice as slow. as i was told it's because
> > sqlite has to fetch the data row. fine, but why is it so slow?! and it
>
>Because for each row it has to compute the aggregate key, find the
> aggregator for that key and increment the sum for that aggregate key
> with sum(n1) added the query runs twice as slow. as i was told it's because
> sqlite has to fetch the data row. fine, but why is it so slow?! and it
Because for each row it has to compute the aggregate key, find the
aggregator for that key and increment the sum for that aggregate key.
That's a lot mo
Dear SQLite users,
consider this
schema
-
sqlite> .schema data
CREATE TABLE data(
a INTEGER,
b INTEGER,
c CHAR,
d INTEGER,
e INTEGER,
n1 FLOAT,
n2 FLOAT
);
CREATE INDEX data_by_a on data (a);
data set
-
sqlite> select count(*) from data where a <= 18234721;
92
Gé,
thanks for the suggestion. unfortunately it did not make any
difference :( below is the results. as you can see it takes 7+ seconds
to "group by" 333,392 records and i'm grouping by a column on which i
have an index. again, i'm not a database guy, but i think that is slow.
perhaps someone can comm
Maksim,
Some things you could try:
1) increase cache memory
You may be causing a lot of cache misses if the size of the query result
is very large compared to the size of the cache. Index-based searches
can cause multiple reloads of the same page because of a lack of
locality in the cache. An in
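The first suggestion, increasing cache memory, is a one-line pragma per connection. A sketch; the value below is an arbitrary example, not a recommendation:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Positive values set a page count; negative values (supported in later
# SQLite versions) set a size in KiB, so -65536 asks for ~64 MiB of cache.
con.execute("PRAGMA cache_size = -65536")
print(con.execute("PRAGMA cache_size").fetchone()[0])  # → -65536
```

The pragma applies only to the current connection and resets when it closes, so it has to be issued once per connection (or set as a default via `PRAGMA default_cache_size` on old versions).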
Robert,
> [snip]
>
> > i said i print these rows to /dev/null too in my perl code. plus the
> > perl code does some other things such as joining these rows with other
> > hashes and summing the numbers.
>
> That's fine. I was merely trying to account for the 50% speed difference
> between the t
> -Original Message-
> From: Maksim Yevmenkin [mailto:[EMAIL PROTECTED]
> Sent: Monday, April 11, 2005 12:27 PM
> To: Robert Simpson
> Cc: sqlite-users@sqlite.org
> Subject: Re: [sqlite] sqlite performance problem
>
[snip]
> i said i print these rows to /dev/
Robert,
> > time sqlite3 db 'select n1 from data where a <= 18234721' > /dev/null
> > 26.15u 0.59s 0:27.00 99.0%
> >
> > time sqlite3 db 'select n1 from data where a <= 18234721' > /dev/null
> > 26.04u 0.61s 0:26.91 99.0%
> >
> > time sqlite3 db 'select e from data where a <= 18234721' > /dev/null
Let's recap ...
> time sqlite3 db 'select n1 from data where a <= 18234721' > /dev/null
> 26.15u 0.59s 0:27.00 99.0%
>
> time sqlite3 db 'select n1 from data where a <= 18234721' > /dev/null
> 26.04u 0.61s 0:26.91 99.0%
>
> time sqlite3 db 'select e from data where a <= 18234721' > /dev/null
> 12.
Robert,
> > i guess i can believe this. however it's pretty disappointing to get
> > only a 50% improvement on a dataset that is 30 times smaller :(
> >
> > but how do you explain this?
> >
> > sqlite> .schema data
> > CREATE TABLE data
> > (
> >a INTEGER,
> >b INTEGER,
> >c CHAR,
> >d
> -Original Message-
> From: Maksim Yevmenkin [mailto:[EMAIL PROTECTED]
> Sent: Monday, April 11, 2005 9:59 AM
> To: Christian Smith
> Cc: sqlite-users@sqlite.org
> Subject: Re: [sqlite] sqlite performance problem
>
> i guess, i can believe this. however its pre
Christian,
thanks for the reply.
> >i'm having a strange performance problem with sqlite-3.2.0. consider the
> >following table
> >
> > [snip]
> >
> >now the problem:
> >
>1) if i do a select with an index it takes 27 sec. to get 92 rows
> >
> >> time sqlite3 db 'select n2 from data where a <=
On Fri, 8 Apr 2005, Maksim Yevmenkin wrote:
>Hello,
>
>i'm having a strange performance problem with sqlite-3.2.0. consider the
>following table
>
> [snip]
>
>now the problem:
>
>1) if i do a select with an index it takes 27 sec. to get 92 rows
>
>> time sqlite3 db 'select n2 from data where a <=
Hello,
i'm having a strange performance problem with sqlite-3.2.0. consider the
following table
sqlite> .schema data
CREATE TABLE data
(
a INTEGER,
b INTEGER,
c CHAR,
d INTEGER,
e INTEGER,
n1 FLOAT,
n2 FLOAT
);
CREATE INDEX data_by_a ON data (a);
sq
On Sun, 2005-02-27 at 15:31 -0500, Clay Dowling wrote:
> >
> >Is anyone using another compiler to build sqlite on Windows? Maybe Borland?
> >It seems there is a lot of room for a faster sqlite.dll by using compilers
> >other than VC++ 6. We can run some tests to see which is faster ...
> >
> I thought I re
Ionut Filip wrote:
Hi,
So the VC++ 7.1 dll is much faster than the VC++ 6, up to 20% in some
operations, and the "MS mixed dll load bug" is not a problem, assuming
sqlite doesn't mix native code with managed .net code :).
Is anyone using another compiler to build sqlite on Windows? Maybe Borland?
I se
; 'Keith Herold'
Subject: RE: [sqlite] sqlite performance variationin linux and windows
> -Original Message-
> From: Keith Herold [mailto:[EMAIL PROTECTED]
> Sent: Friday, February 25, 2005 3:57 PM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] sqlite performance
build a version up to 10-15% faster
I saw the same kind of speedup (and sometimes up to 25%) depending on
whether assertions are enabled. I believe they are enabled by default.
Adding -DNDEBUG turns them off.
Roger
> -Original Message-
> From: Keith Herold [mailto:[EMAIL PROTECTED]
> Sent: Friday, February 25, 2005 3:57 PM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] sqlite performance variationin linux and windows
>
> http://www.adtmag.com/article.
http://www.adtmag.com/article.asp?id=7421
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dv_vstechart/html/vcconMixedDLLLoadingProblem.asp
It's been a while since I looked at this, but I distinctly remember
reading from one of the product managers at MS that this wouldn't be
fixed
On Fri, Feb 25, 2005 at 02:25:06PM -0800, Keith Herold wrote:
> Well, MS has said that VC++ .NET has a serious multithreading issue;
> a lot of companies are staying on VC6 for that reason.
Keith, I wasn't aware of that. Could you point us to more info on the
problem, please?
--
Andrew Piskorsk
Well, MS has said that VC++ .NET has a serious multithreading issue; a
lot of companies are staying on VC6 for that reason.
--Keith
On Fri, 25 Feb 2005 23:47:32 +0200, Ionut Filip <[EMAIL PROTECTED]> wrote:
>
> Hi Appadurai,
>
> From my experience the stock sqlite.dll is not very fast. I guess
Hi Appadurai,
From my experience the stock sqlite.dll is not very fast. I guess it's
built with VC++ 6. Using VC++ 7.1 I was able to build a version up to
10-15% faster, not a bad optimization for 0 lines of code :). It was a
little bigger, about +25%, but for desktop use this is not a problem. I o
On 24 Feb 2005 at 19:08, Neelamegam Appadurai wrote:
> Can anyone please help me with how to increase the performance in a windows
> environment
Have a look at this Wiki
http://www.sqlite.org/cvstrac/wiki?p=PerformanceTuningWindows
Some basic things to be aware of in there with regards to Speed and
Win
Hi,
Neelamegam Appadurai wrote:
Can anyone please help me with how to increase the performance in a windows
environment
One thing you can do is to increase the page size from 1024 to 4096.
That seems to match either Windows' swapfile-pagesize or the default
page size of NTFS (there is disagreement ove
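Assuming a reasonably recent SQLite, the page-size change above is a single pragma, with the caveat that it must be issued before the database file gets its first page written:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
# page_size only takes effect if set before the first table is created
# (or by running VACUUM afterwards); once pages exist, the size is fixed.
con.execute("PRAGMA page_size = 4096")
con.execute("CREATE TABLE t(x)")
con.commit()
print(con.execute("PRAGMA page_size").fetchone()[0])  # → 4096
```

For an existing 1024-byte-page database, the sequence is `PRAGMA page_size = 4096;` followed by `VACUUM;` to rebuild the file at the new size.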
hi all,
I was using files in my application to store and read data; now I want
my application to use a sqlite DB.
I compiled and installed sqlite, version 3.0.8, on my linux system
as well as on windows, replacing the file operations.
The performance of sqlite against the filesystem on the linux machine w
=== On 2004-06-18, ben.carlyle wrote ===
..
>
>I'd like to take this soapbox opportunity to again thank drh and his
>associates for the wonderful work they put into sqlite and their genine
>personal commitment to this project. Sqlite is a great product, and a
>well-targeted one.
>
>Benjamin
>
J
<[EMAIL PROTECTED]>
18/06/2004 12:09 AM
To: [EMAIL PROTECTED]
cc:
Subject:Re: [sqlite] SQLite performance with mid-size databases
Well this does not sound good at all. I would think
that SQLite would at least do a better job at queries.
The issues with performance and scal
Ulrik Petersen wrote:
this is also just a stab in the dark, and I subscribe to the digest
version of the mailing list, so I may not have the latest.
Perhaps you are doing something like
SELECT A.x, A.y
FROM A
WHERE A.rowid = xxx
OR A.rowid = yyy
OR A.rowid = zzz
OR A.rowid = ...
etc.etc. with
Richard,
this is also just a stab in the dark, and I subscribe to the digest
version of the mailing list, so I may not have the latest.
Perhaps you are doing something like
SELECT A.x, A.y
FROM A
WHERE A.rowid = xxx
OR A.rowid = yyy
OR A.rowid = zzz
OR A.rowid = ...
etc.etc. with many OR-condi
See below:
--- Jacob Engstrand <[EMAIL PROTECTED]> wrote:
> To disable the
> I1A index,
> rewrite the WHERE clause like this:
>
> SELECT * FROM t1 WHERE a+0=5 AND b='xyzzy';
>
> To disable the I1B index you could write
>
> SELECT * FROM t1 WHERE a=5 AND b LIKE 'xyzzy';
>
Why
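The index-disabling rewrites quoted above are easy to sanity-check: `a+0=5` and `b LIKE 'xyzzy'` only change which index the planner is allowed to use, not the result set, so all three forms must return the same rows. A small check on toy data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1(a INTEGER, b TEXT);
CREATE INDEX i1a ON t1(a);
CREATE INDEX i1b ON t1(b);
INSERT INTO t1 VALUES (5, 'xyzzy'), (5, 'plugh'), (6, 'xyzzy');
""")
plain  = con.execute("SELECT * FROM t1 WHERE a=5 AND b='xyzzy'").fetchall()
no_i1a = con.execute("SELECT * FROM t1 WHERE a+0=5 AND b='xyzzy'").fetchall()
# caveat: LIKE is case-insensitive by default, so the LIKE rewrite is only
# equivalent when the data holds no mixed-case variants of the literal
no_i1b = con.execute("SELECT * FROM t1 WHERE a=5 AND b LIKE 'xyzzy'").fetchall()
print(plain == no_i1a == no_i1b)  # → True
```

The `+0` trick works because an expression on an indexed column cannot use that column's index; the LIKE caveat in the comment is the usual reason to prefer `+0` on text columns.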
Well this does not sound good at all. I would think
that SQLite would at least do a better job at queries.
The issues with performance and scalability are of
great concern. I was planning on creating some new apps
that use SQLite, but now I'm wondering if this little
database is capable of handling ov
f data is visibly faster...indicating that
>the disk cache is very favorably impacting the speed of the queries.
>
>Richard
>
>-Original Message-
>From: Tim Anderson [mailto:[EMAIL PROTECTED]
>Sent: Wednesday, June 16, 2004 10:33 AM
>To: Richard Kuo
>Subje
Richard Kuo wrote:
I suspect some unnecessary disk access has to be the problem...despite
the small amount of new guide information being queried out, disk bytes
read are several times higher than with MS Access, and scrolling back over
previously accessed areas of data is visibly faster...indicating
he is very favorably impacting the speed of the queries.
Richard
-Original Message-
From: Tim Anderson [mailto:[EMAIL PROTECTED]
Sent: Wednesday, June 16, 2004 10:33 AM
To: Richard Kuo
Subject: RE: [sqlite] SQLite performance with mid-size databases
> -Original Message-
> Fr
On 2003-01-27, at 00.00, D. Richard Hipp wrote:
> Enterprise scale database engines do a more sophisticated job
> of selecting indices (when there is a choice like this) by
> collecting lots of statistics on the indices and using complex
> algorithms to make the choice. SQLite takes the easy way
On 2004-06-16, at 06.04, Richard Kuo wrote:
Empirically speaking, we display our data in a scrolling 2
dimensional grid format. With MS access, this grid responds
instantaneously when moving through the grid. With SQLite, there is
very noticeable stalling and lag and the disk i/o is higher tha
> -Original Message-
> From: Richard Kuo [mailto:[EMAIL PROTECTED]
> Sent: 16 June 2004 05:04
> To: [EMAIL PROTECTED]
> Subject: [sqlite] SQLite performance with mid-size databases
>
> Hi. We are using SQLite to store and retrieve data rows where
> each ro
This could be related to the grid handling code.
In another post I already talked about the "SELECT COUNT(*) ..." performance problem,
which can only be solved by tuning the code in the grid control itself.
If the code is generic, there is a strong possibility it isn't optimized for this, and
assume
On Tue, 15 Jun 2004 23:04:04 -0500, you wrote:
>Hi. We are using SQLite to store and retrieve data rows where each
>row is roughly 2K total in size and in a table of 15 columns. The total
>size of the database ranges from 100-300 MB.
>
>The problem we are seeing is that query and insert
> Empirically speaking, we display our data in a
> scrolling 2
> dimensional grid format. With MS access, this grid
> responds
> instantaneously when moving through the grid. With
> SQLite, there is
> very noticeable stalling and lag and the disk i/o is
> higher than MS
> Access by roughly
[EMAIL PROTECTED]
Sent: Wednesday, June 16, 2004 7:04 AM
To: [EMAIL PROTECTED]
Subject: [sqlite] SQLite performance with mid-size databases
Hi. We are using SQLite to store and retrieve data rows where each
row is roughly 2K total in size and in a table of 15 columns. The total
size of the datab
Hi. We are using SQLite to store and retrieve data rows where each
row is roughly 2K total in size and in a table of 15 columns. The total
size of the database ranges from 100-300 MB.
The problem we are seeing is that query and insert performance is
unusually bad and scales up linearly