Hey everyone.
I'm new to SQLite, so please have patience with me.
I have two tables and am doing a LEFT OUTER JOIN:
A
  IDI INTEGER PRIMARY KEY
  Parent INTEGER (indexed)
  Status INTEGER
B
  IDR INTEGER UNIQUE, FOREIGN KEY referencing IndexME.IDI
  Points TEXT (on average a ~120-character string)
(this table is
Mira Suk mira@centrum.cz wrote:
query written here is a lot simplified (for example Points column is
filtered using custom function) however main culprit seems
to be LEFT OUTER JOIN as accessing that same column in query which only has B
table in it is lightning fast.
Ok then, show the result of prepending EXPLAIN QUERY PLAN to your statement.
--
Igor Tandetnik
First of all, thanks for bearing with me :)
Functions:
TZB_MATCHRECURSIVE(int, int)
 - disabled for this test (always returns 1); applies the filter recursively
TZB_ISCHILD(int)
 - bitmask check
Mira Suk mira@centrum.cz wrote:
test 1.
query
SELECT [IndexME].[IDI], [IndexME].[Status], [IndexME].[Icon], [IndexME].[Text]
FROM [IndexME] LEFT OUTER JOIN [ItemsME]
ON [ItemsME].[IDR] = [IndexME].[IDI] WHERE
[IndexME].[Parent] = ?1 AND
The difference is that #2 mentions only one field from ItemsME, namely IDR. The
value of that field comes from the index, the table itself doesn't need to be
read at all. It's not even clear why #2 bothers to join with ItemsME at all -
it's a no-op.
#1 uses more fields from ItemsME, so it
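Igor's point can be checked directly with EXPLAIN QUERY PLAN. Below is a minimal Python sketch, with the schema reconstructed from the post (index name `idx_parent` is invented; exact plan text varies by SQLite version):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE IndexME (IDI INTEGER PRIMARY KEY, Parent INTEGER, Status INTEGER);
    CREATE TABLE ItemsME (IDR INTEGER UNIQUE REFERENCES IndexME(IDI), Points TEXT);
    CREATE INDEX idx_parent ON IndexME(Parent);
""")

# No column of ItemsME other than IDR is touched, so its value can come
# straight from the UNIQUE index; the ItemsME table rows are never read,
# and recent SQLite versions may drop the no-op LEFT JOIN altogether.
plan = con.execute("""
    EXPLAIN QUERY PLAN
    SELECT IndexME.IDI, IndexME.Status
    FROM IndexME LEFT OUTER JOIN ItemsME ON ItemsME.IDR = IndexME.IDI
    WHERE IndexME.Parent = ?
""", (1,)).fetchall()
for row in plan:
    print(row[-1])   # the human-readable plan step
```

The last column of each EXPLAIN QUERY PLAN row is the plan description; look for SEARCH versus SCAN, and for COVERING INDEX on ItemsME.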
Hi,
I'm using Motorola MC55 device, with 2GB external memory card.
For the SQLite DB I have used the following pragma values
PRAGMA cache_size = 16000
PRAGMA temp_store = 2
PRAGMA synchronous = OFF
PRAGMA locking_mode = EXCLUSIVE
for some performance improvement
For insertion of records I
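For reference, the pragmas above can be applied from code before the first write. This is a hedged Python sketch (table name and row count are invented) that also wraps the whole batch in one transaction, typically the single biggest win for insert speed:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# The pragma values from the post; numeric temp_store = 2 means MEMORY,
# and synchronous = OFF trades durability for write speed.
for pragma in ("cache_size = 16000",
               "temp_store = 2",
               "synchronous = OFF",
               "locking_mode = EXCLUSIVE"):
    con.execute("PRAGMA " + pragma)

con.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")

# One transaction around the whole batch: one commit instead of 10000.
with con:
    con.executemany("INSERT INTO records (payload) VALUES (?)",
                    (("row %d" % i,) for i in range(10000)))

count = con.execute("SELECT count(*) FROM records").fetchone()[0]
print(count)
```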
On Wed, Feb 16, 2011 at 6:13 AM, sasikuma...@tcs.com wrote:
Hello!
$ time sqlite3 test32k.db 'select count(*) from role_exist'
1250
real    0m58.908s
user    0m0.056s
sys     0m0.864s
$ sqlite3 test32k.db
SQLite version 3.6.23
sqlite> .schema role_exist
CREATE TABLE role_exist (
  id INTEGER PRIMARY KEY,
  uid BLOB NOT NULL DEFAULT (randomblob(16))
How does
$ time sqlite3 test32k.db 'select count(1) from role_exist'
perform?
On Thu, Apr 1, 2010 at 5:52 AM, Alexey Pechnikov pechni...@mobigroup.ruwrote:
So 58s for a count of all records! Couldn't the count(*) of all records use
a counter from the primary-key b-tree?
What does this mean? I believe there are no counters of any kind in the
b-tree. If you mean the counter from an auto-increment key, then what about
gaps in the middle?
Pavel
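Pavel's point about gaps can be illustrated: max(rowid) is cheap but stops matching count(*) as soon as rows are deleted, which is why SQLite cannot answer count(*) from the key alone and must walk the table. A small sketch (table name borrowed from the post, data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE role_exist (id INTEGER PRIMARY KEY, uid BLOB)")
con.executemany("INSERT INTO role_exist (id) VALUES (?)",
                [(i,) for i in range(1, 101)])
con.execute("DELETE FROM role_exist WHERE id BETWEEN 40 AND 59")  # make a gap

full_count = con.execute("SELECT count(*) FROM role_exist").fetchone()[0]
max_rowid = con.execute("SELECT max(rowid) FROM role_exist").fetchone()[0]
print(full_count, max_rowid)  # 80 100 -- the two disagree after deletes
```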
Hello!
On Thursday 01 April 2010 18:04:10 Adam DeVita wrote:
How does
$ time sqlite3 test32k.db select count(1) from role_exist
perform?
Equal to count(*).
Best regards, Alexey Pechnikov.
http://pechnikov.tel/
First, thanks, your suggestion worked.
To my surprise, it was enough to add limit 1 to the original query.
So:
select * from blocks, locations where locations.locid = blocks.locid AND ? >=
blocks.startIpNum AND ? <= blocks.endIpNum limit 1
takes about 1.398e-05 seconds
and
select * from
row.
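A sketch of why LIMIT 1 helps here: without an index on the range columns SQLite scans the blocks table, and LIMIT 1 lets it stop at the first matching row instead of scanning to the end. Schema and data below are invented to match the post:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE locations (locid INTEGER PRIMARY KEY, country TEXT);
    CREATE TABLE blocks (startIpNum INTEGER, endIpNum INTEGER, locid INTEGER);
""")
con.executemany("INSERT INTO locations VALUES (?, 'XX')",
                [(i,) for i in range(1000)])
# Non-overlapping ranges: block i covers [i*10, i*10 + 9].
con.executemany("INSERT INTO blocks VALUES (?, ?, ?)",
                [(i * 10, i * 10 + 9, i) for i in range(1000)])

ip = 4321
row = con.execute("""
    SELECT * FROM blocks, locations
    WHERE locations.locid = blocks.locid
      AND ? >= blocks.startIpNum AND ? <= blocks.endIpNum
    LIMIT 1""", (ip, ip)).fetchone()
print(row)  # scan stops at the first (and only) matching block
```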
-Original Message-
From: Dani Va [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 31, 2007 8:30 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Performance problem for a simple select with range
Igor Tandetnik wrote:
Try searching for a value that doesn't fall into any block - you'll
likely find that the query takes a noticeable time to produce zero
records. Pick a large value that's greater than all startIpNum's.
Yes, you are right. That's why I'm going with the original
I think I have a performance problem for a simple select with range.
My Tables:
CREATE TABLE locations(
  locid INTEGER PRIMARY KEY,
  country TEXT,
  region TEXT,
  city TEXT,
  postalCode TEXT,
[Default] On Mon, 29 Oct 2007 15:25:18 +0200, Dani Valevski
[EMAIL PROTECTED] wrote:
Dani Valevski [EMAIL PROTECTED] wrote:
I think I have a performance problem for a simple select with range.
My Tables:
CREATE TABLE locations(locid INTEGER PRIMARY KEY, ...);
CREATE TABLE blocks(
startIpNum INTEGER,
endIpNum INTEGER,
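Not from the thread, but the usual rewrite for this kind of range table (non-overlapping blocks): index startIpNum alone, seek to the greatest startIpNum not above the address, then verify endIpNum afterwards. That is one index probe per lookup, even when the address falls inside no block. A hedged sketch with invented data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE blocks (startIpNum INTEGER, endIpNum INTEGER, locid INTEGER)")
con.execute("CREATE INDEX idx_start ON blocks(startIpNum)")
con.executemany("INSERT INTO blocks VALUES (?, ?, ?)",
                [(i * 10, i * 10 + 9, i) for i in range(1000)])

def lookup(con, ip):
    # Walk the startIpNum index backwards to the candidate block, then
    # check the upper bound in Python; at most one index probe per call.
    row = con.execute("""
        SELECT locid, endIpNum FROM blocks
        WHERE startIpNum <= ? ORDER BY startIpNum DESC LIMIT 1""",
        (ip,)).fetchone()
    if row and ip <= row[1]:
        return row[0]
    return None

print(lookup(con, 4321), lookup(con, 10**9))  # hit and guaranteed miss
```

The miss case is fast too, which addresses the slow zero-result search mentioned earlier in the thread.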
Dear experts:
I'm having a performance problem I can't understand. I am running a
select count(*) query joining a table on itself, and the query runs
for five minutes using Sqlite3.exe before I get bored and kill it. This
is on a dual-core box with 4GB of memory, running Windows XP Pro. The
[EMAIL PROTECTED] wrote:
Stephen Toney [EMAIL PROTECTED] wrote:
Here's the problem query with the plan:
select count(*) from keyword a, keyword b where a.key=b.key and
a.value='music' and b.value='history';
A faster approach would be:
SELECT (SELECT count(*) FROM keyword
produced is better. Don't assume this to be
good advice without trying it. :)
-Tom
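Igor's suggested query is truncated above, so here is one plausible shape of the same idea (an assumption, not his exact statement): count the keys carrying both values with two index range scans joined by INTERSECT, instead of a pairwise self-join. Note this counts distinct keys, which differs from the self-join count when a key/value pair repeats:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE keyword (key INTEGER, value TEXT)")
con.execute("CREATE INDEX idx_value ON keyword(value, key)")
rows = [(1, 'music'), (1, 'history'), (2, 'music'), (3, 'history'),
        (4, 'music'), (4, 'history')]
con.executemany("INSERT INTO keyword VALUES (?, ?)", rows)

# Each side of the INTERSECT is a single index range scan on (value, key).
n = con.execute("""
    SELECT count(*) FROM (
        SELECT key FROM keyword WHERE value = 'music'
        INTERSECT
        SELECT key FROM keyword WHERE value = 'history')""").fetchone()[0]
print(n)  # keys 1 and 4 carry both values
```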
-Original Message-
From: Stephen Toney [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 01, 2007 7:00 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] Performance problem
Dear experts
On Thu, 2007-03-01 at 12:46 +, [EMAIL PROTECTED] wrote:
Or maybe better yet: Have you looked into using FTS2 for whatever
it is you are trying to do? Full-text search is hard to get right
and you appear to be trying to create your own. Why not use an FTS
subsystem that is already
Regarding:
Can only one index be used per query?
Yes, I believe that *is* the defined behaviour of sqlite (though it does
support compound indices). Larger DBMSs often have very involved code
to determine query plans.
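The one-index-per-table rule and the compound-index workaround show up directly in the query plan. A sketch (index and table names invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE keyword (key INTEGER, value TEXT)")
con.execute("CREATE INDEX idx_key ON keyword(key)")
con.execute("CREATE INDEX idx_value ON keyword(value)")

# With two single-column indexes, only one is used for this table.
plan1 = con.execute("""EXPLAIN QUERY PLAN
    SELECT count(*) FROM keyword WHERE value = 'music' AND key = 7""").fetchall()

# A compound index satisfies both equality terms in a single probe.
con.execute("CREATE INDEX idx_value_key ON keyword(value, key)")
plan2 = con.execute("""EXPLAIN QUERY PLAN
    SELECT count(*) FROM keyword WHERE value = 'music' AND key = 7""").fetchall()

print(plan1[0][-1])
print(plan2[0][-1])
```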
Stephen Toney [EMAIL PROTECTED] wrote:
Thanks, Igor, Richard, and Tom,
Why doesn't SQLite use the index on key? I can see from the plan that it
doesn't, but why not? Can only one index be used per query?
This seems strange. I have used SQL Server and Visual Foxpro for this
same problem,
Stephen Toney [EMAIL PROTECTED] wrote:
4. We do not preserve case in the index, so it can ignore incorrect
capitalization in the search terms. Maybe FTS does this too?
That's a function of your stemmer. The default stemmers in FTS2
both ignore capitalization.
5. For historical reasons,
Richard,
Thanks for the additional info. I'll look into the multi-column index
idea. Sounds as if it might be the solution.
Stephen
Problem summary:
---
Simple queries sometimes take ~400 ms
Analysis:
---
- A php script runs the same SQL query several times in different places
(in different transactions).
Some of the queries take around 400 ms while others (identical
queries) take only 4-20 ms.
You are almost certainly encountering disk caching effects.
Are you wrapping the transactions in between Begin/End Transactions?
BEGIN TRANSACTION;
INSERT INTO "table" (foo) VALUES ('bar');
INSERT INTO "table" (foo) VALUES ('par');
INSERT INTO "table" (foo) VALUES ('tar');
INSERT INTO "table" (foo) VALUES ('far');
...
INSERT INTO "table" (foo) VALUES ('car');
COMMIT;
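The same comparison from Python's sqlite3 module, as a sketch (names invented): the unbatched loop commits one transaction per row, which on a real disk costs a sync per row, while the batched version pays for a single commit. On an in-memory database the gap is small; on disk it is dramatic:

```python
import sqlite3
import time

def insert_rows(con, n, batched):
    con.execute("CREATE TABLE IF NOT EXISTS t (foo TEXT)")
    start = time.perf_counter()
    if batched:
        with con:  # one BEGIN ... COMMIT around all n inserts
            for i in range(n):
                con.execute("INSERT INTO t (foo) VALUES (?)", (str(i),))
    else:
        for i in range(n):
            con.execute("INSERT INTO t (foo) VALUES (?)", (str(i),))
            con.commit()  # one transaction (and on disk, one sync) per row
    return time.perf_counter() - start

con = sqlite3.connect(":memory:")
t_batched = insert_rows(con, 1000, batched=True)
t_single = insert_rows(con, 1000, batched=False)
count = con.execute("SELECT count(*) FROM t").fetchone()[0]
print(count, t_batched, t_single)
```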
On Mon, 21 Nov 2005, Shane Baker wrote:
I'm sure I must be doing something wrong. This is my first attempt at
working with SQLite.
We'll see...
I have a simple table, with 7 columns. There are 6 integers and a BLOB,
with the primary key being on an integer. When I try to run inserts (one
No, as I mentioned in my original message, I am not wrapping them. I
don't want to test an unrealistic scenario for my application. In my
application, there are multiple sources that will be inserting into the
database and pooling the information for a bulk insert won't work.
I understand that
Thank you very much for the feedback. I understand your point, hardware
takes a deterministic amount of time.
I have been basing my assumptions on these sources:
http://www.sqlite.org/cvstrac/wiki?p=PerformanceConsiderations (See
Transactions and performance)
Hi,
On Mon, 21 Nov 2005 10:56:41 -0500 (EST)
Shane Baker [EMAIL PROTECTED] wrote:
I just need to figure out why my performance is about 30x slower than what
others are reporting when using the library in similar ways.
Are you using sqlite on windows or MacOS X?
As I tested, sqlite performs
Thank you very much. I am happy to hear that the performance I am seeing
is in line with what others have observed. I am running this on Windows
XP.
On Tue, 12 Apr 2005, Maksim Yevmenkin wrote:
Dear SQLite users,
consider this
[snip]
it only took 4+ seconds to read, parse, perform hash table lookup and
sum the data. note that for unique 1417 keys it had to do hash lookup
and hash insert.
so, just with plain ascii file i get four times
On Fri, 2005-04-08 at 11:53 -0700, Maksim Yevmenkin wrote:
CREATE INDEX data_by_a ON data (a);
time sqlite3 db 'select n2 from data where a = 18234721' > /dev/null
25.95u 0.71s 0:27.02 98.6%
If you make the index look like this:
CREATE INDEX data_by_a ON data(a, n2);
Then SQLite only
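That is a covering index: with n2 stored in the index alongside a, the query is answered from index pages only and the seeks back to the table rows disappear. A sketch of how this shows up in the plan (other columns of the post's table are omitted):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (a INTEGER, b INTEGER, n1 FLOAT, n2 FLOAT)")
con.execute("CREATE INDEX data_by_a ON data(a, n2)")

# Both a and n2 live in the index, so no table-row fetch is needed.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT n2 FROM data WHERE a = 18234721").fetchall()
print(plan[0][-1])  # should mention a COVERING INDEX on data_by_a
```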
well, it could be true, but not in the queries i have posted. i group
by column a and there is an index on column a, so sqlite does not
have to do anything to compute the key. it does not even have to go back to
Do not confuse the index key with the aggregator key. The two may be
the same in
Maksim,
Some things you could try:
1) increase cache memory
You may be causing a lot of cache misses if the size of the query result
is very large compared to the size of the cache. Index-based searches
can cause multiple reloads of the same page because of a lack of
locality in the cache. An
Gé,
thanks for the suggestion. unfortunately it did not make any
difference :( below is the results. as you can see it takes 7+ seconds
to group by 333,392 records and i'm grouping by column on which i
have index. again, i'm not a database guy, but i think that is slow.
perhaps someone can
with sum(n1) added the query runs twice as slow. as i was told it's because
sqlite has to fetch the data row. fine, but why is it so slow?! and it
Because for each row it has to compute the aggregate key, find the
aggregator for that key and increment the sum for that aggregate key.
That's a lot
On Tue, 2005-04-12 at 16:17 -0700, Maksim Yevmenkin wrote:
SELECT count(*) FROM (SELECT a,n1 FROM data WHERE a = 18234721 GROUP BY a);
time sqlite3 db < test.sql
30
1024
1417
13.14u 1.06s 0:14.40 98.6%
Have you tried doing the query like this:
SELECT count(*) FROM (SELECT a,n1
Maksim Yevmenkin wrote:
i think, i know what is going on here. the problem is that every time
i do an indexed scan sqlite has to
1) fetch index pages
2) fetch data pages that match where condition
because both index and data are in the same file sqlite has to perform
insane amount of seek() calls
Have you tried doing the query like this:
SELECT count(*) FROM (SELECT a,n1 FROM data WHERE a-18234721=0 GROUP
BY a);
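The point of the rewritten WHERE clause is that an expression on the column (a-18234721=0) cannot be matched against the index, forcing a sequential table scan; comparing it with the plain a=18234721 form tests whether the index-then-row-fetch seek pattern is what costs the time. A sketch of the two plans:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (a INTEGER, n1 FLOAT)")
con.execute("CREATE INDEX data_by_a ON data(a)")

# Plain equality on the indexed column: index search.
plan_indexed = con.execute(
    "EXPLAIN QUERY PLAN SELECT n1 FROM data WHERE a = 18234721").fetchall()
# Arithmetic on the column hides it from the optimizer: full scan.
plan_scan = con.execute(
    "EXPLAIN QUERY PLAN SELECT n1 FROM data WHERE a - 18234721 = 0").fetchall()

print(plan_indexed[0][-1])
print(plan_scan[0][-1])
```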
i just
Maksim Yevmenkin wrote:
so, just with plain ascii file i get four times the speed i get with
sqlite. note that my c program will scale linearly with the size of
dataset (just like i see with sqlite).
With anything related to computers, there are always tradeoffs - most
commonly power
Christian,
thanks for the reply.
i'm having strange performance problem with sqlite-3.2.0. consider the
following table
[snip]
now the problem:
1) if i do a select with an index it takes 27 sec. to get 92 rows
time sqlite3 db 'select n2 from data where a = 18234721' > /dev/null
-Original Message-
From: Maksim Yevmenkin [mailto:[EMAIL PROTECTED]
Sent: Monday, April 11, 2005 9:59 AM
To: Christian Smith
Cc: sqlite-users@sqlite.org
Subject: Re: [sqlite] sqlite performance problem
Robert,
i guess, i can believe this. however it's pretty disappointing to get
a 50% improvement on a 30-times-smaller dataset :(
but how do you explain this?
sqlite> .schema data
CREATE TABLE data
(
a INTEGER,
b INTEGER,
c CHAR,
d INTEGER,
e
Let's recap ...
time sqlite3 db 'select n1 from data where a = 18234721' > /dev/null
26.15u 0.59s 0:27.00 99.0%
time sqlite3 db 'select n1 from data where a = 18234721' > /dev/null
26.04u 0.61s 0:26.91 99.0%
time sqlite3 db 'select e from data where a = 18234721' > /dev/null
12.22u 0.41s
Robert,
[snip]
i said i print these rows to /dev/null too in my perl code. plus the
perl code does some other things such as joining these rows with other
hashes and summing the numbers.
That's fine. I was merely trying to account for the 50% speed difference
between the two
SQLite only uses a single index per table on any given query.
This is unlikely to change.
Would it be able to use a multi-column index on ipnode + author?
Hugh
Hi,
the following simple query is very slow:
SELECT title FROM t1 WHERE ipnode='VZ' ORDER BY author;
The database is about 250 MB in size and the table the query is run against
has 12 cols and 120,000 rows.
Every col has an index.
The above query returns about 80% of the records.
As soon as
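A two-column index matching the earlier suggestion lets SQLite both filter on ipnode and deliver the rows already ordered by author, avoiding a separate sort pass. Whether it beats a plain scan when 80% of the rows match is a separate question worth measuring. A sketch (index name invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (title TEXT, ipnode TEXT, author TEXT)")
con.execute("CREATE INDEX idx_node_author ON t1(ipnode, author)")

# Equality on the first index column, ORDER BY on the second: the index
# supplies the order, so no "USE TEMP B-TREE FOR ORDER BY" step appears.
plan = con.execute("""EXPLAIN QUERY PLAN
    SELECT title FROM t1 WHERE ipnode = 'VZ' ORDER BY author""").fetchall()
detail = " ".join(r[-1] for r in plan)
print(detail)
```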
At 1:33 PM +0100 3/20/04, Shi Elektronische Medien GmbH, Peter Spiske wrote:
On Thu, 2003-11-06 at 19:00, [EMAIL PROTECTED] wrote:
How would you handle the lack of ordering associated with hash tables?
SQLite can currently use indices for three main tests: equals, less than,
and greater than. While hash tables are good at finding equal-to in
constant time, it usually
Avner Levy wrote:
We have just finished testing the same scenario with MySQL and, amazingly,
they continued to insert 1500-3000 rows per second even when the
database had 60,000,000 records. I don't know how this magic is done...
Nor do I. If anybody can clue me in, I would appreciate it. I
from.
Just some thoughts.
Allan
Allan Edwards wrote:
I have YET to see a database, small to massively scalable that could handle
BLOBS worth anything. ... I prefer the simplicity talk given early. If
someone wants blobs, do it the old fashioned way!
Your concerns are understood, for BLOBs that are truly large. But what
Your suggestion was to reconstruct the index from original
table data on a rollback. For a large transaction that touches
most pages of the index, this can give (at best) a 2:1 speedup.
But the other side, the potential slowdown is extreme.
Yeah, there is that drawback. Other DBMSs avoid
[EMAIL PROTECTED] wrote:
DRH: Will the changes to indices allow us to define arbitrary collation
functions? If so, will those indices be used when a query is done that
could use the arbitrary collation function?
Likely so. But no promises yet.
--
D. Richard Hipp -- [EMAIL PROTECTED] --
chance of having someone look into the possibility of implementing something like
that?
regards
Greg