Jürgen Baier wrote:
> CREATE TABLE main ( ATT1 INT, ATT2 INT, PRIMARY KEY (ATT1,ATT2) );
> CREATE TABLE staging ( ATT1 INT, ATT2 INT );
>
> Then I execute
>
> DELETE FROM main WHERE EXISTS (SELECT 1 FROM staging WHERE main.att1 =
> staging.att1 AND main.att2 = staging.att2)
>
> which takes a
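One thing worth checking (a sketch, not taken from the truncated reply above): the staging table is created with no index at all, so the correlated EXISTS has to scan all of staging once per row of main. An index on the two join columns turns each probe into a B-tree lookup; the index name below is made up for illustration.

-- Hypothetical helper index on the unindexed staging table.
CREATE INDEX staging_att1_att2 ON staging(att1, att2);

-- Same DELETE as above; EXPLAIN QUERY PLAN should now show the subquery
-- searching staging via the index instead of scanning it.
EXPLAIN QUERY PLAN
DELETE FROM main
WHERE EXISTS (SELECT 1 FROM staging
              WHERE main.att1 = staging.att1
                AND main.att2 = staging.att2);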
The difference is that #2 mentions only one field from ItemsME, namely IDR. The
value of that field comes from the index, so the table itself doesn't need to be
read at all. It's not even clear why #2 bothers to join with ItemsME at all -
it's a no-op.
#1 uses more fields from ItemsME, so it nee
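For illustration (my sketch; the actual ItemsME schema isn't shown in the excerpt): if ItemsME has an index whose key is IDR, a query that touches only ItemsME.IDR can be answered from that index alone, which is why the join costs next to nothing in #2.

-- Hypothetical index name; assumed schema.
CREATE INDEX idx_itemsme_idr ON ItemsME(IDR);

-- Only IDR from ItemsME is referenced, so SQLite can satisfy the join
-- from idx_itemsme_idr without ever reading ItemsME rows
-- (a "covering index" lookup).
SELECT IndexME.IDI
FROM IndexME LEFT OUTER JOIN ItemsME ON ItemsME.IDR = IndexME.IDI
WHERE IndexME.Parent = ?1;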
Mira Suk wrote:
> test 1.
>
> query
> SELECT [IndexME].[IDI], [IndexME].[Status], [IndexME].[Icon], [IndexME].[Text]
> FROM [IndexME] LEFT OUTER JOIN [ItemsME]
> ON [ItemsME].[IDR] = [IndexME].[IDI] WHERE
> [IndexME].[Parent] = ?1 AND
> (TZB_MATCHRECURSIVE([Ind
>Ok then, show the result of prepending EXPLAIN QUERY PLAN to your statement.
>--
>Igor Tandetnik
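A sketch of what that looks like with a trimmed-down version of the query from the post (the full WHERE clause, including the custom functions, is cut off above):

EXPLAIN QUERY PLAN
SELECT [IndexME].[IDI], [IndexME].[Status], [IndexME].[Icon], [IndexME].[Text]
FROM [IndexME] LEFT OUTER JOIN [ItemsME]
     ON [ItemsME].[IDR] = [IndexME].[IDI]
WHERE [IndexME].[Parent] = ?1;
-- SQLite prints one plan row per table, showing whether each table is
-- scanned in full or searched via an index.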
First of all thanks for bearing with me :)
Functions:
TZB_MATCHRECURSIVE(int,int)
- disabled for this test - always returns 1; applies the filter recursively
TZB_ISCHILD(int)
- bitmask check
TZB_MATCHD
Mira Suk wrote:
> the query written here is a lot simplified (for example, the "Points" column
> is filtered using a custom function); however, the main culprit seems to be
> the LEFT OUTER JOIN, as accessing that same column in a query which only has
> the B table in it is lightning fast.
>
> result of query is
> just aro
On the MC55 and MC70, with SQLite 3.5.9, we use:
PRAGMA temp_store = MEMORY
PRAGMA journal_mode = PERSIST
PRAGMA journal_size_limit = 50
On 2/16/2011 5:24 AM, Black, Michael (IS) wrote:
> Try this benchmark program and see what numbers you get. You need to compare
> to other machines with the
Try this benchmark program and see what numbers you get. You need to compare
to other machines with the same benchmark to see if it's the machine or your
programming/architecture.
The MC55 is a 520 MHz PXA270, so I would expect to see more than a 6X difference
from my 3 GHz box (memory speed is no
On Wed, Feb 16, 2011 at 6:13 AM, wrote:
> Hi,
>
> I'm using a Motorola MC55 device, with a 2GB external memory card.
>
> For the SQLite DB I have used the following PRAGMA values
>
> PRAGMA cache_size = 16000
> PRAGMA temp_store = 2
> PRAGMA synchronous = OFF
> PRAGMA locking_mode = EXCLUSIVE
>
>
Hello!
On Thursday 01 April 2010 18:04:10 Adam DeVita wrote:
> How does
> $ time sqlite3 test32k.db "select count(1) from role_exist"
> perform?
Equal to count(*).
Best regards, Alexey Pechnikov.
http://pechnikov.tel/
> So 58s for the count of all records! The count(*) for all records may use
> the counter from the primary key b-tree, isn't it?
What does this mean? I believe there aren't any counters in the b-tree.
If you meant the counter from the auto-increment key, then what about
gaps in the middle?
Pavel
On Thu, Apr 1,
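A tiny example of the point about gaps (hypothetical table, not from the thread): once rows are deleted, the largest rowid/autoincrement value no longer equals the row count, so it cannot stand in for count(*).

CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT);
INSERT INTO t(v) VALUES ('a');
INSERT INTO t(v) VALUES ('b');
INSERT INTO t(v) VALUES ('c');
DELETE FROM t WHERE id = 2;
-- max(id) is 3 but count(*) is 2: the sequence has a gap in the middle.
SELECT max(id), count(*) FROM t;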
How does
$ time sqlite3 test32k.db "select count(1) from role_exist"
perform?
On Thu, Apr 1, 2010 at 5:52 AM, Alexey Pechnikov wrote:
> Hello!
>
> $ time sqlite3 test32k.db "select count(*) from role_exist"
> 1250
>
> real    0m58.908s
> user    0m0.056s
> sys     0m0.864s
>
> $ sqlite3 test
Igor Tandetnik wrote:
>
> Try searching for a value that doesn't fall into any block - you'll
> likely find that the query takes a noticeable time to produce zero
> records. Pick a large value that's greater than all startIpNum's.
>
Yes, you are right. That's why I'm going with the original
it finds that first matching row.
First, thanks, your suggestion worked.
To my surprise, it was enough to add "limit 1" to the original query.
So:
select * from blocks,locations where locations.locid = blocks.locid AND ? >=
blocks.startIpNum AND ? <= blocks.endIpNum limit 1
takes about 1.398e-005 seconds
and
select * from b
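For what it's worth, a common shape for this kind of range lookup (my sketch, not something proposed in the thread) also avoids the worst case Igor mentions: index the block start, fetch the single candidate block whose start is closest below the address, and let the caller check the end bound.

-- Assumed index; the posted schema shows none on blocks.
CREATE INDEX blocks_start ON blocks(startIpNum);

-- Single index seek: fetch the block with the largest startIpNum that is
-- still <= ?1, then let the caller verify that ?1 <= endIpNum. Keeping the
-- end-bound check out of the WHERE clause prevents the scan from walking
-- down through every earlier block when ?1 falls in a gap between blocks.
SELECT blocks.*, locations.*
FROM blocks
JOIN locations ON locations.locid = blocks.locid
WHERE blocks.startIpNum <= ?1
ORDER BY blocks.startIpNum DESC
LIMIT 1;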
"Dani Valevski" <[EMAIL PROTECTED]> wrote:
> I think I have a performance problem for a simple select with range.
>
> My Tables:
> CREATE TABLE locations(locid INTEGER PRIMARY KEY, ...);
>
> CREATE TABLE blocks(
> startIpNum INTEGER,
> endIpNum INTEGER,
>
[Default] On Mon, 29 Oct 2007 15:25:18 +0200, "Dani Valevski"
<[EMAIL PROTECTED]> wrote:
>I think I have a performance problem for a simple select with range.
>
>My Tables:
>CREATE TABLE locations(
>locid INTEGER PRIMARY KEY,
>country TEXT,
>regio
Richard,
Thanks for the additional info. I'll look into the multi-column index
idea. Sounds as if it might be the solution.
Stephen
On Thu, 2007-03-01 at 14:42 +, [EMAIL PROTECTED] wrote:
> Stephen Toney <[EMAIL PROTECTED]> wrote:
> > Thanks, Igor, Richard, and Tom,
> >
> > Why doesn't SQ
Stephen Toney <[EMAIL PROTECTED]> wrote:
>
> 4. We do not preserve case in the index, so it can ignore incorrect
> capitalization in the search terms. Maybe FTS does this too?
That's a function of your stemmer. The default stemmers in FTS2
both ignore capitalization.
>
> 5. For historical reas
Stephen Toney <[EMAIL PROTECTED]> wrote:
> Thanks, Igor, Richard, and Tom,
>
> Why doesn't SQLite use the index on key? I can see from the plan that it
> doesn't, but why not? Can only one index be used per query?
>
> This seems strange. I have used SQL Server and Visual Foxpro for this
> same pr
Regarding:
"Can only one index be used per query?"
Yes, I believe that *is* the defined behaviour of SQLite (though it does
support compound indices). Larger DBMSs often have very involved code
to determine query plans.
---
On Thu, 2007-03-01 at 12:46 +, [EMAIL PROTECTED] wrote:
> Or maybe better yet: Have you looked into using FTS2 for whatever
> it is you are trying to do? Full-text search is hard to get right
> and you appear to be trying to create your own. Why not use a FTS
> subsystem that is already wri
Thanks, Igor, Richard, and Tom,
Why doesn't SQLite use the index on key? I can see from the plan that it
doesn't, but why not? Can only one index be used per query?
This seems strange. I have used SQL Server and Visual Foxpro for this
same problem, and they both handle this query in a second if t
[EMAIL PROTECTED] wrote:
> Stephen Toney <[EMAIL PROTECTED]> wrote:
> >
> > Here's the problem query with the plan:
> >
> > select count(*) from keyword a, keyword b where a.key=b.key and
> > a.value='music' and b.value='history';
> >
>
> A faster approach would be:
>
>SELECT (SELECT coun
You will likely be well served by a compound index on (value,key).
As the schema stands now, the indexes will help find records with
matching values, but not with matching keys; providing one index that
correlates the two should help.
Disclaimer: I haven't recreated your schema, added said
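A concrete version of that suggestion (a sketch; the index name is made up and the column types are assumed from the query shown above):

-- Compound index correlating value and key, as suggested:
CREATE INDEX keyword_value_key ON keyword(value, key);

-- Each side of the self-join can now be resolved with an index range scan
-- on value, and the key needed for the join comes straight out of the same
-- index entries:
SELECT count(*)
FROM keyword a, keyword b
WHERE a.key = b.key
  AND a.value = 'music'
  AND b.value = 'history';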
Stephen Toney <[EMAIL PROTECTED]> wrote:
> Dear experts:
>
> I'm having a performance problem I can't understand. I am running a
> "select count(*)" query joining a table on itself, and the query runs
> for five minutes using Sqlite3.exe before I get bored and kill it. This
> is on a dual-core box
You are almost certainly encountering disk caching effects.
Makavy, Erez (Erez) wrote:
Problem summary:
---
Simple queries sometimes take ~400 ms
Analysis:
---
- A php script runs the same SQL query several times in different places
(in different transactions).
So
Thank you very much. I am happy to hear that the performance I am seeing
is in line with what others have observed. I am running this on Windows
XP.
On Tue, 22 Nov 2005, Akira Higuchi wrote:
> Hi,
>
> On Mon, 21 Nov 2005 10:56:41 -0500 (EST)
> Shane Baker <[EMAIL PROTECTED]> wrote:
>
> > I just
Hi,
On Mon, 21 Nov 2005 10:56:41 -0500 (EST)
Shane Baker <[EMAIL PROTECTED]> wrote:
> I just need to figure out why my performance is about 30x slower than what
> others are reporting when using the library in similar ways.
Are you using sqlite on windows or MacOS X?
As I tested, sqlite perform
Thank you very much for the feedback. I understand your point, hardware
takes a deterministic amount of time.
I have been basing my assumptions on these sources:
http://www.sqlite.org/cvstrac/wiki?p=PerformanceConsiderations (See
"Transactions and performance")
http://blog.amber.org/2004/11/28/s
No, as I mentioned in my original message, I am not wrapping them. I
don't want to test an unrealistic scenario for my application. In my
application, there are multiple sources that will be inserting into the
database and pooling the information for a bulk insert won't work.
I understand that I
On Mon, 21 Nov 2005, Shane Baker wrote:
>I'm sure I must be doing something wrong. This is my first attempt at
>working with SQLite.
We'll see...
>
>I have a simple table, with 7 columns. There are 6 integers and a BLOB,
>with the primary key being on an integer. When I try to run inserts (
Are you wrapping the inserts between BEGIN/END TRANSACTION statements?
BEGIN TRANSACTION;
INSERT INTO "table" (foo) VALUES ('bar');
INSERT INTO "table" (foo) VALUES ('par');
INSERT INTO "table" (foo) VALUES ('tar');
INSERT INTO "table" (foo) VALUES ('far');
..
INSERT INTO "table" (foo) VALUES ('car');
END TRANSACTION;
> SQLite only uses a single index per table on any given query.
> This is unlikely to change.
Would it be able to use a multi-column index on ipnode + author?
Hugh
> Shi Elektronische Medien GmbH, Peter Spiske wrote:
> >
> > the following simple query is very slow:
> > SELECT title FROM t1 WHE
Shi Elektronische Medien GmbH, Peter Spiske wrote:
the following simple query is very slow:
SELECT title FROM t1 WHERE ipnode='VZ' ORDER BY author;
The database is about 250 MB in size and the table the query is run against
has 12 cols and 120,000 rows.
Every col has an index.
The above query ret
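To make the multi-column idea concrete (a sketch based on the query above; the index name and the rest of the schema are assumed):

-- One index that serves both the WHERE filter and the ORDER BY:
CREATE INDEX t1_ipnode_author ON t1(ipnode, author);

-- SQLite can now jump to the ipnode='VZ' range of the index and read it
-- in author order, avoiding both the full table scan and a separate sort.
SELECT title FROM t1 WHERE ipnode = 'VZ' ORDER BY author;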
On Thu, 2003-11-06 at 23:31, [EMAIL PROTECTED] wrote:
> I guess in this case the hashes aren't really hashes anymore, they're more
> like arrays. There wouldn't be any bucket under hash unless the key was
> identical for two values. The B-tree entry becomes x div 4 (or an
> appropriate number).
"Mrs. Brisby" <[EMAIL PROTECTED]>
07/11/2003 12:50 PM
To: [EMAIL PROTECTED]
cc: [EMAIL PROTECTED]
Subject: Re: [sqlite] Performance problem
> On Thu, 2003-11-06 at 19:00, [EMAIL PROTECTED] wrote:
> > How would you handle the l
On Thu, 2003-11-06 at 19:00, [EMAIL PROTECTED] wrote:
> How would you handle the lack of ordering associated with hash tables?
> SQLite can currently use indices for three main tests: equals, less than,
> and greater than. While hash-tables are good at finding equal-to in
> constant time it usua
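Spelled out as queries (hypothetical table, just to illustrate the three tests mentioned): a b-tree index serves all three, whereas a hash index would only help with the equality test.

CREATE TABLE nums (k INTEGER);
CREATE INDEX nums_k ON nums(k);

SELECT * FROM nums WHERE k = 42;              -- equals
SELECT * FROM nums WHERE k < 42;              -- less than
SELECT * FROM nums WHERE k > 42 ORDER BY k;   -- greater than, in key order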
On Wed, 2003-11-05 at 23:59, Jonas Forsman / Axier.SE wrote:
> According to the postgresql documentation, the hash algorithm is
> discouraged compared to b-tree for performance reasons.
>
> http://www.postgresql.org/docs/view.php?version=7.3&idoc=1&file=indexes-types.html
>
> Note: Testing has sh
> -Original Message-
> From: Mrs. Brisby [mailto:[EMAIL PROTECTED]
>
> MySQL has stated in its documentation that it uses a B-tree for
> its index.
> I think this is a mistake - especially for larger indexes.
> Using several B-trees attached to a hash-table is much faster
> (if the hash i
[EMAIL PROTECTED] wrote:
DRH: Will the changes to indices allow us to define arbitrary collation
functions? If so, will those indices be used when a query is done that
could use the arbitrary collation function?
Likely so. But no promises yet.
--
D. Richard Hipp -- [EMAIL PROTECTED] -- 704.9
On Tue, 2003-11-04 at 17:12, D. Richard Hipp wrote:
> I'm not sure what you or Mrs.Brisby mean by "packing".
My apologies then; my statement was made before espresso, so no doubt I
wasn't making much sense.
Here goes:
Because MySQL stores the literal values of integers as a four octet
value the
Paul Nash wrote:
Regarding the insertion efficiency: I know nothing about SQLite internals and
little about B-trees, which I understand SQLite uses, but a look at Knuth
provides some tweaks to the B-tree algorithms which improve performance.
Mrs. Brisby suspects MySQL is packing indexes and this is the f
I have just started using SQLite and it looks great. I support it remaining
small, simple, and correct. If it is small, then performance in most cases is
not such an issue. I have not yet compared it to the micro db I wrote for
webpage systems and will be interested to see the results.
Regarding the
Doug Currie wrote:
Using the OS or RAID may solve many problems. In this case, though, I
proposed a solution that eliminated all of the index journaling
normally applied by sqlite. In D. Richard Hipp's example there were
3 writes to (mostly) different pages for idx2. With sqlite
presently this
Allan Edwards wrote:
>
> I have YET to see a database, small to massively scalable that could handle
> BLOBS worth anything. ... I prefer the simplicity talk given early. If
> someone wants blobs, do it the old fashioned way!
>
Your concerns are understood, for BLOBs that are truly large. But w
base of code to start from.
Just some thoughts.
Allan
Avner Levy wrote:
We have just finished testing the same scenario with MySQL and, amazingly,
they continued to insert 1500-3000 rows per second even when the
database had 60,000,000 records. I don't know how this magic is done...
Nor do I. If anybody can clue me in, I would appreciate it. I suspec
Hi,
We have just finished testing the same scenario with MySQL and, amazingly,
they continued to insert 1500-3000 rows per second even when the
database had 60,000,000 records.
I don't know how this magic is done, but that's what the tests showed
(with the same machine / OS).
There is one thing I don
Avner Levy wrote:
Hi,
I've written the following program to test the SQLite performance for a
specific scenario I have.
I've used most of the tricks I've found, but still, when the database gets big
the performance becomes unacceptable.
The performance problem happens of course only if the indexes are define