Jürgen Baier wrote:

Hi,

I have a question regarding the performance of DELETE FROM (or maybe
better: of correlated subqueries).

I have a table "main" and a table "staging". "staging" holds a
subset of "main". I want to delete all rows from "main" which are also
in "staging".

CREATE TABLE main ( ATT1 INT, ATT2 INT, PRIMARY KEY (ATT1,ATT2) );
CREATE TABLE staging ( ATT1 INT, ATT2 INT );

Then I execute

DELETE FROM main WHERE EXISTS (SELECT 1 FROM staging WHERE main.att1 =
staging.att1 AND main.att2 = staging.att2)

which takes a long time.
The difference is that #2 mentions only one field from ItemsME, namely IDR. The
value of that field comes from the index; the table itself doesn't need to be
read at all. It's not even clear why #2 bothers to join with ItemsME at all:
it's a no-op.
#1 uses more fields from ItemsME, so it needs to fetch the actual table rows.
Mira Suk wrote:
> test 1.
>
> query
> SELECT [IndexME].[IDI], [IndexME].[Status], [IndexME].[Icon], [IndexME].[Text]
> FROM [IndexME] LEFT OUTER JOIN [ItemsME]
> ON [ItemsME].[IDR] = [IndexME].[IDI] WHERE
> [IndexME].[Parent] = ?1 AND
> (TZB_MATCHRECURSIVE([Ind
>Ok then, show the result of prepending EXPLAIN QUERY PLAN to your statement.
>--
>Igor Tandetnik
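Igor's suggestion can be scripted as well: prepending EXPLAIN QUERY PLAN to any statement shows which indexes SQLite picks. The table and index below are a cut-down, invented stand-in for the schema in the thread:

```python
import sqlite3

# Sketch: run EXPLAIN QUERY PLAN on a statement and inspect the plan text.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE IndexME (IDI INTEGER PRIMARY KEY, Parent INTEGER)")
con.execute("CREATE INDEX idx_parent ON IndexME(Parent)")  # name is made up

query = "SELECT IDI FROM IndexME WHERE Parent = ?"
rows = con.execute("EXPLAIN QUERY PLAN " + query, (1,)).fetchall()
plan = " ".join(r[-1] for r in rows)  # last column is the human-readable detail
print(plan)
```

The exact wording of the plan varies between SQLite versions, but the chosen index name always appears in the detail column.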
First of all thanks for bearing with me :)

Functions:
TZB_MATCHRECURSIVE(int,int) - disabled for this test; always returns 1. Applies the filter recursively.
TZB_ISCHILD(int) - bitmask check.
TZB_MATCHD
Mira Suk wrote:
> query written here is a lot simplified (for example "Points" column is
> filtered using custom function) however main culprit seems
> to be LEFT OUTER JOIN as accessing that same column in query which only has B
> table in it is lightning fast.
>
> result of query is
> just aro
Hey everyone.

New to SQLite, so please have patience with me. I have two tables and am doing a left outer join:

A
IDI INTEGER PRIMARY KEY
Parent INTEGER INDEX
Status INTEGER

B
IDR UNIQUE INTEGER FOREIGN KEY IndexME.IDI
Points TEXT (on average a ~120 character string)
(this table is primarily
> if (interval != 1 && (n % interval) == 0) {
>     sqlite3_exec(db, "commit", NULL, NULL, NULL);
>     sqlite3_exec(db, "begin", NULL, NULL, NULL);
> }
> sqlite3_bind_int(stmt, 1, n);
> rc = sqlite3_step(stmt);
> if (rc != SQLITE_DONE) {
sqlite3_exec(db, "commit", NULL, NULL, NULL);
printf("commits per second: %.1f\n", nrec / (elapsed() - t1));
sqlite3_close(db);
}
Michael D. Black
Senior Scientist
NG Information Systems
Advanced Analytics Directorate
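The quoted loop commits every `interval` rows instead of once per insert. A rough Python sketch of the same benchmark (the table name, row count and interval here are made up for illustration):

```python
import sqlite3
import time

def timed_inserts(nrec=10_000, interval=1_000):
    """Insert nrec rows, committing every `interval` rows, and return
    the connection and the achieved rows-per-second rate."""
    con = sqlite3.connect(":memory:")
    con.isolation_level = None          # manage transactions by hand
    con.execute("CREATE TABLE t (n INTEGER)")
    t0 = time.perf_counter()
    con.execute("BEGIN")
    for n in range(nrec):
        if interval != 1 and n > 0 and n % interval == 0:
            con.execute("COMMIT")       # close the current batch...
            con.execute("BEGIN")        # ...and start the next one
        con.execute("INSERT INTO t VALUES (?)", (n,))
    con.execute("COMMIT")
    return con, nrec / (time.perf_counter() - t0)

con, rate = timed_inserts()
print(f"inserts per second: {rate:.1f}")
```

On a disk-backed database the difference between `interval=1` and a larger batch is typically orders of magnitude, because every COMMIT forces a sync.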
Hi,

I'm using a Motorola MC55 device with a 2 GB external memory card.

For the SQLite DB I have used the following PRAGMA values for some performance improvement:

PRAGMA cache_size = 16000
PRAGMA temp_store = 2
PRAGMA synchronous = OFF
PRAGMA locking_mode = EXCLUSIVE

For insertion of records I
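For reference, those pragmas can be applied and read back like this; a sketch using Python's sqlite3 module, with the values taken from the post (they are the poster's settings, not recommendations):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA cache_size = 16000")
con.execute("PRAGMA temp_store = 2")        # 2 = MEMORY
con.execute("PRAGMA synchronous = OFF")     # reads back as 0
con.execute("PRAGMA locking_mode = EXCLUSIVE")

# A PRAGMA with no argument reports the current value.
for name in ("cache_size", "temp_store", "synchronous", "locking_mode"):
    print(name, con.execute(f"PRAGMA {name}").fetchone()[0])
```

Note that `synchronous = OFF` trades durability for speed: a power loss mid-write can corrupt the database file.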
Hello!
On Thursday 01 April 2010 18:04:10 Adam DeVita wrote:
> How does
> $ time sqlite3 test32k.db "select count(1) from role_exist"
> perform?
Equal to count(*).
Best regards, Alexey Pechnikov.
http://pechnikov.tel/
> So 58s for the count of all records! The count(*) of all records could use
> the counter from the primary key b-tree, couldn't it?

What does this mean? I don't believe there is any kind of counter in the
b-tree. If you meant the counter from an auto-increment key, then what about
gaps in the middle?

Pavel
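Pavel's point can be shown directly: SQLite stores no row counter in the table b-tree, and the maximum rowid stops matching the row count as soon as a deletion leaves a gap. A small sketch (column names borrowed from the role_exist schema, data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE role_exist (id INTEGER PRIMARY KEY, uid BLOB)")
con.executemany("INSERT INTO role_exist (id) VALUES (?)",
                [(i,) for i in range(1, 6)])
con.execute("DELETE FROM role_exist WHERE id = 3")   # leave a gap

count = con.execute("SELECT count(*) FROM role_exist").fetchone()[0]
max_id = con.execute("SELECT max(id) FROM role_exist").fetchone()[0]
print(count, max_id)  # 4 5 -- max(id) is not a row count once there are gaps
```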
How does
$ time sqlite3 test32k.db "select count(1) from role_exist"
perform?
Hello!

$ time sqlite3 test32k.db "select count(*) from role_exist"
1250

real  0m58.908s
user  0m0.056s
sys   0m0.864s

$ sqlite3 test32k.db
SQLite version 3.6.23
sqlite> .schema role_exist
CREATE TABLE role_exist (
id INTEGER PRIMARY KEY,
uid BLOB NOT NULL DEFAULT (randomblob(16)) U
Igor Tandetnik wrote:
>
> Try searching for a value that doesn't fall into any block - you'll
> likely find that the query takes a noticeable time to produce zero
> records. Pick a large value that's greater than all startIpNum's.
>
Yes, you are right. That's why I'm going with the original
it finds that first matching
row.
First, thanks, your suggestion worked.
To my surprise, it was enough to add "limit 1" to the original query.
So:
select * from blocks,locations where locations.locid = blocks.locid AND ? >=
blocks.startIpNum AND ? <= blocks.endIpNum limit 1
takes about 1.398e-05 seconds
and
select * from b
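A sketch of why LIMIT 1 helps here: SQLite can abandon the scan at the first matching row instead of checking every block. The schema and data below are reduced to the columns mentioned in the thread:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE locations (locid INTEGER PRIMARY KEY, country TEXT)")
con.execute("CREATE TABLE blocks (startIpNum INTEGER, endIpNum INTEGER, locid INTEGER)")
con.execute("CREATE INDEX idx_start ON blocks(startIpNum)")  # index name is made up
con.execute("INSERT INTO locations VALUES (1, 'US')")
con.execute("INSERT INTO blocks VALUES (100, 200, 1)")
con.execute("INSERT INTO blocks VALUES (201, 300, 1)")

ip = 150
rows = con.execute(
    "SELECT * FROM blocks, locations WHERE locations.locid = blocks.locid "
    "AND ? >= blocks.startIpNum AND ? <= blocks.endIpNum LIMIT 1",
    (ip, ip)).fetchall()
print(rows)   # exactly one row; the scan stops at the first match
```

As noted earlier in the thread, a value that matches no block still forces a long scan, because there is no first match at which to stop.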
I think I have a performance problem for a simple select with range.

My Tables:
CREATE TABLE locations(
locid INTEGER PRIMARY KEY,
country TEXT,
region TEXT,
city TEXT,
postalCode TEXT,
latitud
Richard,
Thanks for the additional info. I'll look into the multi-column index
idea. Sounds as if it might be the solution.
Stephen
On Thu, 2007-03-01 at 14:42 +, [EMAIL PROTECTED] wrote:
> Stephen Toney <[EMAIL PROTECTED]> wrote:
> > Thanks, Igor, Richard, and Tom,
> >
> > Why doesn't SQ
Stephen Toney <[EMAIL PROTECTED]> wrote:
>
> 4. We do not preserve case in the index, so it can ignore incorrect
> capitalization in the search terms. Maybe FTS does this too?
That's a function of your stemmer. The default stemmers in FTS2
both ignore capitalization.
>
> 5. For historical reas
Stephen Toney <[EMAIL PROTECTED]> wrote:
> Thanks, Igor, Richard, and Tom,
>
> Why doesn't SQLite use the index on key? I can see from the plan that it
> doesn't, but why not? Can only one index be used per query?
>
> This seems strange. I have used SQL Server and Visual Foxpro for this
> same pr
Regarding:
"Can only one index be used per query?"

Yes, I believe that *is* the defined behaviour of sqlite (though it does
support compound indices). Larger DBMSs often have very involved code
to determine query plans.
---
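A minimal sketch of the compound-index point: SQLite picks at most one index per table per query, but a single index spanning both columns lets that one index serve both WHERE terms. Index and table names here are illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE keyword (key INTEGER, value TEXT)")
con.execute("CREATE INDEX idx_value_key ON keyword(value, key)")

# Both conditions land in one index lookup rather than two separate indexes.
plan = " ".join(r[-1] for r in con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM keyword WHERE value = 'music' AND key = 5"))
print(plan)
```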
On Thu, 2007-03-01 at 12:46 +, [EMAIL PROTECTED] wrote:
> Or maybe better yet: Have you looked into using FTS2 for whatever
> it is you are trying to do? Full-text search is hard to get right
> and you appear to be trying to create your own. Why not use a FTS
> subsystem that is already wri
aid index and
> checked that the query plan produced is better. Don't assume this to be
> good advice without trying it. :)
>
>-Tom
>
[EMAIL PROTECTED] wrote:
> Stephen Toney <[EMAIL PROTECTED]> wrote:
>> >
> > Here's the problem query with the plan:
> >
> > select count(*) from keyword a, keyword b where a.key=b.key and
> > a.value='music' and b.value='history';
> >
>
> A faster approach would be:
>
>SELECT (SELECT coun
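The suggested rewrite is cut off above, so as an illustration here is the original self-join next to one common alternative, an INTERSECT on the keys. The sample data is invented, and the two counts agree only when each (key, value) pair is unique:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE keyword (key INTEGER, value TEXT)")
con.executemany("INSERT INTO keyword VALUES (?, ?)",
                [(1, 'music'), (1, 'history'), (2, 'music'), (3, 'history')])

# The original self-join: count keys that carry both values.
join_count = con.execute(
    "SELECT count(*) FROM keyword a, keyword b "
    "WHERE a.key = b.key AND a.value = 'music' AND b.value = 'history'"
).fetchone()[0]

# One common alternative: intersect the two key sets.
intersect_count = con.execute(
    "SELECT count(*) FROM (SELECT key FROM keyword WHERE value = 'music' "
    "INTERSECT SELECT key FROM keyword WHERE value = 'history')"
).fetchone()[0]

print(join_count, intersect_count)  # both 1 for this data
```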
Dear experts:
I'm having a performance problem I can't understand. I am running a
"select count(*)" query joining a table on itself, and the query runs
for five minutes using Sqlite3.exe before I get bored and kill it. This
is on a dual-core box with 4GB of memory, running Windows XP Pro. The
Sqli
You are almost certainly encountering disk caching effects.
Problem summary:
---
Simple queries sometimes take ~400 ms.

Analysis:
---
- A PHP script runs the same SQL query several times in different places
(in different transactions). Some of the queries take around 400 ms while
the others (identical queries) take only 4-20 ms.
Thank you very much. I am happy to hear that the performance I am seeing
is in line with what others have observed. I am running this on Windows
XP.
Hi,
On Mon, 21 Nov 2005 10:56:41 -0500 (EST)
Shane Baker <[EMAIL PROTECTED]> wrote:
> I just need to figure out why my performance is about 30x slower than what
> others are reporting when using the library in similar ways.
Are you using sqlite on windows or MacOS X?
As I tested, sqlite perform
Thank you very much for the feedback. I understand your point, hardware
takes a deterministic amount of time.
I have been basing my assumptions on these sources:
http://www.sqlite.org/cvstrac/wiki?p=PerformanceConsiderations (See
"Transactions and performance")
http://blog.amber.org/2004/11/28/s
No, as I mentioned in my original message, I am not wrapping them. I
don't want to test an unrealistic scenario for my application. In my
application, there are multiple sources that will be inserting into the
database and pooling the information for a bulk insert won't work.
I understand that I
On Mon, 21 Nov 2005, Shane Baker wrote:
>I'm sure I must be doing something wrong. This is my first attempt at
>working with SQLite.
We'll see...
>
>I have a simple table, with 7 columns. There are 6 integers and a BLOB,
>with the primary key being on an integer. When I try to run inserts (
Are you wrapping the transactions in between Begin/End Transactions?
BEGIN TRANSACTION;
INSERT INTO table (foo) VALUES (bar);
INSERT INTO table (foo) VALUES (par);
INSERT INTO table (foo) VALUES (tar);
INSERT INTO table (foo) VALUES (far);
..
INSERT INTO table (foo) VALUES (car);
INSERT INTO table
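The BEGIN/INSERT/COMMIT pattern above can be sketched with Python's sqlite3 module; the table and values are the placeholders from the example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (foo TEXT)")  # "table" is a keyword, so renamed

with con:   # one transaction around the whole batch
    con.executemany("INSERT INTO items (foo) VALUES (?)",
                    [("bar",), ("par",), ("tar",), ("far",), ("car",)])

n = con.execute("SELECT count(*) FROM items").fetchone()[0]
print(n)  # 5
```

Without the enclosing transaction, each INSERT pays for its own commit, which is the per-insert fsync cost discussed throughout this thread.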
I'm sure I must be doing something wrong. This is my first attempt at
working with SQLite.
I have a simple table, with 7 columns. There are 6 integers and a BLOB,
with the primary key being on an integer. When I try to run inserts (one
insert per transaction - I know this is not optimal, but it
> well, it could be true, but not in the queries i have posted. i "group
> by" column "a" and there is an index on column "a", so sqlite does not
> have to do anything to compute the key. it does not even have to go back to
Do not confuse the index key with the aggregator key. The two may be
the s
On Fri, 2005-04-08 at 11:53 -0700, Maksim Yevmenkin wrote:
> CREATE INDEX data_by_a ON data (a);
>
> > time sqlite3 db 'select n2 from data where a <= 18234721' > /dev/null
> 25.95u 0.71s 0:27.02 98.6%
>
If you make the index look like this:
CREATE INDEX data_by_a ON data(a, n2);
Then SQLite
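A sketch of the two-column index idea: once the index holds both `a` and `n2`, EXPLAIN QUERY PLAN reports a covering index and the table rows are never fetched. The exact plan wording varies between SQLite versions:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE data (
    a INTEGER, b INTEGER, c CHAR, d INTEGER, e INTEGER, n1 FLOAT, n2 FLOAT)""")
con.execute("CREATE INDEX data_by_a ON data(a, n2)")

# Every column the query touches (a, n2) lives in the index itself.
plan = " ".join(r[-1] for r in con.execute(
    "EXPLAIN QUERY PLAN SELECT n2 FROM data WHERE a <= 18234721"))
print(plan)
```

This avoids the index-page/data-page seek ping-pong described later in the thread, at the cost of a larger index.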
On Tue, 12 Apr 2005, Maksim Yevmenkin wrote:
>Dear SQLite users,
>
>consider this
>
> [snip]
>
>it only took 4+ seconds to read, parse, perform hash table lookup and
>sum the data. note that for unique 1417 keys it had to do hash lookup
>and hash insert.
>
>so, just with plain ascii file i get fou
Maksim Yevmenkin wrote:
>
>>>so, just with plain ascii file i get four times the speed i get with
>>>sqlite. note that my c program will scale linearly with the size of
>>>dataset (just like i see with sqlite).
>>>
>>>
>> With anything related to computers, there are always tradeoffs - mos
John.
> >i think, i know what is going on here. the problem is that every time
> >i do an indexed scan sqlite has to
> >
> >1) fetch index pages
> >
> >2) fetch data pages that match "where" condition
> >
> >because both index and data are in the same file sqlite has to perform
> >insane amount of
> > SELECT count(*) FROM (SELECT a,n1 FROM data WHERE a <= 18234721 GROUP BY a);
> >
> > > time sqlite3 db < test.sql
> > 30
> > 1024
> > 1417
> > 13.14u 1.06s 0:14.40 98.6%
>
> Have you tried doing the query like this:
>
> SELECT count(*) FROM (SELECT a,n1 FROM data WHERE a-18234721<=0 GRO
Maksim Yevmenkin wrote:
i think, i know what is going on here. the problem is that every time
i do an indexed scan sqlite has to
1) fetch index pages
2) fetch data pages that match "where" condition
because both index and data are in the same file sqlite has to perform
insane amount of seek() calls
On Tue, 2005-04-12 at 16:17 -0700, Maksim Yevmenkin wrote:
> SELECT count(*) FROM (SELECT a,n1 FROM data WHERE a <= 18234721 GROUP BY a);
>
> > time sqlite3 db < test.sql
> 30
> 1024
> 1417
> 13.14u 1.06s 0:14.40 98.6%
>
Have you tried doing the query like this:
SELECT count(*) FROM (SEL
Thomas,
> > with sum(n1) added query runs twice as slow. as i was told its because
> > sqlite has to fetch data row. fine, but why its soo slow?! and it
>
>Because for each row it has to compute the aggregate key, find the
> aggregator for that key and increment the sum for that aggregate key
> with sum(n1) added query runs twice as slow. as i was told its because
> sqlite has to fetch data row. fine, but why its soo slow?! and it
Because for each row it has to compute the aggregate key, find the
aggregator for that key and increment the sum for that aggregate key.
That's a lot mo
Dear SQLite users,
consider this
schema
-
sqlite> .schema data
CREATE TABLE data(
a INTEGER,
b INTEGER,
c CHAR,
d INTEGER,
e INTEGER,
n1 FLOAT,
n2 FLOAT
);
CREATE INDEX data_by_a on data (a);
data set
-
sqlite> select count(*) from data where a <= 18234721;
92
Gé,
thanks for the suggestion. unfortunately it did not make any
difference :( below is the results. as you can see it takes 7+ seconds
to "group by" 333,392 records and i'm grouping by column on which i
have index. again, i'm not a database guy, but i think that is slow.
perhaps someone can comm
Maksim,
Some things you could try:
1) increase cache memory
You may be causing a lot of cache misses if the size of the query result
is very large compared to the size of the cache. Index-based searches
can cause multiple reloads of the same page because of a lack of
locality in the cache. An in
Robert,
> [snip]
>
> > i said i print these rows to /dev/null too in my perl code. plus the
> > perl code does some other things such as joining these rows with other
> > hashes and summing the numbers.
>
> That's fine. I was merely trying to account for the 50% speed difference
> between the t
Let's recap ...
> time sqlite3 db 'select n1 from data where a <= 18234721' > /dev/null
> 26.15u 0.59s 0:27.00 99.0%
>
> time sqlite3 db 'select n1 from data where a <= 18234721' > /dev/null
> 26.04u 0.61s 0:26.91 99.0%
>
> time sqlite3 db 'select e from data where a <= 18234721' > /dev/null
> 12.
Robert,
> > i guess, i can believe this. however its pretty disappointing to get
> > 50% improvement on 30 times less dataset :(
> >
> > but how do you explain this?
> >
> > sqlite> .schema data
> > CREATE TABLE data
> > (
> >a INTEGER,
> >b INTEGER,
> >c CHAR,
> >d
Christian,
thanks for the reply.
> >i'm having strange performance problem with sqlite-3.2.0. consider the
> >following table
> >
> > [snip]
> >
> >now the problem:
> >
> >1) if i do a select with an index it takes 27 sec. to get 92 rows
> >
> >> time sqlite3 db 'select n2 from data where a <=
Hello,
i'm having strange performance problem with sqlite-3.2.0. consider the
following table
sqlite> .schema data
CREATE TABLE data
(
a INTEGER,
b INTEGER,
c CHAR,
d INTEGER,
e INTEGER,
n1 FLOAT,
n2 FLOAT
);
CREATE INDEX data_by_a ON data (a);
sq
> SQLite only uses a single index per table on any given query.
> This is unlikely to change.

Would it be able to use a multi-column index on ipnode + author?

Hugh
Hi,
the following simple query is very slow:
SELECT title FROM t1 WHERE ipnode='VZ' ORDER BY author;
The database is about 250 MB in size and the table the query is run against
has 12 cols and 120,000 rows.
Every col has an index.
The above query returns about 80% of the records.
As soon as the
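One way to see the ORDER BY cost: with a compound index on (ipnode, author), the plan satisfies both the filter and the sort in a single index walk, so no "USE TEMP B-TREE FOR ORDER BY" step appears. This is only a sketch (index name invented); when 80% of the rows match, a full scan plus sort can still be faster in practice:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (ipnode TEXT, author TEXT, title TEXT)")
con.execute("CREATE INDEX idx_node_author ON t1(ipnode, author)")

# Rows come out of the index already ordered by author within ipnode='VZ',
# so SQLite needs no separate sorting pass.
plan = " ".join(r[-1] for r in con.execute(
    "EXPLAIN QUERY PLAN SELECT title FROM t1 "
    "WHERE ipnode = 'VZ' ORDER BY author"))
print(plan)
```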
On Thu, 2003-11-06 at 23:31, [EMAIL PROTECTED] wrote:
> I guess in this case the hashes aren't really hashes anymore, they're more
> like arrays. There wouldn't be any bucket under hash unless the key was
> identical for two values. The B-tree entry becomes x div 4 (or an
> appropriate number).
On Thu, 2003-11-06 at 19:00, [EMAIL PROTECTED] wrote:
> How would you handle the lack of ordering associate with hash tables?
> Sqlite can currently use indicies for three main tests: equals, less than,
> and greater than. While hash-tables are good at finding equal-to in
> constant time it usua
On Wed, 2003-11-05 at 23:59, Jonas Forsman / Axier.SE wrote:
> According to the postgresql documentation, the hash algorithm is
> discouraged compared to b-tree for performance reasons.
>
> http://www.postgresql.org/docs/view.php?version=7.3&idoc=1&file=indexes-types.html
>
> Note: Testing has sh
> -Original Message-
> From: Mrs. Brisby [mailto:[EMAIL PROTECTED]
>
> MySQL has stated in-documentation that it uses a B-tree for
> it's index.
> I think this is a mistake- especially for larger indexes.
> Using several B-trees attached to a hash-table is much faster
> (if the hash i
eature to have.
Any chance of having someone look into the possibility of implementing something like that?

regards
Greg
[EMAIL PROTECTED] wrote:
DRH: Will the changes to indicies allow us to define arbitrary collation
functions? If so, will those indicies be used when a query is done that
could use the arbitrary collation function?
Likely so. But no promises yet.
--
D. Richard Hipp -- [EMAIL PROTECTED] -- 704.9
On Tue, 2003-11-04 at 17:12, D. Richard Hipp wrote:
> I'm not sure what you or Mrs.Brisby mean by "packing".
My apologies then; my statement was made before espresso so I don't
doubt I was making much sense.
Here goes:
Because MySQL stores the literal values of integers as a four octet
value the
> Your suggestion was to reconstruct the index from original
> table data on a rollback. For a large transaction that touches
> most pages of the index, this can give (at best) a 2:1 speedup.
> But the other side, the potential slowdown is extreme.
Yeah, there is that drawback. Other DBMSs avoid
Paul Nash wrote:

Regarding insertion efficiency: I know nothing about SQLite internals and
little about B-trees, which I understand SQLite uses, but a look at Knuth
provides some tweaks to the B-tree algorithms which improve performance.
Mrs. Brisby suspects MySQL is packing indexes and this is the f

I have just started using SQLite and it looks great. I support it remaining
small, simple and correct. If it is small, then performance in most cases is
not such an issue. I have not yet compared it to the micro DB I wrote for
webpage systems and will be interested to see the results.
Doug Currie wrote:
Using the OS or RAID may solve many problems. In this case, though, I
proposed a solution that eliminated all of the index journaling
normally applied by sqlite. In D. Richard Hipp's example there were
3 writes to (mostly) different pages for idx2. With sqlite
presently this
Monday, November 3, 2003, 11:22:55 PM, Ben Carlyle wrote:
>> > Can any readers suggest ways that I have not thought of for making
>> > large numbers of non-localized INSERTs go faster?
>> Instead of journaling the old ("undo") pages of idx2, simply record
>> the fact that idx2 is modified (i.e.,
Allan Edwards wrote:
>
> I have YET to see a database, small to massively scalable, that could handle
> BLOBs worth anything. ... I prefer the simplicity talk given earlier. If
> someone wants blobs, do it the old-fashioned way!
>
Your concerns are understood, for BLOBs that are truly large. But w
Avner Levy wrote:
>
> We have just finished testing the same scenario with MySql at
>
base
of code to start from.
Just some thoughts.
Allan
Avner Levy wrote:

We have just finished testing the same scenario with MySQL and amazingly
they continued to insert 1500-3000 rows per second even when the
database had 60,000,000 records. I don't know how this magic is done...

Nor do I. If anybody can clue me in, I would appreciate it. I suspec

Hi,

We have just finished testing the same scenario with MySQL and amazingly
they continued to insert 1500-3000 rows per second even when the
database had 60,000,000 records.
I don't know how this magic is done, but that's what the tests showed
(with the same machine / OS).

There is one thing I don
> If you disable idx2, you will see that the inserts go much faster. I
> also observe that the journal file is about 1/3rd the size of the database,
> which is consistent with the observation that every disk block associated
> with idx2 needs to be changed. My TCL test script for this situation
>
Hi,
I've written the following program to test the sqlite performance for a
specific scenario I have.
I've used most tricks I've found but still when the database gets big
the performance gets unacceptable.
The performance problem happens of course only if the indexes are defined.
Since I need th