Re: [sqlite] Performance problem with 3.2.7

2005-11-21 Thread Chris Schirlinger
Are you wrapping the inserts between BEGIN/COMMIT TRANSACTION statements?

BEGIN TRANSACTION;
INSERT INTO table (foo) VALUES (bar);
INSERT INTO table (foo) VALUES (par);
INSERT INTO table (foo) VALUES (tar);
INSERT INTO table (foo) VALUES (far);
..
INSERT INTO table (foo) VALUES (car);
INSERT INTO table (foo) VALUES (jar);
INSERT INTO table (foo) VALUES (mar);
COMMIT TRANSACTION;

Check out this document for more info 
http://www.sqlite.org/lang_transaction.html

 I'm sure I must be doing something wrong.  This is my first attempt at
 working with SQLite.




[sqlite] Looking for source sqlite-3.0.7.tar.gz

2005-11-21 Thread Sami Islam
Hello,

I would like to compile sqlite3odbc v. 0.65 from C. Werner and need
the sqlite-3.0.7 source for it. I tried searching for it on the internet
without any success. I can't even log in to the CVS repository at
www.sqlite.org with the password: anonymous.

Can anyone point me in the right direction to where I can get the source for
3.0.7?

Thanks,
Sami




Re: [sqlite] Rows to columns

2005-11-21 Thread Christian Smith
On Sat, 19 Nov 2005, Matthias Teege wrote:

Christian Smith schrieb:

 Index the pairs table, like I do in my schema. You might want to index by
 id and field as these are primarily what you use to read data in this

Does it make any difference if the index is unique or not?


Yes. You'll want a unique index, as a row can have only a single column of
each name. It makes no sense to have more than one column of the same name,
and so it makes no sense to store multiple values for the same field value of
the same id.



 Now, all lookups used to implement the view are done using index lookups.

Can I check that a query uses the index?


You can examine the output of EXPLAIN <sql query> in the sqlite shell.
You should get something like:

xx|OpenRead|xx|xx|(pairs autoindex 1)

for each cursor that iterates through the pairs table. There should be a
cursor per field in the view.
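
For example (a minimal sketch; the exact table and index definitions below are
assumptions based on the schema discussed, not taken verbatim from the thread):

CREATE TABLE pairs (id INTEGER, field TEXT, value TEXT);
CREATE UNIQUE INDEX pairs_id_field ON pairs (id, field);
EXPLAIN SELECT value FROM pairs WHERE id = 1 AND field = 'name';

In the OpenRead rows of the output, the index name (here pairs_id_field, or an
automatic index such as "pairs autoindex 1") should appear rather than a plain
scan of the table.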


Many thanks
Matthias


-- 
/\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \


Re: [sqlite] Performance problem with 3.2.7

2005-11-21 Thread Christian Smith
On Mon, 21 Nov 2005, Shane Baker wrote:

I'm sure I must be doing something wrong.  This is my first attempt at
working with SQLite.


We'll see...



I have a simple table, with 7 columns.  There are 6 integers and a BLOB,
with the primary key being on an integer.  When I try to run inserts (one
insert per transaction - I know this is not optimal, but it represents my
application's usage), I am only getting about 7 inserts per second, on
average.

My first suspect was the BLOB and the fact that I was binding this
parameter using SQLITE_TRANSIENT (using sqlite3_bind_blob).  I removed the
BLOB from the schema altogether, leaving just 6 integers, and I still have
the same performance.


The performance problem is the synchronous IO bottleneck of doing only a
single insert per transaction.



For reference, I am getting around 10,000 queries per second when I lookup
a row based on the primary key column.

All performance measurements I've seen posted by others suggest between
200 and 300 inserts per second with one insert per transaction.


Probably not, unless this is to a FLASH device, for example. The
bottleneck in hard disk IO is the rotational and head movement latencies
involved in writing data to the platters. Assuming no head movement, a
7200 rpm disk will only allow the same sector to be rewritten once per
revolution, i.e. 7200 times a minute, or 120 times a second. Add in that many
different sectors need to be updated synchronously, and throughput drops dramatically.

A quick test indicates that I can almost double the performance on
Linux/ext3 by having the data=journal option set in the mount flags. This is
because head movement is reduced significantly. A test that previously
took ~500 seconds (13785 inserts without transactions) took 280 seconds
with data=journal. For reference, the same data inserted with a single
transaction took ~1.2 seconds!



I haven't run a profiler yet but hope to do this tomorrow.  Does anyone
have any ideas as to what I might be doing wrong, or where I should look?


If you can change your model to insert more than 1 row per transaction,
you should see a significant performance increase. You'll see roughly N
times the performance for small N.

If this is not an option, look at your storage and how you can reduce
latency. FLASH devices have low latency, being solid state, and some RAID
controllers have battery backed buffers, and so may have lower latency.


Thanks in advance.


Christian

-- 
/\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \


Re: [sqlite] Performance problem with 3.2.7

2005-11-21 Thread Shane Baker
No, as I mentioned in my original message, I am not wrapping them.  I
don't want to test an unrealistic scenario for my application.  In my
application, there are multiple sources that will be inserting into the
database and pooling the information for a bulk insert won't work.

I understand that I will get better performance by inserting all of my
rows inside a transaction.  What I don't understand is why, when NOT using
a transaction, I get about 7 inserts per second compared to others who are
reporting between 200 and 300.  I am working with reasonable hardware.

I just need to figure out why my performance is about 30x slower than what
others are reporting when using the library in similar ways.



On Mon, 21 Nov 2005, Chris Schirlinger wrote:

 Are you wrapping the inserts between BEGIN/COMMIT TRANSACTION statements?

 BEGIN TRANSACTION;
 INSERT INTO table (foo) VALUES (bar);
 INSERT INTO table (foo) VALUES (par);
 INSERT INTO table (foo) VALUES (tar);
 INSERT INTO table (foo) VALUES (far);
 ..
 INSERT INTO table (foo) VALUES (car);
 INSERT INTO table (foo) VALUES (jar);
 INSERT INTO table (foo) VALUES (mar);
 COMMIT TRANSACTION;

 Check out this document for more info
 http://www.sqlite.org/lang_transaction.html

  I'm sure I must be doing something wrong.  This is my first attempt at
  working with SQLite.



Re: [sqlite] Performance problem with 3.2.7

2005-11-21 Thread Shane Baker
Thank you very much for the feedback.  I understand your point: hardware
takes a deterministic amount of time.

I have been basing my assumptions on these sources:

http://www.sqlite.org/cvstrac/wiki?p=PerformanceConsiderations (See
Transactions and performance)
http://blog.amber.org/2004/11/28/sqlite-insertion-performance/

There was one other, but I can't find it.

For the time being, I don't think that inserts are going to happen very
frequently in my application and I can probably roll updates into
transactions.

Thanks again.


On Mon, 21 Nov 2005, Christian Smith wrote:

 On Mon, 21 Nov 2005, Shane Baker wrote:

 I'm sure I must be doing something wrong.  This is my first attempt at
 working with SQLite.


 We'll see...


 
 I have a simple table, with 7 columns.  There are 6 integers and a BLOB,
 with the primary key being on an integer.  When I try to run inserts (one
 insert per transaction - I know this is not optimal, but it represents my
 application's usage), I am only getting about 7 inserts per second, on
 average.
 
 My first suspect was the BLOB and the fact that I was binding this
 parameter using SQLITE_TRANSIENT (using sqlite3_bind_blob).  I removed the
 BLOB from the schema altogether, leaving just 6 integers, and I still have
 the same performance.


 The performance problem is the synchronous IO bottleneck of doing only a
 single insert per transaction.


 
 For reference, I am getting around 10,000 queries per second when I lookup
 a row based on the primary key column.
 
 All performance measurements I've seen posted by others suggest between
 200 and 300 inserts per second with one insert per transaction.


 Probably not, unless this is to a FLASH device, for example. The
 bottleneck in hard disk IO is the rotational and head movement latencies
 involved in writing data to the platters. Assuming no head movement, a
 7200 rpm disk will only allow the same sector to be rewritten once per
 revolution, i.e. 7200 times a minute, or 120 times a second. Add in that many
 different sectors need to be updated synchronously, and throughput drops dramatically.

 A quick test indicates that I can almost double the performance on
 Linux/ext3 by having the data=journal option set in the mount flags. This is
 because head movement is reduced significantly. A test that previously
 took ~500 seconds (13785 inserts without transactions) took 280 seconds
 with data=journal. For reference, the same data inserted with a single
 transaction took ~1.2 seconds!


 
 I haven't run a profiler yet but hope to do this tomorrow.  Does anyone
 have any ideas as to what I might be doing wrong, or where I should look?


 If you can change your model to insert more than 1 row per transaction,
 you should see a significant performance increase. You'll see roughly N
 times the performance for small N.

 If this is not an option, look at your storage and how you can reduce
 latency. FLASH devices have low latency, being solid state, and some RAID
 controllers have battery backed buffers, and so may have lower latency.

 
 Thanks in advance.
 

 Christian

 --
 /\
 \ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
  X   - AGAINST MS ATTACHMENTS
 / \



Re: [sqlite] Re: functions that return tables

2005-11-21 Thread Dennis Cote

Igor Tandetnik wrote:


From: Dennis Cote [EMAIL PROTECTED]

I don't know of a way to do what you want with a user defined 
function, but your example can be solved quite simply using SQL. The 
following query will return a table with the required results.


select * from test order by col desc limit 3;

If you have an index on col then it will also be very fast regardless 
of the size of the table; if not, it will do a single table scan to 
find the three maximum values.



Are you sure it will do a single scan, rather than sorting into a 
temporary table and picking three topmost records? What if I want 1000 
topmost records, would that still be done in a single scan? If so, how 
efficiently will this temporary table of 1000 records be managed?


The best algorithm for picking M largest elements out of N runs in O(N 
log M), and it requires that the table of M best items seen so far be 
maintained in a rather fancy data structure (a heap). Does the SQLite 
query planner really implement something like that?



Igor,

If you have an index on col then SQLite will use that index to iterate 
through the N largest values: it simply goes to the end of the index and 
steps backwards through the index one record at a time. The LIMIT clause 
causes it to stop after N steps.


SQLite doesn't create a temporary table if it can use an index to step 
through a table in the correct order. Temporary tables are only needed 
if it has to collect all the records first and then do a sort.


The real work is done when the records are inserted. That is where 
SQLite incurs the cost of inserting an index entry at the correct 
location. That is an O(log N) operation for a btree index. For a 
table with N records it takes O(N log N) operations to insert all the 
records.


There is no free lunch here. Using an index slows all the inserts so 
that it can speed up the lookups. This is a good trade off if you do 
lots of lookups.
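
As a concrete sketch of this pattern (using the test/col names from the query
above; the index name is made up):

CREATE INDEX test_col_idx ON test (col);
SELECT * FROM test ORDER BY col DESC LIMIT 3;

With the index in place, the EXPLAIN output for the SELECT should show the
query walking test_col_idx backwards rather than building and sorting a
temporary table.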


HTH
Dennis Cote



[sqlite] Calculating the mode

2005-11-21 Thread Tim Martin
Does anyone have any working solutions for calculating the mode of a set
of values in SQLite?

In SQL For Smarties, Celko gives two solutions, neither of which seems to
work in SQLite:

1)
SELECT salary, COUNT(*) AS frequency
  FROM Payroll
 GROUP BY salary
HAVING COUNT(*) = ALL (SELECT COUNT(*)
                         FROM Payroll
                        GROUP BY salary);

This won't work because the ALL operator isn't supported. I know the ALL
operator was discussed a while back on this list but no conclusion was
reached that it would be added. Kurt Welgehausen suggested that the
transformation

x op ALL (SELECT y FROM t WHERE ...)
to
NOT EXISTS (SELECT y FROM t WHERE NOT (x op y) AND ...)

would work around it, but as far as I can see this will only work in a
WHERE clause and not a HAVING clause.


2)
WITH (SELECT salary, COUNT(*)
        FROM Payroll
       GROUP BY salary)
  AS P1 (salary, occurs)
SELECT salary
  FROM P1
 WHERE P1.occurs = (SELECT MAX(occurs) IN P1);

This won't work because derived tables aren't supported. I could create a
temporary table outside the select, but I can't do this within a trigger.


Regards,

Tim


Re: [sqlite] functions that return tables

2005-11-21 Thread Nathan Kurz
On Fri, Nov 18, 2005 at 04:25:12PM -0700, Dennis Cote wrote:
 Is there any reasonable way to accomplish this?  Or am I left with
 defining a new function type that returns a handle to a temp table,
 and new parsing logic to wrap the right OP codes around that function?
 
 I don't know of a way to do what you want with a user defined function, 
 but your example can be solved quite simply using SQL. The following 
 query will return a table with the required results.
 
 select * from test order by col desc limit 3;
 
 If you have an index on col then it will also be very fast regardless of 
 the size of the table; if not, it will do a single table scan to find 
 the three maximum values.

Thanks Dennis.  Unfortunately, I was only using the max() function as
a simplified case to try to show what I wanted.  In reality, I need to
try to find which N rows are most similar to a given row in the
table based on a custom vector similarity function that compares a
blob in the test field against a blob in each of the other rows (as
well as some joined in parameters).  So a precalculated index is out.

Also, I think the part that Igor was pointing out was the "if not it
will do a single table scan" part.  His point (which I think is true) is
that if no index is available with the information, then a temporary
copy of the entire table has to be created and sorted.  The 'complex
heap approach' is indeed what I'm doing in my user defined aggregate
function, as the cost of the temp table creation and sorting is the
reason I can't do this as a simple function with an 'ORDER BY'.
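
(For reference, the ruled-out "simple function with an ORDER BY" form would
look something like the following sketch, where similarity() and the :probe
parameter are hypothetical names, not anything defined in the thread:

SELECT id FROM test
 ORDER BY similarity(blob_col, :probe) DESC
 LIMIT 10;

Every row's similarity has to be computed and sorted before the LIMIT applies,
which is exactly the temp-table/sort cost described above.)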

So I'm still looking for how it could be done with user defined
functions, even if that means significantly extending the way that
user defined functions are handled in SQLite.  Advice on how to do
that (or suggestions on better alternatives) greatly appreciated.

--nate



RE: [sqlite] Unable to load DLL Help!

2005-11-21 Thread Matt
Not totally sure; the descriptions of the error just sounded different... But
it could be related...

-Original Message-
From: Rob Lohman [mailto:[EMAIL PROTECTED] 
Sent: Monday, November 21, 2005 12:33 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Unable to load DLL Help!

How can you be sure? It appears from that ticket that
there is something wrong with the default binary for
Windows (I've verified the relocation problem myself).
I can imagine that the tighter security on win2k3
(sp1) simply doesn't want to load the DLL when it is
not 100% ok? It sounds pretty feasible, especially
since a recompile seems to solve the issue.

- Original Message - 
From: Matt [EMAIL PROTECTED]
To: sqlite-users@sqlite.org
Sent: Monday, November 21, 2005 1:00 AM
Subject: RE: [sqlite] Unable to load DLL Help!


 No, it doesn't appear to be related to this.

 -Matt

 -Original Message-
 From: Joe Wilson [mailto:[EMAIL PROTECTED]
 Sent: Sunday, November 20, 2005 3:19 PM
 To: sqlite-users@sqlite.org
 Subject: RE: [sqlite] Unable to load DLL Help!

 Was it related to this bug?

 Ticket 1474: SQLITE3.DLL fails to load if address 0x6090 occupied
 http://www.sqlite.org/cvstrac/tktview?tn=1474

 --- Matt [EMAIL PROTECTED] wrote:
 For the archives...I finally solved this problem. I don't think the
 distributed windows dll works with windows 2003 (sp1). I downloaded the
 source and compiled it manually and everything works fine!

 Thanks
 Matt

 -Original Message-
 From: Chris Schirlinger [mailto:[EMAIL PROTECTED]
 Sent: Saturday, November 19, 2005 1:18 PM
 To: sqlite-users@sqlite.org
 Subject: Re: [sqlite] Unable to load DLL Help!

   [DllNotFoundException: Unable to load DLL (sqlite3).]
 ...
  I'm assuming unable to load dll is not the actual problem; the dll is in
  the correct directory and was working fine until today. I tried an old
  version of sqlite3.dll (before the autoincrement feature was implemented)
  which seemed to work ok, but the database I'm using uses that feature so I
  need to get a more current version working. Any help is much appreciated, I
  need to solve this problem ASAP. Thanks!

 I just struggled with this same error (different system, .net 2, we
 built the SQLite DLL ourselves)

 The error in my case was that the SQLite DLL had dependencies we didn't
 realize it had (a bad project setup, so it was linking in stuff it
 shouldn't have)

 It was misleading, since we were getting dllnotfoundexception on an
 assembly that existed, calling an assembly that existed, calling a
 Win32 DLL that existed that was looking for something that *DIDN'T*
 exist

 That error only signifies a missing DLL somewhere in the dependency
 list, from what my research turned up. I'd take another look to make
 sure some hidden dependency hasn't crept in somewhere (not necessarily
 in the SQLite DLL, but it could even be in ones called by it)







 






[sqlite] Re: Calculating the mode

2005-11-21 Thread Igor Tandetnik

Tim Martin wrote:

Does anyone have any working solutions for calculating the mode of a
set of values in SQLite?


I'm not exactly sure what "mode" is. From your examples, it seems you 
want to get an element that occurs most often. This should do it:


select salary, count(*) occurs
from payroll
group by salary
having occurs =
  (select count(*) c from payroll group by salary order by c desc limit 1)
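
As a quick check in the sqlite3 shell (the table contents below are made up
purely for illustration):

create table payroll (salary integer);
insert into payroll values (100);
insert into payroll values (100);
insert into payroll values (200);
-- running the query above then prints: 100|2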


Igor Tandetnik 



[sqlite] any lightweight linux DB browser

2005-11-21 Thread Eno Thereska

Hi,

I am using sqlitebrowser, but it eats up a lot of memory. For some 
reason it decides to cache query results and has a caching policy which 
works well for DBs that fit in memory, but doesn't otherwise.


I am looking to patch it. Meanwhile, if anyone has seen similar patches 
or knows of a better browser *for linux* please let me know.


Thanks
Eno



Re: [sqlite] Calculating the mode

2005-11-21 Thread drh
Igor Tandetnik [EMAIL PROTECTED] wrote:
 Tim Martin wrote:
  Does anyone have any working solutions for calculating the mode of a
  set of values in SQLite?
 
 I'm not exactly sure what "mode" is. From your examples, it seems you 
 want to get an element that occurs most often. This should do it:
 
 select salary, count(*) occurs
 from payroll
 group by salary
 having occurs =
   (select count(*) c from payroll group by salary order by c desc limit 1)
 

Or how about:

  select salary, count(*) frequency
    from payroll
   group by salary
   order by frequency desc
   limit 1;





Re: [sqlite] Looking for source sqlite-3.0.7.tar.gz

2005-11-21 Thread Jay Sprenkle
I have sqlite-3.0.8.tar.gz. Will that work?

On 11/21/05, Sami Islam [EMAIL PROTECTED] wrote:
 Hello,

 I would like to compile sqlite3odbc v. 0.65 from C. Werner and need
 the sqlite-3.0.7 source for it. I tried searching for it on the internet
 without any success. I can't even log in to the CVS repository at
 www.sqlite.org with the password: anonymous.

 Can anyone point me in the right direction to where I can get the source for
 3.0.7?

 Thanks,
 Sami






--
---
The Castles of Dereth Calendar: a tour of the art and architecture of
Asheron's Call
http://www.lulu.com/content/77264


Re: Re[2]: [sqlite] index question

2005-11-21 Thread Jay Sprenkle
FYI:
If you have a very small number of rows in the table, an index will
make it slower rather than faster.

On 11/21/05, Wilfried Mestdagh [EMAIL PROTECTED] wrote:
 Hi Bert,

 'select distinct Name ' +
   'from Rx ' +
   'where RxDT = ' + DT + ' ' +
   'order by Name'

 One thing is not yet clear to me. That is the 'distinct'. To have this
 as fast as possible, do I also have to make a separate index on RxDt,
 Name? Is that correct?  (Or the other way around: Name, RxDt?)
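
For reference, the two orderings being asked about would be created like this
(a sketch only; the index names are invented and the thread does not say which
ordering is preferable for this query):

CREATE INDEX rx_rxdt_name ON Rx (RxDT, Name);
CREATE INDEX rx_name_rxdt ON Rx (Name, RxDT);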


[sqlite] Re: Calculating the mode

2005-11-21 Thread Igor Tandetnik

[EMAIL PROTECTED] wrote:

Igor Tandetnik [EMAIL PROTECTED] wrote:

Tim Martin wrote:

Does anyone have any working solutions for calculating the mode of a
set of values in SQLite?


I'm not exactly sure what "mode" is. From your examples, it seems
you want to get an element that occurs most often. This should do it:

select salary, count(*) occurs
from payroll
group by salary
having occurs =
  (select count(*) c from payroll group by salary order by c desc limit 1)



Or how about:

  select salary, count(*) frequency
    from payroll
   group by salary
   order by frequency desc
   limit 1;


This does not quite do the right thing if there are two or more values 
with equal frequency.


Igor Tandetnik 



[sqlite] Re: Re: functions that return tables

2005-11-21 Thread Igor Tandetnik

Dennis Cote wrote:

Igor Tandetnik wrote:


From: Dennis Cote


I don't know of a way to do what you want with a user defined
function, but your example can be solved quite simply using SQL. The
following query will return a table with the required results.

select * from test order by col desc limit 3;

If you have an index on col then it will also be very fast
regardless of the size of the table, *** if not, it will do a single
table scan to find the three maximum values. ***


(emphasis mine).


Are you sure it will do a single scan, rather than sorting into a
temporary table and picking three topmost records?


If you have an index on col then SQLite will use that index.


That's obvious. What about the case when there is no index on col? You 
specifically claimed SQLite can manage in a single table scan even in 
this case. I find it rather difficult to believe - assuming that by a 
single table scan you mean some O(N) operation. Maybe I misunderstand 
your use of this term.


Igor Tandetnik



Re: [sqlite] any lightweight linux DB browser

2005-11-21 Thread juan perez
You can't get any lighter than the sqlite3 tool that ships with SQLite3.
Can you be more specific in terms of what you need?


Eno Thereska wrote:

Hi,

I am using sqlitebrowser, but it eats up a lot of memory. For some 
reason it decides to cache query results and has a caching policy which 
works well for DBs that fit in memory, but doesn't otherwise.


I am looking to patch it. Meanwhile, if anyone has seen similar patches 
or knows of a better browser *for linux* please let me know.


Thanks
Eno




Re: [sqlite] any lightweight linux DB browser

2005-11-21 Thread Eno Thereska
Something with a GUI would be ideal. I agree that the sqlite3 tool is 
very lightweight; unfortunately it has no GUI.


Thanks,
Eno

juan perez wrote:
You can't get any lighter than the sqlite3 tool that ships with SQLite3.
Can you be more specific in terms of what you need?


Eno Thereska wrote:


Hi,

I am using sqlitebrowser, but it eats up a lot of memory. For some 
reason it decides to cache query results and has a caching policy 
which works well for DBs that fit in memory, but doesn't otherwise.


I am looking to patch it. Meanwhile, if anyone has seen similar 
patches or knows of a better browser *for linux* please let me know.


Thanks
Eno







--


Eno Thereska
-
Carnegie Mellon University
Parallel Data Laboratory
CIC Building Cube 2221-D
5000 Forbes Ave Pittsburgh, PA 15213
Tel: 412-268-5908



Re: [sqlite] any lightweight linux DB browser

2005-11-21 Thread drh
Eno Thereska [EMAIL PROTECTED] wrote:
 Something with a GUI would be ideal. I agree that the sqlite3 tool is 
 very lightweight; unfortunately it has no GUI.
 

It is not clear what you mean by GUI.  If you mean something
that displays in a separate window rather than in your shell
window, then there is:

  http://www.sqlite.org/cvstrac/wiki?p=ConsoleForTclTk

That program gives you a GUI.  But the GUI just has a command-line
prompt into which you type SQL statements.  It is no more capable
than the command-line shell.  




Re: [sqlite] Re: Calculating the mode

2005-11-21 Thread Kurt Welgehausen
 select salary, count(*) occurs from payroll
 group by salary having occurs =
   (select count(*) c from payroll group by salary
    order by c desc limit 1)

OR

select salary, count(*) from payroll group by salary
having count(*) = (select max(cnt) from
   (select count(*) cnt from payroll group by salary))

Regards


Re: [sqlite] any lightweight linux DB browser

2005-11-21 Thread Ray Mosley
I'm still on SQLite 2.8, and I use sqlitecc.exe; I think I saw an
announcement that a version was available for SQLite v3.

 On 11/21/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 Eno Thereska [EMAIL PROTECTED] wrote:
  Something with a GUI would be ideal. I agree that the sqlite3 tool is
  very lightweight, unfortunately it has no GUI.
 

 It is not clear what you mean by GUI. If you mean something
 that displays in a separate window rather than in your shell
 window, then there is:

 http://www.sqlite.org/cvstrac/wiki?p=ConsoleForTclTk

 That program gives you a GUI. But the GUI just has a command-line
 prompt into which you type SQL statements. It is no more capable
 than the command-line shell.





--
Ray Mosley


Re: [sqlite] index question

2005-11-21 Thread Bert Verhees

Sorry, my emailer messed things up, try it again

addr  opcode         p1   p2   p3
0     OpenVirtual    1    0    keyinfo(1,BINARY)
1     Goto           0    31
2     Integer        0    0
3     OpenRead       0    2
4     SetNumColumns  0    2
5     Integer        0    0
6     OpenRead       2    3    keyinfo(1,BINARY)
7     Integer        1    0
8     NotNull        -1   11
9     Pop            1    0
10    Goto           0    28
11    MakeRecord     1    0    n
12    MemStore       0    0
13    MoveGe         2    28
14    MemLoad        0    0
15    IdxGE          2    28   +
16    RowKey         2    0
17    IdxIsNull      1    27
18    IdxRowid       2    0
19    MoveGe         0    0
20    Column         0    0
21    MakeRecord     -1   0
22    Distinct       1    25
23    Pop            2    0
24    Goto           0    27
25    IdxInsert      1    0
26    Callback       1    0
27    Next           2    14
28    Close          0    0
29    Close          2    0
30    Halt           0    0
31    Transaction    0    0
32    VerifyCookie   0    3
33    Goto           0    2
34    Noop           0    0


<schema>
 <tables>
  <table id="temp">
   <ddl>CREATE TABLE temp (Name varchar(255),RxDT DateTime)</ddl>
   <columns>
    <column id="Name" type="varchar(255)"/>
    <column id="RxDT" type="DateTime"/>
   </columns>
   <indexes>
    <index id="w" unique="0">
     <ddl>CREATE INDEX t on temp(rxdt)</ddl>
     <column id="Name"/>
    </index>
    <index id="t" unique="0">
     <ddl>CREATE INDEX t on temp(rxdt)</ddl>
     <column id="RxDT"/>
    </index>
   </indexes>
  </table>
 </tables>
</schema>



Wilfried Mestdagh wrote:


Hi Bert,

 


'select distinct Name ' +
'from Rx ' +
'where RxDT = ' + DT + ' ' +
'order by Name'
 



One thing is not yet clear to me. That is the 'distinct'. To have this
as fast as possible, do I also have to make a separate index on RxDt,
Name? Is that correct?  (Or the other way around: Name, RxDt?)

---
Rgds, Wilfried
http://www.mestdagh.biz



 





Re: [sqlite] Performance problem with 3.2.7

2005-11-21 Thread Akira Higuchi
Hi,

On Mon, 21 Nov 2005 10:56:41 -0500 (EST)
Shane Baker [EMAIL PROTECTED] wrote:

 I just need to figure out why my performance is about 30x slower than what
 others are reporting when using the library in similar ways.

Are you using sqlite on windows or MacOS X?

In my tests, sqlite performs 10 write transactions per sec on
Windows and MacOS X. On Linux, sqlite performs up to 300 write
transactions per sec if the HDD write cache is enabled. However, when the
HDD cache is disabled (/sbin/hdparm -W0 /dev/hda etc.), it drops to 10
transactions per sec.

To ensure ACID compliance, sqlite seems to be using FlushFileBuffers() on
Windows, F_FULLFSYNC fcntl() on MacOS X, and fsync() on other unixes.
FlushFileBuffers() and the F_FULLFSYNC fcntl() flush the HDD write-back
cache, and they are very slow. However, many implementations of fsync()
don't flush the HDD cache (they flush the OS cache only). It's very fast, but
dangerous (not ACID compliant) if the HDD cache has no battery backup.
(So I'm using sqlite on Linux with the HDD cache off.)
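
For reference, the commit-time flushing described above can also be tuned per
connection with PRAGMA synchronous (a sketch of the standard SQLite 3
settings; this knob is not discussed in the thread):

PRAGMA synchronous = FULL;    -- the default: flush on every commit
PRAGMA synchronous = NORMAL;  -- fewer flushes; small risk of corruption on power loss
PRAGMA synchronous = OFF;     -- no flushes; fast, but a crash or power loss can corrupt the db

Like leaving the HDD write cache enabled, the lower settings trade durability
for speed.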


Re: [sqlite] Performance problem with 3.2.7

2005-11-21 Thread Shane Baker
Thank you very much.  I am happy to hear that the performance I am seeing
is in line with what others have observed.  I am running this on Windows
XP.

On Tue, 22 Nov 2005, Akira Higuchi wrote:

 Hi,

 On Mon, 21 Nov 2005 10:56:41 -0500 (EST)
 Shane Baker [EMAIL PROTECTED] wrote:

  I just need to figure out why my performance is about 30x slower than what
  others are reporting when using the library in similar ways.

 Are you using sqlite on windows or MacOS X?

 In my tests, sqlite performs 10 write transactions per sec on
 Windows and MacOS X. On Linux, sqlite performs up to 300 write
 transactions per sec if the HDD write cache is enabled. However, when the
 HDD cache is disabled (/sbin/hdparm -W0 /dev/hda etc.), it drops to 10
 transactions per sec.

 To ensure ACID compliance, sqlite seems to be using FlushFileBuffers() on
 Windows, F_FULLFSYNC fcntl() on MacOS X, and fsync() on other unixes.
 FlushFileBuffers() and the F_FULLFSYNC fcntl() flush the HDD write-back
 cache, and they are very slow. However, many implementations of fsync()
 don't flush the HDD cache (they flush the OS cache only). It's very fast, but
 dangerous (not ACID compliant) if the HDD cache has no battery backup.
 (So I'm using sqlite on Linux with the HDD cache off.)