Re: [sqlite] ORDER BY is more than 10 times slower with 3.3.8 compared to 3.3.7

2006-12-30 Thread Michael Sizaki

Thanks Roger!

I switched "Memory Usage" to "System Cache"
  http://www.techspot.com/tweaks/memory-winxp/
and my performance problems are gone.

I have to see how this setting influences my overall
performance. It's strange that Windows is not a bit more
clever about caching. I have 2GB and most of the time 1GB
is free. Windows could use this for temp files.

Michael



Michael Sizaki wrote:
| I'm really puzzled why my system hits the disk so heavily

Windows XP limits the maximum size of the cache (default 10MB!).  There
are zillions of pseudo-freeware programs out there to change it.  You
can also change it using the control panel and/or registry:

  http://support.microsoft.com/kb/308417  (system cache)

  http://mywebpages.comcast.net/SupportCD/XPMyths.html  (large system cache)

  http://www.jsifaq.com/SF/Tips/Tip.aspx?id=9200

  http://www.techspot.com/tweaks/memory-winxp/

Roger




Re: [sqlite] ORDER BY is more than 10 times slower with 3.3.8 compared to 3.3.7

2006-12-30 Thread Michael Sizaki

Enjoy this video:

  http://channel9.msdn.com/ShowPost.aspx?PostID=59936


Nice!

The key sentence: "a lot of the assumptions that were made
15 years ago don't hold true anymore"...


Michael




Re: [sqlite] ORDER BY is more than 10 times slower with 3.3.8 compared to 3.3.7

2006-12-29 Thread Michael Sizaki

== SUMMARY ==
== There is indeed no difference between 3.3.7 and 3.3.8.
== However, SQLite hits the disk a lot via a temp file??!!
== PRAGMA temp_store = MEMORY; helps.
== Why is SQLite hitting the disk with a 70MB database?

Further tests show that there is no difference between
3.3.7 and 3.3.8.

The problem was that I was using sqlite.exe interactively.
In the 3.3.8 shell I had been running some tests that
created and deleted some temp tables before I did the
performance tests.

It turns out that the query hits the disk when the result
exceeds a certain size, and performance then drops
dramatically: it takes 14 sec for 100,000 rows but 300 sec
for 200,000. CPU usage drops to almost 0 and the disk gets
very active.

My database:
  pragma cache_size = 2;
  pragma page_size = 4096;
  Database file (after vacuum) 70MB with about 450,000 records

 time ./sqlite3.3.8.exe db.sqlite "SELECT * FROM files where id < 100000 ORDER BY size, name;" | wc
  9  103445 11352384

real    0m14.281s
user    0m7.260s
sys     0m3.775s

Peak memory 35 MB

time ./sqlite3.3.8.exe db.sqlite "SELECT * FROM files where id < 200000 ORDER BY size, name;" | wc
 19  204598 24676875

real    4m49.947s
user    0m18.386s
sys     0m13.318s

Peak memory 35 MB

I captured the performance using Sysinternals procexp:
  http://www.microsoft.com/technet/sysinternals/SystemInformation/ProcessExplorer.mspx
See the attached screenshot. It's interesting that half of the memory is
allocated in the last few seconds...

When I prepend the query with
  PRAGMA temp_store = MEMORY;
the queries are fast, but the process needs a lot of memory
(about 5 times the .dump size of the result table).

 time ./sqlite3.3.8.exe db.sqlite "PRAGMA temp_store = MEMORY; SELECT * FROM files where id < 100000 ORDER BY size, name;" | wc
 9  103445 11352384

real    0m8.262s
user    0m6.659s
sys     0m0.210s

Peak memory 58 MB

 time ./sqlite3.3.8.exe db.sqlite "PRAGMA temp_store = MEMORY; SELECT * FROM files where id < 200000 ORDER BY size, name;" | wc
 19  204598 24676875

real    0m13.329s
user    0m12.187s
sys     0m0.310s

Peak memory 75 MB

What surprises me is that the temp file is not kept in
cache. I have 2GB of memory, and much bigger files can be
kept in cache. Why is SQLite hitting the disk? What is
going on here? The maximum file cache needed would be 70MB
for the database + 75MB for the temp table; 150MB is
nothing on a 2GB system.

I thought maybe
  PRAGMA synchronous = OFF;
would help. But it does not.
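For reference, here is the combination discussed above as one script. A minimal sketch; the cache_size value is illustrative (my assumption, not taken from this thread):

  PRAGMA temp_store = MEMORY;  -- keep sort/temp b-trees in RAM instead of a temp file
  PRAGMA cache_size = 20000;   -- illustrative: 20000 pages x 4096 bytes = ~80MB
  SELECT * FROM files WHERE id < 200000 ORDER BY size, name;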


Michael

[EMAIL PROTECTED] wrote:
 Michael Sizaki [EMAIL PROTECTED] wrote:
 What has changed in 3.3.8 to make it so slow?


 There were no changes to the query optimizer between 3.3.7
 and 3.3.8.  None.  Nada.  Zilch.




Re: [sqlite] ORDER BY is more than 10 times slower with 3.3.8 compared to 3.3.7

2006-12-29 Thread Michael Sizaki

Here's the screenshot showing the resource usage of the slow query:

 time ./sqlite3.3.8.exe db.sqlite "SELECT * FROM files where id < 200000 ORDER BY size, name;" | wc
  19  204598 24676875

real    4m49.947s
user    0m18.386s
sys     0m13.318s

Peak memory 35 MB



Re: [sqlite] ORDER BY is more than 10 times slower with 3.3.8 compared to 3.3.7

2006-12-29 Thread Michael Sizaki

[EMAIL PROTECTED] wrote:

Perhaps someone with more windows experience can correct
me if my assertion above is incorrect.  Are there some
special flags that SQLite could pass to CreateFileW() to
trick windows into doing a better job of caching temp
files?


It seems you've done it right:
  fileflags = FILE_FLAG_RANDOM_ACCESS;
#if !OS_WINCE
  if( delFlag ){
    fileflags |= FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE;
  }
#endif

I'm really puzzled why my system hits the disk so heavily


Michael




Re: [sqlite] ORDER BY is more than 10 times slower with 3.3.8 compared to 3.3.7

2006-12-29 Thread Michael Sizaki

I went to implement this suggestion and quickly discovered
that SQLite already uses the FILE_ATTRIBUTE_TEMPORARY flag
on TEMP tables.  Or at least I think it does.  Can somebody
with a symbolic debugger that runs on Windows please confirm
that the marked line of code below (found in os_win.c) gets
executed when using TEMP tables:


It gets called!

Michael




[sqlite] ORDER BY is more than 10 times slower with 3.3.8 compared to 3.3.7

2006-12-28 Thread Michael Sizaki

Hi,

The following query on a table with 400,000 rows

 SELECT * FROM table where ORDER BY name limit 10;

takes less than 3 sec with version 3.3.7 (or 3.3.0)
and 35 sec with version 3.3.8.

What has changed in 3.3.8 to make it so slow?

My application relies on fast sorting
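For what it's worth, an index whose order matches the ORDER BY lets SQLite walk the index and stop after the LIMIT instead of sorting the whole table. A hedged sketch with placeholder names, since the real schema isn't shown:

  CREATE INDEX t_name_idx ON t(name);     -- hypothetical table/index names
  SELECT * FROM t ORDER BY name LIMIT 10; -- can scan the index in order, no full sort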

Michael




Re: [sqlite] How do I speed up CREATE INDEX ?

2006-12-03 Thread Michael Sizaki

Radzi,

are the ids of the Transaction table ordered when inserted?
I have discovered that it is very bad for the performance of huge
tables if the rows are inserted with random ids. If you use
an integer id (primary key) for such a table, SQLite uses the
ROWID column to store the integer primary key. SQLite will
put the records physically in the order you insert them but
logically in ROWID order.

Suppose you insert the following data:

id data
9  -- disk 1
6  -- disk 2
8  -- disk 3
1  -- disk 4
5  -- disk 5
2  -- disk 6
7  -- disk 7
4  -- disk 8
3  -- disk 9

The records are on disk in order 'disk 1' .. 'disk 9',
but SQLite accesses them in id order. If the table is huge,
the head of your hard disk jumps around like crazy.

When you create an index, SQLite uses the id order to access
your entries. This takes forever.

Sorting the data on id before you insert it should dramatically
speed up the indexing. If that is not possible, don't make the id
column the primary key; create a separate index on id instead.
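A minimal sketch of that second suggestion (table and column names are placeholders, not from Radzi's schema):

  -- a plain id column instead of INTEGER PRIMARY KEY, so rows are
  -- stored in insertion order under an internal rowid:
  CREATE TABLE txn (id INTEGER, data TEXT);
  -- bulk load inside one transaction, then build the index afterwards:
  BEGIN;
  INSERT INTO txn VALUES (9, 'first');
  INSERT INTO txn VALUES (6, 'second');
  COMMIT;
  CREATE INDEX txn_id_idx ON txn(id);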

I wonder how this would change the performance of your application


Michael

Thanks for the suggestion. I'm a bit lost now. I've tried to load
80 million rows now. It took 40 minutes to load into non-indexed tables,
but creating the index now takes almost forever. It's already been 12 hrs,
and it is not yet complete.


regards,
Radzi.

- Original Message - From: [EMAIL PROTECTED]
To: sqlite-users@sqlite.org
Sent: Sunday, December 03, 2006 8:21 PM
Subject: Re: [sqlite] How do I speed up CREATE INDEX ?



Mohd Radzi Ibrahim [EMAIL PROTECTED] wrote:

Hi,
I was loading a file to sqlite (3.3.8), and it took 4 mins to load 6 
million rows (with no index). But then when I run CREATE INDEX it 
took me 40 mins to do that. What could I do to speed up the indexing 
process ?




The reason index creation slows down when creating large
indices is a problem with locality of reference in your disk
cache.  I've learned a lot about dealing with locality
while working on full-text search, and I think I can
probably implement a CREATE INDEX that runs much faster
for a large table.  There are some plans in the works
that might permit me the time to do this in the spring.
But in the meantime, the only thing I can suggest is to
add more RAM to your machine so that your disk cache is
larger.  Or get a faster disk drive.
--
D. Richard Hipp  [EMAIL PROTECTED]








[sqlite] Smallest INTEGER wrong? it is -9223372036854775807 and not -9223372036854775808...

2006-09-12 Thread Michael Sizaki

Hi,

In Java, the smallest long is
  -9223372036854775808

In SQLite it seems to be
  -9223372036854775807

sqlite> create temp table t as select -9223372036854775807,-9223372036854775808;
sqlite> select * from t;
-9223372036854775807|-9.22337203685478e+18

==> -9223372036854775808 is converted to a float!

Bug or feature?

BTW for positive integers the limit of 9223372036854775807 is correct.

How did I find it?

I have a table with integer keys:
  create table t ( k integer primary key);
From Java I called
  select k from t where k >= %q;
with %q expanding to -9223372036854775808 (Long.MIN_VALUE).
It did not return anything (I expected the entire table!).

Now I use (Long.MIN_VALUE+1).

(Yes I know, I could omit the WHERE in this case...)
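As SQL, the workaround boils down to the following sketch; the literal is Long.MIN_VALUE+1, which SQLite does parse as an integer:

  SELECT k FROM t WHERE k >= -9223372036854775807;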

Michael




Re: [sqlite] Smallest INTEGER wrong? it is -9223372036854775807 and not -9223372036854775808...

2006-09-12 Thread Michael Sizaki

[EMAIL PROTECTED] wrote:

Michael Sizaki [EMAIL PROTECTED] wrote:

in java, the smallest long is
   -9223372036854775808

in SQLite it seems to be
   -9223372036854775807

Bug or feature?


Call it what you like.  I deliberately omitted the
lower end to make the last line of sqlite3atoi64()
a little simpler.  (Nobody has noticed in 2.5 years.)


WOW, I'm surprised that nobody found it! There are
a few important values for any integer type: MIN, -1, 0, 1, MAX
(and a few others). If some huge value (e.g. 4741939734731675961)
did not work, I'm sure nobody would ever find it, because the
probability of hitting that value (and spotting the problem!) is
negligible. But MIN and MAX are fundamental. If you use LONG_MIN
to denote the smallest possible value and assume
  any_integer_value >= -9223372036854775808
you will be very surprised. At least, I was...

Ok, I have my workaround, and since I'm the only one
complaining I can live with that :-).

Michael




[sqlite] How to calculate the sum up to a row better than O(n^2)?

2006-07-20 Thread Michael Sizaki

Hi,


Suppose I have a database:
  CREATE TABLE data (timestamp INTEGER, amount INTEGER);
  INSERT INTO data VALUES(1,10);
  INSERT INTO data VALUES(2,20);
  INSERT INTO data VALUES(3,5);
  INSERT INTO data VALUES(4,2);
  ...

Now I want to see the running sum up to each timestamp:

 SELECT
timestamp,(SELECT sum(amount)
FROM data as d
WHERE d.timestamp <= data.timestamp)
  FROM data ORDER BY timestamp;

This works fine for small data sets. But it is obviously
a quadratic problem. Is there a more efficient way to do
the same thing?


Michael
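For reference: modern SQLite (3.25 and later, far newer than this 2006 thread) can compute this running sum in one linear pass with a window function. A sketch:

  SELECT timestamp,
         SUM(amount) OVER (ORDER BY timestamp) AS running_total
  FROM data
  ORDER BY timestamp;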



Re: [sqlite] Large DB Performance Questions

2006-06-07 Thread Michael Sizaki

Hi Mark,

have you tried running VACUUM on the database?
It helps a lot when it comes to the 'read ahead'
behaviour of the database file.
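As a sketch (ANALYZE is my addition, not from this thread; it refreshes planner statistics and does not affect read-ahead):

  VACUUM;   -- rewrites the whole file so table and index pages end up contiguous
  ANALYZE;  -- optional: collect index statistics for the query planner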

Michael


Mark Drago wrote:

Hello,

I'm writing a web cache and I want to use SQLite to store the log of all
of the accesses made through the web cache.  The idea is to install this
web cache in large institutions (1000-5000 workstations).  The log
database can grow in size very quickly and can reach in to the gigabytes
after just a few days.

Writing to the database is speedy enough that I haven't seen much of a
problem.  I collect the data for 1000 web requests and then insert them
all in a single transaction using a prepared statement.  This works
rather well.

The problem that I'm encountering has to do with generating reports on
the data in the log database.  SQLite is showing good performance on
some simple queries, but that is not the case once something more
advanced is involved, like an aggregate function for example.  More
over, once the SQLite file is cached in memory it is really quick.
However, I can't count on this file being cached at all when a user goes
to run the report.  So, I've been clearing my file cache before running
a test, and then running the same test again now that everything has
been loaded in to the cache.  Like I said, for most cases SQLite is
fine, but here is one example where it doesn't fare as well.

The system that I'm running these tests on is a P4 2.8GHz HT with 1 GB
of RAM running Fedora Core 5 and using SQLite version 3.3.3 (being as
that is what comes with FC5).  I'm doing my tests with a database that
is 732M in size and contains 1,280,881 records (the DB schema is
included below).

I clear the file cache by running the following command. I wait until it
consumes all of memory and then I kill it:
perl -e '@f[0..1]=0'

I'm running the tests by running the following script:
#!/bin/bash
echo "$1;" | sqlite3 log.db > /dev/null

The query I'm running is the following:
select count(host), host from log group by host;

The results include the first time the query is run (when the file is
not cached) and then the times of a few runs after that (when the file
is cached).

SQLite: 221.9s, 1.6s, 1.6s, 1.6s
 MySQL:   2.2s, 1.8s, 1.8s, 1.8s

The MySQL tests were done with the following script:
#!/bin/bash
mysql -u root --database=log -e "$1" > /dev/null

It is apparent that SQLite is reading the entire database off of the
disk and MySQL somehow is not.  The MySQL query cache is not in use on
this machine and MySQL does not claim very much memory for itself before
the test is conducted (maybe 30M).

I've tried looking into the output from 'explain' to see if SQLite was
using the index that I have on the 'host' column, but I don't think it
is.  The output from 'explain' is included below.  Note that the
'explain' output is from a different machine which is running SQLite
3.3.5 compiled from source as the SQLite on FC5 kept Segfaulting when I
tried to use 'explain'.

Any information or ideas on how to speed up this query are greatly
appreciated.  The only un-implemented idea I have right now is to remove
some of the duplicated data from the schema in an attempt to reduce the
size of the average row in the table.  In some cases I can store just an
integer where I'm storing both the integer and a descriptive string
(category_name and category_no for example).  Some of the other
information in the schema holds data about things that are internal to
the web cache (profile*, ad*, etc.).

Thank you very much for any ideas,
Mark.

TABLE SCHEMA:
CREATE TABLE log(
log_no integer primary key,
add_dte datetime,
profile_name varchar(255),
workstation_ip integer,
workstation_ip_txt varchar(20),
verdict integer,
verdict_reason varchar(255),
category_name varchar(80),
category_no integer,
set_user_name varchar(255),
profile_zone varchar(40),
profile_zone_no integer,
author_user_name varchar(255),
workstation_name varchar(255),
workstation_group_name varchar(255),
profile_association varchar(255),
profile_association_no integer,
protocol varchar(40),
connection_type varchar(255),
connection_type_no integer,
host varchar(255),
url text,
ad_username varchar(255),
ad_groups text,
ad_domain varchar(255),
ad_workstation_name varchar(255),
ad_last_update_dte datetime);

INDEXES:
CREATE INDEX add_dte ON log (add_dte);
CREATE INDEX profile_name ON log(profile_name);
CREATE INDEX workstation_ip ON log(workstation_ip);
CREATE INDEX verdict ON log (verdict);
CREATE INDEX research_zone_no ON log(research_zone_no);
CREATE INDEX profile_zone_no ON log(profile_zone_no);
CREATE INDEX workstation_name ON log(workstation_name);
CREATE INDEX workstation_group_name ON log(workstation_group_name);
CREATE INDEX profile_association_no ON log(profile_association_no);
CREATE INDEX connection_type_no ON log(connection_type_no);
CREATE INDEX host ON log(host);
CREATE INDEX ad_username on log(ad_username);
CREATE INDEX ad_domain on log(ad_domain);
CREATE INDEX ad_workstation_name on log(ad_workstation_name);

Re: [sqlite] High retrieval time. Please help

2006-05-22 Thread Michael Sizaki

Hi Anish,

when a database hits the disk, there's not much you can
do about it. You can increase the memory of your system so
that the entire database fits into memory. If the database
is cold (the system has just started and the database is not
in the file system cache), you can read the entire database
file once to get it into the cache. Unfortunately,
this does not help much if the database is too big to fit
into memory. Another trick that can work in some cases:
if you know you have 50,000 requests and you know the order
in which the data is stored in the database, you can sort the
requests into the order in which the rows are physically stored
before you access the database. Normally you would not know the
exact order, but after a VACUUM you know that the data is ordered
by ROWID. That only works in cases where you query the data by
ROWID, though (see the sketch below).
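A hedged sketch of that idea with hypothetical names; the point is the sorted, sequential access pattern:

  -- instead of issuing 50,000 single-row lookups in arrival order,
  -- batch the keys and walk the file in rowid (physical) order:
  SELECT rowid, * FROM requests_data
  WHERE rowid IN (17, 42, 99, 1024)
  ORDER BY rowid;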

In general, SQLite is great as long as all the data fits into
memory. I have no comparison with other database systems as to
how they perform once you hit the disk.

Michael


Anish Enos Mathew wrote:

Hi Michael,
   I came to know that increasing the page size would help with
performance, so I used PRAGMA to set the page size to 32768
with the command
  sqlite3_exec(db, "PRAGMA page_size = 32768", NULL, NULL, NULL);

Still the result is the same: a time of 110 sec for 50,000
retrievals. Can you suggest a method by which the performance of
retrieval can be increased?

-Original Message-
From: Michael Sizaki [mailto:[EMAIL PROTECTED]

Sent: Saturday, May 20, 2006 12:00 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] High retrieval time. Please help

Anish,


So my problem of retrieving 1,000,000 random records 1,000,000 times
works fine for 15 bytes. But it is taking too long a time for 1k
records. It is almost taking 102 seconds to retrieve 50,000 records
of size 1k. Can you suggest a way of reducing the time taken for the
same? I haven't made any changes in my program. The only change I made
was adding a primary key to the seq_number column in my data insertion
program.


My guess is that with 15-byte records your operating system keeps the
entire database in the file cache and no real IO is done. When you use
the 1k records, the entire database does not fit into memory anymore,
and therefore real IO is done. That slows database access down
dramatically. I guess if you watch the CPU usage, it's close to 100%
in the 15-byte case and very low in the 1k case (and you have a lot
of disk access).

Michael








Re: [sqlite] High retrieval time. Please help

2006-05-19 Thread Michael Sizaki

Anish,


So my problem of retrieving 1,000,000 random records 1,000,000 times
works fine for 15 bytes. But it is taking too long a time for 1k
records. It is almost taking 102 seconds to retrieve 50,000 records
of size 1k. Can you suggest a way of reducing the time taken for the
same? I haven't made any changes in my program. The only change I made
was adding a primary key to the seq_number column in my data insertion
program.


My guess is that with 15-byte records your operating system keeps the
entire database in the file cache and no real IO is done. When you use
the 1k records, the entire database does not fit into memory anymore,
and therefore real IO is done. That slows database access down
dramatically. I guess if you watch the CPU usage, it's close to 100%
in the 15-byte case and very low in the 1k case (and you have a lot
of disk access).

Michael


[sqlite] Non standard database design (max no if tables?)

2006-01-27 Thread Michael Sizaki

Hi,

I'm thinking about using a non-standard design for my database.
There are up to 1 million records in the database. Each record
has an integer UID. There is another table with keywords for
each of the records. Records typically have 0-15 keywords associated.
The number of distinct keywords is small (100-1000). Currently I have

CREATE TABLE data (
dataId INTEGER PRIMARY KEY AUTOINCREMENT,
...
);
CREATE TABLE words (
wordId INTEGER PRIMARY KEY AUTOINCREMENT,
word TEXT UNIQUE ON CONFLICT IGNORE
);
);

CREATE TABLE keywords (
wordId INTEGER,
dataId INTEGER,
PRIMARY KEY (wordId, dataId) ON CONFLICT IGNORE
);
CREATE INDEX keywordsDataIdIndex ON keywords (dataId);

This results in 2 big indexes for keywords...

When I display data items, I also display all associated keywords.
Therefore I created a cat() aggregate function that builds a
comma-separated list of items:

SELECT *, (SELECT cat(category) FROM keywords WHERE keywords.dataId = data.dataId) FROM data ...

I just wonder if I should forget about database normalization
and create one table per keyword:

CREATE TABLE data (
dataId    INTEGER PRIMARY KEY AUTOINCREMENT,
keywords  TEXT, -- a comma-separated list of word ids
...
);
CREATE TABLE words (
wordId  INTEGER PRIMARY KEY AUTOINCREMENT,
word    TEXT UNIQUE ON CONFLICT IGNORE
);

-- for each word ID there's a table
CREATE TABLE keywords_0 (
dataId  INTEGER PRIMARY KEY
);

Is there a limit on the number of tables?

If I choose keywords 1, 7 and 42, I would run

SELECT * FROM keywords_1 UNION SELECT * FROM keywords_7 UNION SELECT * FROM keywords_42;

I expect the database to become significantly smaller...

Has anybody tried something like this?

Michael


Re: [sqlite] Slow query after reboot

2006-01-19 Thread Michael Sizaki

Geoff Simonds wrote:
My table contains about 500,000 rows and 4 columns, not all that much
data.  The overall size of the db file is 35 MB.  Does 15-20 seconds
sound right to load from disk into memory?


Yes, it does. The problem is that your query is probably
not reading sequentially from disk, so the disk
head has to jump back and forth. Once the entire database
is in the OS disk cache, queries are fast, because they are
only CPU bound and no longer disk bound.

To speed up the initial access, you can:
- read the entire file once before you start your query
- run the following query (once):
    select count(last_column) from big_table;
  this touches each record in a roughly optimal order
- if that is still slow, try VACUUM on your database; this
  brings the records into a natural order

I have an application that also deals with about 500,000 records,
and the database size is about 100MB. Queries on a cold
database are extremely slow...


Michael


Robert Simpson wrote:


- Original Message - From: Geoff Simonds [EMAIL PROTECTED]





The app is running on Windows XP machines and I assume that disk 
files are cached.  The strange thing is that the time it takes for 
the initial read into RAM after install and first use is 
significantly shorter than after a reboot.  For example, if you just 
installed the app and start it, the first time you do a query you see 
results in about 2 seconds.  Subsequent queries come back almost 
instantaneously.  If the user reboots the machine or waits until the 
next day and performs the same query, it now takes about 15 seconds.  
After the 15 seconds, results come back and subsequent queries are 
instantaneous.  I am not sure if this has anything to do with it but 
the app is a Deskband that lives in the taskbar on windows.





That's not so strange, really.  When the app is installed (along with 
the database), the Windows disk cache probably has at least part of 
the database file cached -- after all, it just got finished writing it.


Robert


Re: [sqlite] Which VC6 options to use for fastest sqlite3?

2006-01-10 Thread Michael Sizaki

/MD /W3 /GX /O2 /Ob2 /D "NDEBUG" /D "WIN32" /D "_LIB" /D "_AFXDLL" /Fp"Release/LibSqlite3.pch" /YX /Fo"Release/" /Fd"Release/" /FD /c


Thx!


Since SQLite seems to be I/O bound, I'm not sure the compiler matters
that much. What I find is that performance on SCSI-equipped machines
is far superior to IDE. In some cases I've had perfectly acceptable
performance on my SCSI-based system which becomes unusable on my
customers' IDE-based systems. Not SQLite's fault; IDE and SATA drives
just aren't so hot.


Well, I got another mail saying that my app will probably
be disk bound, but that's not really the case for my
application:
- I insert data in huge transactions (a kind of bulk load).
- The application reads the data most of the
  time, and it reads the data sequentially.
  Therefore it is mostly CPU bound.
- I use a trick to speed up access: I read
  the data once in 'natural order'. This puts
  the data into the system disk cache. From then
  on, access is *much* faster compared to a setup
  where I don't read the data once initially.
  (A sketch of this warm-up follows below.)
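A minimal sketch of that warm-up, with placeholder table/column names; counting a real column forces SQLite to visit every record once, in rowid order:

  -- touch every row once so the OS file cache is populated:
  SELECT count(some_column) FROM data;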


Michael



[sqlite] Which VC6 options to use for fastest sqlite3?

2006-01-09 Thread Michael Sizaki

Hi,

which VC6 options should I use for a very fast SQLite DLL?
Any /D defines besides NDEBUG?
  /nologo /MT /W3 /GX /O2 /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /D "_MBCS" /D "_USRDLL"

Is there another C compiler that generates faster executables on
Windows?

Michael