Re: [sqlite] Problems with multiple threads?

2006-06-08 Thread Kervin L. Pierre
Hello,

I was under the impression that we could never
get an SQLITE_BUSY, not even on COMMIT, if
we use BEGIN EXCLUSIVE.  But this seems to
say that COMMITs on exclusive transactions
can throw SQLITE_BUSY?...

--- [EMAIL PROTECTED] wrote:

> then start the transaction initially with BEGIN
> EXCLUSIVE.  This
> will acquire the reserved lock immediately (instead
> of waiting until
> the first write occurs) and so you will either get
> an SQLITE_BUSY
> right away (when it is a simple matter to just rerun
> the BEGIN EXCLUSIVE
> statement until it works) or you can be assured of
> never getting
> another SQLITE_BUSY again until you try to COMMIT
> (and there too,
> you can simply rerun COMMIT repeatedly until it
> works.)

How is that?  At that point the process has
exclusive access to the database file.

Best regards,
Kervin




Re: [sqlite] Problems with multiple threads?

2006-06-08 Thread drh
"Kervin L. Pierre" <[EMAIL PROTECTED]> wrote:
> Hello,
> 
> I was under the impression that we could never
> get an SQLITE_BUSY, not even on COMMIT, if
> we use BEGIN EXCLUSIVE.  But this seems to
> say that COMMITs on exclusive transactions
> can throw SQLITE_BUSY?...
> 

You can get an SQLITE_BUSY on the BEGIN EXCLUSIVE
itself.  Assuming you make it past the BEGIN EXCLUSIVE
you should never get another SQLITE_BUSY in that
transaction.
--
D. Richard Hipp   <[EMAIL PROTECTED]>
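
For illustration, a minimal C sketch of the retry pattern described above,
assuming an already-open db handle (the back-off interval is arbitrary):

#include <sqlite3.h>
#include <unistd.h>

/* Loop on BEGIN EXCLUSIVE until the lock is won; once this returns
   SQLITE_OK, no further SQLITE_BUSY should occur in the transaction. */
int begin_exclusive(sqlite3 *db){
  int rc;
  while( (rc = sqlite3_exec(db, "BEGIN EXCLUSIVE", 0, 0, 0))==SQLITE_BUSY ){
    usleep(10000);  /* brief pause before retrying */
  }
  return rc;
}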



Re: [sqlite] Help. Database 4096 Mb in Windows XP

2006-06-08 Thread Dennis Cote

Андрей Лапин wrote:

Hi.
After my database file reached 4096 MB in size, no more records
could be added.  After a sqlite3_step call, the error reported by
sqlite3_errmsg is: "SQL logic error or missing database".
Please help; I need to process a large number of records.
Operating system: Windows XP (SP2)
SQLite version: 3.3.6
File system: NTFS


Are you sure you are using an NTFS file system?

I ran the following test on a Windows XP box with NTFS drive and it ran 
until the disk was full, at which point there were 2^28 records and the 
database file was 5.39 GB (5,789,097,984 bytes).


SQLite version 3.3.5
Enter ".help" for instructions
sqlite> create table t(a integer primary key, b, c);
sqlite> insert into t values(NULL, "hello","world");
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
2
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
4
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
8
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
16
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
32
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
64
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
128
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
256
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
512
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
1024
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
2048
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
4096
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
8192
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
16384
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
32768
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
65536
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
131072
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
262144
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
524288
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
1048576
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
2097152
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
4194304
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
8388608
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
16777216
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
33554432
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
67108864
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
134217728
sqlite> insert into t select NULL, b, c from t;
sqlite> select max(a) from t;
268435456
sqlite> insert into t select NULL, b, c from t;
SQL error: database or disk is full
sqlite> select max(a) from t;
268435456
sqlite> insert into t select NULL, b, c from t;
SQL error: database or disk is full
sqlite> select max(a) from t;
268435456
sqlite>

HTH
Dennis Cote


[sqlite] function pointers? - Re: [sqlite] DLLs containing user-defined SQL functions

2006-06-08 Thread Kervin L. Pierre
Hello,

Regardless of program loading design,
wouldn't this feature be better coded
using function pointers?  I.e., have a
"register/load" function that maps
functions in the exe?

PS.  It would be helpful to have
sqlite3OSMalloc() and sqlite3OSFree()
as function pointers as well, so an
application can do its own memory
management without recompiling the
SQLite library.
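
For illustration only, a hypothetical C sketch of the indirection proposed
above; the typedef and setter names are inventions, not part of SQLite:

#include <stdlib.h>

/* Replaceable allocator hooks: the library would call through these
   pointers, so an application could install its own allocator at
   startup without rebuilding the library. */
typedef void *(*os_malloc_fn)(size_t);
typedef void  (*os_free_fn)(void *);

static os_malloc_fn sqlite3OSMallocPtr = malloc;  /* default: libc */
static os_free_fn   sqlite3OSFreePtr   = free;

void sqlite3_set_allocator(os_malloc_fn m, os_free_fn f){
  sqlite3OSMallocPtr = m;
  sqlite3OSFreePtr   = f;
}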

Best regards,
Kervin

--- [EMAIL PROTECTED] wrote:

> "Igor Tandetnik" <[EMAIL PROTECTED]> wrote:
> > 
> > Note an inherent chicken and egg problem: you
> can't build two DLLs (or 
> > an EXE and a DLL) using this approach where a
> circular dependency 
> > exists, that is, where DLL A needs a function
> exported from DLL B, and 
> > at the same time DLL B needs a function exported
> from DLL A. To 
> > successfully link DLL A, you need an import
> library from DLL B, but an 
> > import library is produced as a side effect of
> link process, and to link 
> > DLL B you need an import library from DLL A, which
> you can't build until 
> > you've built B, ...   There is a way to break this
> circle with the use 
> > of so called export files (.exp ), but the
> technique is rather 
> > cumbersome. You don't want to go that way unless
> there's a gun to your 
> > head.
> > 
> 
> It's official then:  The lack of a sensible shared
> library loader
> is yet another reason to avoid windows at all costs.
>  In fact, 
> I'm thinking this reason will go near the top of the
> list.
> 
> Thanks, everybody, for your help.
> 
> --
> D. Richard Hipp   <[EMAIL PROTECTED]>
> 
> 



Re: [sqlite] Large DB Performance Questions

2006-06-08 Thread Mark Drago
On Wed, 2006-06-07 at 00:03 -0500, David Wollmann wrote:
> Mark Drago wrote:
> > Hello,
> >
> > I'm writing a web cache and I want to use SQLite to store the log of all
> > of the accesses made through the web cache.  The idea is to install this
> > web cache in large institutions (1000-5000 workstations).  The log
> > database can grow in size very quickly and can reach into the gigabytes
> > after just a few days.
> >
> > Writing to the database is speedy enough that I haven't seen much of a
> > problem.  I collect the data for 1000 web requests and then insert them
> > all in a single transaction using a prepared statement.  This works
> > rather well.
> >
> > The problem that I'm encountering has to do with generating reports on
> > the data in the log database.  SQLite is showing good performance on
> > some simple queries, but that is not the case once something more
> > advanced is involved, like an aggregate function for example.  Moreover,
> > once the SQLite file is cached in memory it is really quick.
> > However, I can't count on this file being cached at all when a user goes
> > to run the report.  So, I've been clearing my file cache before running
> > a test, and then running the same test again now that everything has
> > been loaded into the cache.  Like I said, for most cases SQLite is
> > fine, but here is one example where it doesn't fare as well.
> >   
> [snip]
> 
> Our solution to a very similar problem was to generate new tables with
> daily subtotals (since our reports deal with daily usage), doing as much
> processing as possible in advance. This has worked very well, allowing
> all our reports to run quickly enough that they almost feel as if the
> results are cached. I'm not a database guru by any means, this approach
> was suggested to me by a friend who writes code to manage CDRs for a
> telecom billing system. I owe him a beer. Ok, several beers.

We tried doing this with some of our reports and it does help with the
speed.  But, we wanted to support a broader range of reporting options
than just daily snapshots.  We wanted to allow the user to view
arbitrary date ranges and to slice up the data in other ways, so
creating daily reports that were mostly static wasn't something we
really wanted to do.  We also wanted the reports to contain information
that was relatively recent (last 15 minutes or so).  We didn't want to
make the user wait a full day for their report to be available.

Thanks for the reply,
Mark.
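
For illustration, a minimal C sketch of the batching technique described in
the quoted post; the table and column names are placeholders, not Mark's
actual schema:

#include <sqlite3.h>

/* Insert n rows through one prepared statement inside one transaction. */
int log_batch(sqlite3 *db, const char **hosts, int n){
  sqlite3_stmt *stmt = 0;
  int i, rc;
  sqlite3_exec(db, "BEGIN", 0, 0, 0);
  rc = sqlite3_prepare(db, "INSERT INTO log(host) VALUES(?)", -1, &stmt, 0);
  if( rc!=SQLITE_OK ){
    sqlite3_exec(db, "ROLLBACK", 0, 0, 0);
    return rc;
  }
  for(i=0; i<n; i++){
    sqlite3_bind_text(stmt, 1, hosts[i], -1, SQLITE_STATIC);
    sqlite3_step(stmt);
    sqlite3_reset(stmt);  /* reuse the statement for the next row */
  }
  sqlite3_finalize(stmt);
  return sqlite3_exec(db, "COMMIT", 0, 0, 0);
}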




RE: [sqlite] Problems with multiple threads?

2006-06-08 Thread Pat Wibbeler
No problem.  In fact, I had to consult the docs again to be sure!  I've
sorted through these several times myself.

They are quite good, though I have to admit that I'm constantly having
to consult them to remember how the locking works.  I do agree that a
BEGIN SHARED would be nice for cases where you'd like consistent reads
across multiple statements without using a BEGIN IMMEDIATE and locking
out other threads doing the same.

Pat

-Original Message-
From: A. Pagaltzis [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, June 07, 2006 7:01 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Problems with multiple threads?

* Pat Wibbeler <[EMAIL PROTECTED]> [2006-06-07 22:55]:
> It's entirely possible I'm reading these docs incorrectly, but
> this strategy has worked quite well for me.

No, I don't see any error in your reading. My apologies; I should
have consulted the docs instead of going by mailing list posts.

It's interesting that there's no way to force a SHARED lock to be
obtained immediately. The available mechanisms allow serialising
write operations with respect to each other, but not forcing a
well-defined sequence of read operations relative to write
operations.

Regards,
-- 
Aristotle Pagaltzis // 
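
One workaround sometimes suggested (a sketch of mine, not something from
this thread): start a deferred transaction and immediately run a harmless
SELECT, which takes the SHARED lock at that point and holds it until the
transaction ends:

#include <sqlite3.h>

/* Acquire a SHARED lock right away, for consistent reads across
   multiple statements. */
int begin_shared(sqlite3 *db){
  int rc = sqlite3_exec(db, "BEGIN", 0, 0, 0);
  if( rc==SQLITE_OK ){
    rc = sqlite3_exec(db, "SELECT 1 FROM sqlite_master LIMIT 1", 0, 0, 0);
  }
  return rc;  /* SHARED lock is held until COMMIT or ROLLBACK */
}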


[sqlite] .import segfault

2006-06-08 Thread Michael Somos
I just ran across this little bug


> sqlite3
SQLite version 3.3.6
Enter ".help" for instructions
sqlite> .import x y
Segmentation fault


gdb shows where it took place:


sqlite> .import x y

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 1024 (LWP 16754)]
sqlite3SafetyOn (db=0x0) at ../sqlite-3.3.6/src/util.c:1149
1149  if( db->magic==SQLITE_MAGIC_OPEN ){


There may be other places where not enough checks are made.
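
For illustration, a sketch of the kind of null-handle guard that prevents
this class of crash (the helper is hypothetical, not the actual fix):

#include <stdio.h>
#include <sqlite3.h>

/* Refuse to pass a NULL handle into the library. */
static int safe_exec(sqlite3 *db, const char *sql){
  if( db==0 ){
    fprintf(stderr, "Error: no database is open\n");
    return SQLITE_MISUSE;
  }
  return sqlite3_exec(db, sql, 0, 0, 0);
}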


Re: [sqlite] Extra functions - New Project?

2006-06-08 Thread drh
Mikey C <[EMAIL PROTECTED]> wrote:
> 
> I am in need of some new SQL functions and wanted to ask advice on the best
> way to integrate these functions into SQLite 3. 
>

See http://www.sqlite.org/cvstrac/wiki?p=LoadableExtensions

This is still a work in progress...
--
D. Richard Hipp   <[EMAIL PROTECTED]>
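
In the meantime, functions can be registered directly from application
code.  A minimal sketch using the 3.x C API (the "half" function is
illustrative):

#include <sqlite3.h>

/* SQL scalar: half(x) returns x/2 as a float. */
static void halfFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
  (void)argc;
  sqlite3_result_double(ctx, 0.5 * sqlite3_value_double(argv[0]));
}

int register_half(sqlite3 *db){
  /* afterwards: SELECT half(x) FROM t; */
  return sqlite3_create_function(db, "half", 1, SQLITE_UTF8, 0,
                                 halfFunc, 0, 0);
}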



Re: [sqlite] .import segfault

2006-06-08 Thread drh
Michael Somos <[EMAIL PROTECTED]> wrote:
> I just ran across this little bug
> 
> 
> > sqlite3
> SQLite version 3.3.6
> Enter ".help" for instructions
> sqlite> .import x y
> Segmentation fault
> 
> 

Thanks for the report.

This is a bug in the command-line shell, not in the
SQLite core.  Fixed now.  See 

   http://www.sqlite.org/cvstrac/chngview?cn=3209

--
D. Richard Hipp   <[EMAIL PROTECTED]>



[sqlite] SQLite3::SQLException when unable to write to DB

2006-06-08 Thread Yash Ganthe
Hi,

 

I use sqlite3.3.4 on windows.

I have two instances of a program that attempt to simultaneously write to
the same database.db file.  When I run them together, I observe that a
database.db-journal file is created.  The programs crash with an exception
thrown.  The exception found in the program is:

SQLite3::SQLException: SQL logic error or missing database

 

I would have expected this to be SQLite3::BusyException. What is the
difference between the two? I need to handle the case when one program is
unable to write to the database when another is writing. How do I do it?

 

I have also observed that the -journal file cannot be deleted for some time
after the programs crash. Why is that?

 

Thanks,

Yash
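
One common way to handle writer contention (a sketch; it may or may not
explain the SQLException above) is to install a busy timeout so SQLite
retries locked operations internally instead of failing at once:

#include <sqlite3.h>

/* Open the database and make lock contention block-and-retry for up
   to five seconds before returning SQLITE_BUSY. */
int open_log_db(sqlite3 **pdb){
  int rc = sqlite3_open("database.db", pdb);
  if( rc==SQLITE_OK ) sqlite3_busy_timeout(*pdb, 5000);
  return rc;
}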




Re: [sqlite] Large DB Performance Questions

2006-06-08 Thread Mark Drago
On Thu, 2006-06-08 at 03:58 +0200, Michael Sizaki wrote:
> Hi Mark,
> 
> have you tried to do a VACUUM on the database?
> It helps a lot when it comes to the 'read ahead'
> feature of the database.
> 
> Michael

Michael,

Well, I just repeated my tests to see if vacuum would have any
noticeable improvement on the query times that I have been seeing.  Here
are my findings:

Cold Cache before vacuum: 217s
Warm Cache before vacuum: 1.6s
Vacuum time: 1314s = 21m54s
Cold Cache after vacuum:  206s
Warm Cache after vacuum:  1.6s

Of course, right after performing the vacuum the database would have
been cached.  So, I made sure to clear my file cache before running the
tests after the vacuum.  I can get the database into the file cache by
just running 'dd if=log.db of=/dev/null', and that only takes 16 seconds.
But of course the file cache will only help this much when there is
enough memory in the machine to hold the database.  I guess that since
my query uses only one of the indexes and never reads the table proper,
just getting the part of the file that contains the index into memory
would be just as good.

Thanks for the suggestion,
Mark.


> Mark Drago wrote:
> > Hello,
> > 
> > I'm writing a web cache and I want to use SQLite to store the log of all
> > of the accesses made through the web cache.  The idea is to install this
> > web cache in large institutions (1000-5000 workstations).  The log
> > database can grow in size very quickly and can reach into the gigabytes
> > after just a few days.
> > 
> > Writing to the database is speedy enough that I haven't seen much of a
> > problem.  I collect the data for 1000 web requests and then insert them
> > all in a single transaction using a prepared statement.  This works
> > rather well.
> > 
> > The problem that I'm encountering has to do with generating reports on
> > the data in the log database.  SQLite is showing good performance on
> > some simple queries, but that is not the case once something more
> > advanced is involved, like an aggregate function for example.  Moreover,
> > once the SQLite file is cached in memory it is really quick.
> > However, I can't count on this file being cached at all when a user goes
> > to run the report.  So, I've been clearing my file cache before running
> > a test, and then running the same test again now that everything has
> > been loaded into the cache.  Like I said, for most cases SQLite is
> > fine, but here is one example where it doesn't fare as well.
> > 
> > The system that I'm running these tests on is a P4 2.8GHz HT with 1 GB
> > of RAM running Fedora Core 5 and using SQLite version 3.3.3 (being as
> > that is what comes with FC5).  I'm doing my tests with a database that
> > is 732M in size and contains 1,280,881 records (the DB schema is
> > included below).
> > 
> > I clear the file cache by running the following command. I wait until it
> > consumes all of memory and then I kill it:
> > perl -e '@f[0..1]=0'
> > 
> > I'm running the tests by running the following script:
> > #!/bin/bash
> > echo "$1;" | sqlite3 log.db > /dev/null
> > 
> > The query I'm running is the following:
> > select count(host), host from log group by host;
> > 
> > The results include the first time the query is run (when the file is
> > not cached) and then the times of a few runs after that (when the file
> > is cached).
> > 
> > SQLite: 221.9s, 1.6s, 1.6s, 1.6s
> >  MySQL:   2.2s, 1.8s, 1.8s, 1.8s
> > 
> > The MySQL tests were done with the following script:
> > #!/bin/bash
> > mysql -u root --database=log -e "$1" > /dev/null
> > 
> > It is apparent that SQLite is reading the entire database off of the
> > disk and MySQL somehow is not.  The MySQL query cache is not in use on
> > this machine and MySQL does not claim very much memory for itself before
> > the test is conducted (maybe 30M).
> > 
> > I've tried looking in to the output from 'explain' to see if SQLite was
> > using the index that I have on the 'host' column, but I don't think it
> > is.  The output from 'explain' is included below.  Note that the
> > 'explain' output is from a different machine which is running SQLite
> > 3.3.5 compiled from source as the SQLite on FC5 kept Segfaulting when I
> > tried to use 'explain'.
> > 
> > Any information or ideas on how to speed up this query are greatly
> > appreciated.  The only un-implemented idea I have right now is to remove
> > some of the duplicated data from the schema in an attempt to reduce the
> > size of the average row in the table.  In some cases I can store just an
> > integer where I'm storing both the integer and a descriptive string
> > (category_name and category_no for example).  Some of the other
> > information in the schema holds data about things that are internal to
> > the web cache (profile*, ad*, etc.).
> > 
> > Thank you very much for any ideas,
> > Mark.
> > 
> > TABLE SCHEMA:
> > CREATE TABLE log(
> > log_no integer primary key,
> > add_dte datetime,
> > profile_nam

Re: [sqlite] Multithreading. Again.

2006-06-08 Thread Florian Weimer
> Remember, that the operating system bug that is causing all the
> multithreading grief is that file locks created by one thread
> cannot be reliably removed or modified by a different thread.
> So if a statement acquires a lock on the database file in one
> thread and you try to finalize the statement in a different
> thread, the finalization would involve releasing the lock in
> a different thread from which it was acquired - an operation
> that silently fails on certain Linux kernels.

Hmm.  The main reason why I'm asking is that I'd like to cleanly shut
down the database even when the application terminates due to an
unhandled exception or something like that.  The try/finally approach
is often helpful, but it won't help you unless all uses of the handle are
statically nested (as far as the call graph is concerned).  The
garbage collector (or, in my case, a special cleanup handler) could
deal with the remaining open statement or database handles.

If RH9 is the main remaining problem OS, I can probably live with
that, especially if none of the RHELs is affected.
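
For illustration, a sketch of working within the constraint quoted above:
each thread opens, uses, finalizes, and closes its own connection, so locks
are always released by the thread that acquired them (the query and table
name are illustrative):

#include <pthread.h>
#include <sqlite3.h>

static void *worker(void *arg){
  const char *path = (const char *)arg;  /* database filename */
  sqlite3 *db = 0;
  sqlite3_stmt *stmt = 0;
  if( sqlite3_open(path, &db)==SQLITE_OK ){
    if( sqlite3_prepare(db, "SELECT count(*) FROM t", -1,
                        &stmt, 0)==SQLITE_OK ){
      while( sqlite3_step(stmt)==SQLITE_ROW ){ /* consume rows */ }
      sqlite3_finalize(stmt);  /* finalized on the acquiring thread */
    }
    sqlite3_close(db);         /* closed on the same thread too */
  }
  return 0;
}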


[sqlite] how to recover a corrupted database?

2006-06-08 Thread Joseph J. Strout
I've been using a SQLite database for a few weeks.  Today, it 
suddenly stopped working.  The database has a single table, and can 
correctly report that:



 select name, sql from sqlite_master


|name |sql
|-|-|
|tMessages|CREATE TABLE tMessages(fMs...|
|-|-|
1 record.

But pretty much any other command reports an error.  For example:


 select count(*) from tMessages

Error: SQL logic error or missing database


 select fMsgID from tMessages where fMsgID='jjstrout06011-3228733882.560'

Error: SQL logic error or missing database

Furthermore, a VACUUM command produces a different error:


 vacuum

Error: database disk image is malformed

So clearly my database is corrupted.  My two questions (three 
questions!) are, how can this happen?  Is there any way to prevent 
it?  And is there any way to recover my data once I'm in this state?


Thanks,
- Joe

--

Joseph J. Strout
[EMAIL PROTECTED]



Re: [sqlite] how to recover a corrupted database?

2006-06-08 Thread drh
"Joseph J. Strout" <[EMAIL PROTECTED]> wrote:
> 
> So clearly my database is corrupted.  My two questions (three 
> questions!) are, how can this happen? 

See section 6.0 at http://www.sqlite.org/lockingv3.html

> Is there any way to prevent it?  

Don't do the things described in the document above that
will result in a corrupt database.  Sometimes you can't
help it.  If your disk controller causes faulty information
to be written to the disk platter, there is nothing the
software can do about it.  I have flaky memory on my
desktop machine that sometimes will flip a random bit
or two on hot days - nothing the software can do about
that either.

Generally speaking, corruption is not often a problem.

> And is there any way to recover my data once I'm in this state?
> 

Sometimes the ".dump" command on the command-line shell will
recover some of the data.  It is worth a try.  Besides that,
you will need to restore from a backup.  There is little to
no redundancy in an SQLite database file, so when things go
bad, there is not much that can be done to recover.
--
D. Richard Hipp   <[EMAIL PROTECTED]>
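
For reference, the classic form of that recovery attempt (standard
command-line usage; how much it salvages varies):

  $ sqlite3 corrupt.db .dump | sqlite3 recovered.db

This pipes whatever ".dump" can still read out of the damaged file into a
fresh database.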



Re: [sqlite] disabling rollback journal

2006-06-08 Thread JP

Michael Scharf wrote:

Without journalling, you cannot have a ROLLBACK command.
And the semantics of UPDATE become UPDATE OR FAIL instead
of the default UPDATE OR ABORT.  The difference is subtle,
but important.



There are other reasons to disable the rollback journal:
Suppose you want to create a database for querying only.
The real data persistence happens somewhere else. The
database is there to organize the data and to access the
data. Once the database is filled, it will not be modified.

If the filling fails, you try to refill it again.

You can also think of using it for a 'bulk fill': you
only use it when you fill the database with the (massive)
initial data.  From then on you use sqlite with journaling...

Michael


I agree.  I have a particular need for a "read only" environment, where 
editing happens elsewhere.  This readonly database has extra indices and 
denormalized tables to speed up searches, but no need to INSERT, DELETE, 
UPDATE or CREATE/DROP anything.


I also think a "readonly" version of sqlite would be nice to have.  An 
even smaller dll/lib which only allows for SELECTs.  I might attempt to 
do this when I have the time.


Has anybody done anything like this?

jp
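
Not the smaller library asked for, but a related sketch: assuming the
sqlite3_open_v2() interface from releases newer than the 3.3.x line
discussed here, access can at least be forced read-only at open time:

#include <sqlite3.h>

/* Open strictly read-only; any write attempt fails with an error.
   NOTE: sqlite3_open_v2 is an assumption -- it postdates 3.3.x. */
int open_readonly(const char *path, sqlite3 **pdb){
  return sqlite3_open_v2(path, pdb, SQLITE_OPEN_READONLY, 0);
}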



Re: [sqlite] how to recover a corrupted database?

2006-06-08 Thread joe
On Jun 08, 2006, at 18:20 UTC, [EMAIL PROTECTED] wrote:
 
> See section 6.0 at http://www.sqlite.org/lockingv3.html

Thanks.  Nothing there obviously applies in this case -- there was no power 
failure, and no files were moved or deleted as far as I'm aware -- but these 
are good tips to keep in mind anyway.  It's possible that it was just some 
random bit flip; it was buggers hot here yesterday.

I also occasionally back up the database using subversion ("svn commit"), while 
the app that uses it is still running.  My belief is that subversion only reads 
a file to commit it, and doesn't write to it, but it's possible that is wrong.

Thanks to the backup, I only lost about a day's worth of data, and much of that 
was recoverable from other sources.  It sounds like corruption is fairly rare, 
so for now I'll just bolster my backup & recovery procedures and stick with it.

Thanks,
- Joe

P.S. My apologies for the double posting -- I thought the first one had gotten 
lost because it was sent too quickly after I joined the list.

--
Joe Strout -- [EMAIL PROTECTED]
Verified Express, LLC "Making the Internet a Better Place"
http://www.verex.com/



Re: [sqlite] how to recover a corrupted database?

2006-06-08 Thread Jay Sprenkle

On 6/8/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

On Jun 08, 2006, at 18:20 UTC, [EMAIL PROTECTED] wrote:

> See section 6.0 at http://www.sqlite.org/lockingv3.html

Thanks.  Nothing there obviously applies in this case -- there was no power 
failure, and no files were moved or deleted as far as I'm aware -- but these 
are good tips to keep in mind anyway.  It's possible that it was just some 
random bit flip; it was buggers hot here yesterday.


I found this to be excellent at spotting memory problems:
http://www.memtest86.com/
No installation or even a hard disk on the system is needed.
Burn the image to a CD and boot from the CD to test.


--
SqliteImporter, SqliteReplicator: Command line utilities for Sqlite
http://www.reddawn.net/~jsprenkl/Sqlite

Cthulhu Bucks!  http://www.cthulhubucks.com


[sqlite] Index usage tracking

2006-06-08 Thread Dennis Jenkins
Hello all,

I would like to know where the best place in sqlite is to patch to
have it record (syslog for unix, OutputDebugString() for windows,
nothing fancy) each time it decides to use an index to satisfy a query. 
For that matter, also each time it runs a select query and decides
not to use an index.  This is only for in-house debugging.  Consider it
"coverage testing" of all of our SQL to determine if I have the correct
indices.

I'd like the logging to record/emit the original SQL and the names
of the indices (if any) used to execute that SQL.

Our app has grown and morphed over the past two years.  It has LOTS
of sql in it now.  Granted, I could isolate all of this sql (even the
dynamically generated stuff) (select, delete, update statements) and run
it through the analyzer.  However, if the above mentioned "hack" is easy
to do, then I would prefer the hack.  I'm trying to determine if all of
my indices are actually being used, and how frequently they are
used during a typical run of our software.

I've been reading through "select.c" and "vdbe.c".  I'm not sure if
I should add the hack to the Virtual Machine opcode emitting code or the
opcode consuming code.  Maybe there is already a solution to my problem
and I simply didn't see it.
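
For the logging side of this, a small sketch of the "nothing fancy" helper
described above (syslog for unix, OutputDebugString() for windows); call
sites would pass the SQL text and chosen index names:

#ifdef _WIN32
#include <windows.h>
#else
#include <syslog.h>
#endif

/* Emit one index-usage record to the platform's debug channel. */
static void index_log(const char *msg){
#ifdef _WIN32
  OutputDebugStringA(msg);
#else
  syslog(LOG_DEBUG, "%s", msg);
#endif
}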



Re: [sqlite] Index usage tracking

2006-06-08 Thread drh
Dennis Jenkins <[EMAIL PROTECTED]> wrote:
> 
> I would like to know where the best place in sqlite is to patch to
> have it record (syslog for unix, OutputDebugString() for windows,
> nothing fancy) each time it decides to use an index to satisfy a query. 

The index decisions are all made in where.c and there is already
code in that file for printing out the decisions for testing
purposes.  I suggest you search for SQLITE_TEST inside where.c,
see that current testing code, and modify that to do whatever
it is you want to do.

--
D. Richard Hipp   <[EMAIL PROTECTED]>



Re: [sqlite] Index usage tracking

2006-06-08 Thread Dennis Jenkins
[EMAIL PROTECTED] wrote:
> Dennis Jenkins <[EMAIL PROTECTED]> wrote:
>   
>> I would like to know where the best place in sqlite is to patch to
>> have it record (syslog for unix, OutputDebugString() for windows,
>> nothing fancy) each time it decides to use an index to satisfy a query. 
>> 
>
> The index decisions are all made in where.c and there is already
> code in that file for printing out the decisions for testing
> purposes.  I suggest you search for SQLITE_TEST inside where.c,
> see that current testing code, and modify that to do whatever
> it is you want to do.
>
> --
> D. Richard Hipp   <[EMAIL PROTECTED]>
>
>   
Thank you very much.  I see the code that I need to tweak now.  ;)



[sqlite] disabling large file support

2006-06-08 Thread Doug Shelton
How does one disable large file support?  As mentioned in comments, I've added 
-DSQLITE_DISABLE_LFS to the Makefile, but continue to get errors indicating 
lack of kernel support for large files.  The following lines are the end of my 
compile (so you can see make options) and the behavior of the resulting sqlite3.

./libtool --mode=link gcc -g -O2 -DOS_BEOS=1 -DSQLITE_DISABLE_LFS 
-DHAVE_USLEEP=1 -I. -I./src -DNDEBUG   -DTHREADSAFE=1 
-DSQLITE_THREAD_OVERRIDE_LOCK=-1 -DSQLITE_OMIT_CURSOR -DHAVE_READLINE=0  -lroot 
-lbe \
-o sqlite3 ./src/shell.c libsqlite3.la \
-lreadline -lreadline
gcc -g -O2 -DOS_BEOS=1 -DSQLITE_DISABLE_LFS -DHAVE_USLEEP=1 -I. -I./src 
-DNDEBUG -DTHREADSAFE=1 -DSQLITE_THREAD_OVERRIDE_LOCK=-1 -DSQLITE_OMIT_CURSOR 
-DHAVE_READLINE=0 -o sqlite3 ./src/shell.c  ./.libs/libsqlite3.a -lroot -lbe 
-lreadline
$ sqlite3 test.db
SQLite version 3.3.5
Enter ".help" for instructions
sqlite> .databases
Error: kernel lacks large file support
sqlite> .exit
$

This is in continuing work to support sqlite3 under BeOS.  Any assistance would 
be greatly appreciated.



[sqlite] [ANN] rq-2.3.3

2006-06-08 Thread Ara.T.Howard



NAME
  rq v2.3.3

SYNOPSIS
  rq (queue | export RQ_Q=q) mode [mode_args]* [options]*

URIS

  http://codeforpeople.com/lib/ruby/rq/
  http://raa.ruby-lang.org/project/rq/
  http://www.linuxjournal.com/article/7922

DESCRIPTION
  ruby queue (rq) is a zero-admin zero-configuration tool used to create
  instant unix clusters.  rq requires only a central nfs filesystem in order
  to manage a simple sqlite database as a distributed priority work queue.
  this simple design allows researchers to install and configure, in only a
  few minutes and without root privileges, a robust unix cluster capable of
  distributing processes to many nodes - bringing dozens of powerful cpus to
  their knees with a single blow.  clearly this software should be kept out of
  the hands of free radicals, seti enthusiasts, and one mr. j safran.

  the central concept of rq is that n nodes work in isolation to pull jobs
  from a centrally mounted nfs priority work queue in a synchronized fashion.
  the nodes have absolutely no knowledge of each other and all communication
  is done via the queue meaning that, so long as the queue is available via
  nfs and a single node is running jobs from it, the system will continue to
  process jobs.  there is no centralized process whatsoever - all nodes work
  to take jobs from the queue and run them as fast as possible.  this creates
  a system which load balances automatically and is robust in face of node
  failures.

  although the rq system is simple in its design it features powerful
  functionality such as priority management, predicate and sql queries, compact
  streaming command-line processing, programmable api, hot-backup, and
  input/capture of the stdin/stdout/stderr io streams of remote jobs.  to date
  rq has had no reported runtime failures and is in operation at dozens of
  research centers around the world.

HISTORY

  2.3.3:
- fixed bug in updater related to io files (see tmp_stdin in jobqueue.rb)
- [IMPORTANT] removed feature where stdin jobs could contain comments ('#').
  input lines are now taken literally __except__ blank lines, which are
  still ignored.
- added ability to dump stdin/stdout/stderr for any job

rq q stdout 42

cat jids | rq q stdout -

- added ability to dump stdin/stdout/stderr pathnames for any job

rq q stdout4 42

cat jids | rq q stderr4 -

- rotate was made more robust and tolerates nfs errors to some degree


enjoy.

-a
--
suffering increases your inner strength.  also, the wishing for suffering
makes the suffering disappear.
- h.h. the 14th dalai lama