RE: [sqlite] ORDER BY of UNION?

2006-03-01 Thread Boris Popov
Okay, that works, but is there a way to make it work with the t1 alias? I'm
porting an existing application to SQLite, and changing all queries to not
use aliases may be problematic, if you know what I mean. Looking at the
syntax page I don't see how using t1 is illegal, and yet it clearly doesn't
work as expected. Any more hints, ideas or suggestions?

Thanks!

-Boris

-- 
+1.604.689.0322
DeepCove Labs Ltd.
4th floor 595 Howe Street
Vancouver, Canada V6C 2T5

[EMAIL PROTECTED]

CONFIDENTIALITY NOTICE

This email is intended only for the persons named in the message
header. Unless otherwise indicated, it contains information that is
private and confidential. If you have received it in error, please
notify the sender and delete the entire message including any
attachments.

Thank you.

-Original Message-
From: Darren Duncan [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, March 01, 2006 10:19 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] ORDER BY of UNION?

At 8:59 PM -0800 3/1/06, Boris Popov wrote:
>I can't seem to get unions to sort properly,
>
>SELECT DISTINCT * FROM (SELECT t1.ID
>FROM GR_ADDRESS t1 UNION ALL SELECT t1.ID
>FROM PERSON t1) t1 ORDER BY t1.ID DESC
>
>results in "no such column: t1.ID" error. How would I go about sorting the
>result set in this case?
>
>Cheers!
>-Boris

Try removing the "t1." from both inner select statements, so it just 
says "select id" in both places. -- Darren Duncan
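Darren's suggestion can be sanity-checked with a short Python sqlite3 sketch; the minimal schema below (single ID columns, a few sample rows) is an assumption standing in for the real tables:

```python
import sqlite3

# Minimal stand-ins for the tables in the thread; the columns and the
# sample rows are assumptions for illustration only.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE GR_ADDRESS (ID INTEGER);
    CREATE TABLE PERSON (ID INTEGER);
    INSERT INTO GR_ADDRESS VALUES (1), (3);
    INSERT INTO PERSON VALUES (2);
""")

# With the bare column name in the inner selects, the outer ORDER BY
# can refer to ID and the compound result sorts as expected.
rows = con.execute("""
    SELECT DISTINCT * FROM (
        SELECT ID FROM GR_ADDRESS
        UNION ALL
        SELECT ID FROM PERSON
    ) ORDER BY ID DESC
""").fetchall()
print([r[0] for r in rows])  # → [3, 2, 1]
```

(Recent SQLite builds are more permissive about qualified names in this position, so the original "no such column" error may not reproduce on a modern version.)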




Re: [sqlite] ORDER BY of UNION?

2006-03-01 Thread Darren Duncan

At 8:59 PM -0800 3/1/06, Boris Popov wrote:

I can't seem to get unions to sort properly,

SELECT DISTINCT * FROM (SELECT t1.ID
FROM GR_ADDRESS t1 UNION ALL SELECT t1.ID
FROM PERSON t1) t1 ORDER BY t1.ID DESC

results in "no such column: t1.ID" error. How would I go about sorting the
result set in this case?

Cheers!
-Boris


Try removing the "t1." from both inner select statements, so it just 
says "select id" in both places. -- Darren Duncan


Re: [sqlite] Extrange files

2006-03-01 Thread Nuno Lucas
On 3/2/06, Paul G <[EMAIL PROTECTED]> wrote:
> From: "Nuno Lucas" <[EMAIL PROTECTED]>
> > The only problem is that there can be a race condition between the
> > closing of the handle and the actual delete call (because you can't
> > delete an open file in the Windows world).
>
> you lie!  if the file is opened with the delete sharemode set, it can be
> 'deleted' while open. in this case, it will be *really* deleted when the
> refcount drops to 0, iirc. besides, these hoops don't need to be jumped
> through if you're the process that issued the original CreateFile(), again
> iirc.

You're right, of course. When I wrote that I was only thinking of
Windows CE, which doesn't implement the FILE_SHARE_DELETE mode (at least
for CE <= 4.2; not sure about the 5.0 version).

But note that the flag only exists on NT, not on Win9x (which can be
considered obsolete now, but was important enough that I never used the
flag before).


Best regards,
~Nuno Lucas


RE: [sqlite] Compound Ops: syntax error?

2006-03-01 Thread Boris Popov
Okay, false alarm, figured it out myself. For some reason SQLite doesn't
like the extra set of parens, so the following works okay,

SELECT DISTINCT * FROM (SELECT t1.ID, t1.STREET, t1.HOUSE_NUM
 FROM GR_ADDRESS t1
 WHERE t1.ID = 1 UNION ALL SELECT t1.ID, t1.STREET, t1.HOUSE_NUM
 FROM GR_ADDRESS t1
 WHERE t1.ID = 2) t1

instead of,

SELECT DISTINCT * FROM (SELECT t1.ID, t1.STREET, t1.HOUSE_NUM
 FROM GR_ADDRESS t1
 WHERE (t1.ID = 1)) UNION ALL (SELECT t1.ID, t1.STREET, t1.HOUSE_NUM
 FROM GR_ADDRESS t1
 WHERE (t1.ID = 2)) 

Don't you find it strange though?
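The two forms can be compared directly from Python; this sketch assumes only the GR_ADDRESS table name from the thread:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE GR_ADDRESS (ID INTEGER, STREET TEXT, HOUSE_NUM TEXT)")

# SQLite's grammar does not allow parentheses around the operands of a
# compound operator, so this form fails with: near "(": syntax error.
bad = """
    SELECT ID FROM GR_ADDRESS WHERE (ID = 1)
    UNION ALL
    (SELECT ID FROM GR_ADDRESS WHERE ID = 2)
"""
try:
    con.execute(bad)
    parens_accepted = True
except sqlite3.OperationalError:
    parens_accepted = False
print(parens_accepted)  # → False

# Wrapping the whole compound in a single subquery parses fine.
good = """
    SELECT DISTINCT * FROM (
        SELECT ID FROM GR_ADDRESS WHERE ID = 1
        UNION ALL
        SELECT ID FROM GR_ADDRESS WHERE ID = 2
    )
"""
rows = con.execute(good).fetchall()
print(rows)  # → [] (the table is empty here)
```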

-Boris


-Original Message-
From: Boris Popov [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, March 01, 2006 7:48 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] Compound Ops: syntax error?

Not sure if it's me, but I just can't figure out why these queries don't
work. As far as I can tell from the docs, all of these compound operators
are supported: UNION | UNION ALL | INTERSECT | EXCEPT

Any ideas?

SQL OK:

SELECT t1.ID, t1.STREET, t1.HOUSE_NUM
 FROM GR_ADDRESS t1

SQL NOT OKAY 1: near "(": syntax error

SELECT DISTINCT * FROM (SELECT t1.ID, t1.STREET, t1.HOUSE_NUM
 FROM GR_ADDRESS t1
 WHERE (t1.ID = 1)) EXCEPT (SELECT t1.ID, t1.STREET, t1.HOUSE_NUM
 FROM GR_ADDRESS t1
 WHERE (t1.ID = 2))

SQL NOT OKAY 2: near "(": syntax error

SELECT DISTINCT * FROM (SELECT t1.ID, t1.STREET, t1.HOUSE_NUM
 FROM GR_ADDRESS t1
 WHERE (t1.ID = 1)) INTERSECT (SELECT t1.ID, t1.STREET, t1.HOUSE_NUM
 FROM GR_ADDRESS t1
 WHERE (t1.ID = 2)) 

SQL NOT OKAY 3: near "(": syntax error

SELECT DISTINCT * FROM (SELECT t1.ID, t1.STREET, t1.HOUSE_NUM
 FROM GR_ADDRESS t1
 WHERE (t1.ID = 1)) UNION ALL (SELECT t1.ID, t1.STREET, t1.HOUSE_NUM
 FROM GR_ADDRESS t1
 WHERE (t1.ID = 2)) 

Thanks!

-Boris





[sqlite] ORDER BY of UNION?

2006-03-01 Thread Boris Popov
I can't seem to get unions to sort properly,

SELECT DISTINCT * FROM (SELECT t1.ID
FROM GR_ADDRESS t1 UNION ALL SELECT t1.ID
FROM PERSON t1) t1 ORDER BY t1.ID DESC

results in "no such column: t1.ID" error. How would I go about sorting the
result set in this case?

Cheers!

-Boris





Re: [sqlite] Extrange files

2006-03-01 Thread Paul G


- Original Message - 
From: "Nuno Lucas" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, March 01, 2006 10:36 PM
Subject: Re: [sqlite] Extrange files


The only problem is that there can be a race condition between the
closing of the handle and the actual delete call (because you can't
delete an open file in the Windows world).


you lie!  if the file is opened with the delete sharemode set, it can be 
'deleted' while open. in this case, it will be *really* deleted when the 
refcount drops to 0, iirc. besides, these hoops don't need to be jumped 
through if you're the process that issued the original CreateFile(), again 
iirc.


-p 



[sqlite] Compound Ops: syntax error?

2006-03-01 Thread Boris Popov
Not sure if it's me, but I just can't figure out why these queries don't
work. As far as I can tell from the docs, all of these compound operators
are supported: UNION | UNION ALL | INTERSECT | EXCEPT

Any ideas?

SQL OK:

SELECT t1.ID, t1.STREET, t1.HOUSE_NUM
 FROM GR_ADDRESS t1

SQL NOT OKAY 1: near "(": syntax error

SELECT DISTINCT * FROM (SELECT t1.ID, t1.STREET, t1.HOUSE_NUM
 FROM GR_ADDRESS t1
 WHERE (t1.ID = 1)) EXCEPT (SELECT t1.ID, t1.STREET, t1.HOUSE_NUM
 FROM GR_ADDRESS t1
 WHERE (t1.ID = 2))

SQL NOT OKAY 2: near "(": syntax error

SELECT DISTINCT * FROM (SELECT t1.ID, t1.STREET, t1.HOUSE_NUM
 FROM GR_ADDRESS t1
 WHERE (t1.ID = 1)) INTERSECT (SELECT t1.ID, t1.STREET, t1.HOUSE_NUM
 FROM GR_ADDRESS t1
 WHERE (t1.ID = 2)) 

SQL NOT OKAY 3: near "(": syntax error

SELECT DISTINCT * FROM (SELECT t1.ID, t1.STREET, t1.HOUSE_NUM
 FROM GR_ADDRESS t1
 WHERE (t1.ID = 1)) UNION ALL (SELECT t1.ID, t1.STREET, t1.HOUSE_NUM
 FROM GR_ADDRESS t1
 WHERE (t1.ID = 2)) 

Thanks!

-Boris





Re: [sqlite] Extrange files

2006-03-01 Thread Nuno Lucas
On 3/1/06, Ralf Junker <[EMAIL PROTECTED]> wrote:
>
> >But I do not think that DOS has the ability
> >to automatically delete a file when it is closed, so the
> >files remain on disk after SQLite has finished with them.
>
> Just a thought:
>
> Wouldn't it be possible to have SQLite thoughtfully delete all files it
> creates when closing the database instead of relying on the operating system?
>
> I am not sure, but there might be plenty of OSes besides DOS that do not 
> automatically delete temporary files.

That's what the Windows CE port does, and I think I remember it's also
done in the Mac port (I don't have the code in front of me and have never
developed for a Mac).

The only problem is that there can be a race condition between the
closing of the handle and the actual delete call (because you can't
delete an open file in the Windows world).

In the case of Windows CE, that risk is negligible. It would be even
less for DOS (last time I checked, DOS wasn't multitasking, if we exclude
TSRs).

On (desktop) Windows this is less true, because of things like
anti-virus software that opens a file for scanning as soon as it is closed,
making the next delete fail (but only sometimes, and never on the
developer's machine ;-).

> Ralf

Best regards,
~Nuno Lucas


Re: [sqlite] New columns not recognized by other connections

2006-03-01 Thread drh
Blake Ross <[EMAIL PROTECTED]> wrote:
> 4. In process 1, execute: ALTER TABLE test ADD bar;
> 5. In process 2, execute INSERT INTO test(bar) VALUES(1);
> 
> After executing step 5, you get an SQLITE_ERROR that table "test" has no 
> column "bar". Re-executing the statement has no effect (i.e. at the code 
> level, recompiling does not help, as it does with SQLITE_SCHEMA). 
> Executing any valid SELECT in process 2 does fix it, as does vacuuming 
> or closing and reopening the connection.
> 
> I couldn't find any mention of this in the current docs. Is this 
> behavior by design?
> 

This is a side effect of the design and is mentioned in the
documentation somewhere, if I recall.  Any schema change
(not just ALTER TABLE, but any change) made in one process is 
not recognized by another process until the other process 
actually tries to read the database file.  Preparing a
statement does *not* cause the database file to be read,
however.  So at step 5 above, process 2 does not know that
the schema has been changed by process 1.  Consequently,
process 2 does not know about the new "bar" column.

You can force process 2 to recognize the "bar" column by
getting it to read something from the database.  For example,
you could execute "SELECT 1 FROM sqlite_master LIMIT 1".
You can sqlite3_prepare() that statement.  When you sqlite3_step()
it, you will get an SQLITE_ERROR back.  Then when you
sqlite3_finalize() you will get the SQLITE_SCHEMA return
code.  You do not care.  At this point, process 2 now
knows that the schema has changed.  So when you go to
sqlite3_prepare() your next statement, it will reread and
reparse the entire schema first.  This will let process
2 know about the new column and the prepare will succeed.
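The recipe above can be sketched with two connections to the same file standing in for the two processes. One caveat: Python's sqlite3 module uses sqlite3_prepare_v2, which retries automatically on SQLITE_SCHEMA, so modern code never sees the raw error; the sketch only illustrates the read-triggered schema reload described here:

```python
import os
import sqlite3
import tempfile

# Two connections to one database file stand in for the two processes.
path = os.path.join(tempfile.mkdtemp(), "schema.db")
p1 = sqlite3.connect(path)
p2 = sqlite3.connect(path)

p1.execute("CREATE TABLE test (foo)")
p2.execute("SELECT * FROM test").fetchall()   # "process 2" caches the schema

p1.execute("ALTER TABLE test ADD bar")        # schema change in "process 1"

# Any read forces "process 2" to notice the change; afterwards the new
# column is usable from that connection too.
p2.execute("SELECT 1 FROM sqlite_master LIMIT 1").fetchall()
p2.execute("INSERT INTO test(bar) VALUES (1)")
p2.commit()

count = p2.execute("SELECT COUNT(*) FROM test WHERE bar = 1").fetchone()[0]
print(count)  # → 1
```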

--
D. Richard Hipp   <[EMAIL PROTECTED]>



Re: [sqlite] Running App state in db?

2006-03-01 Thread Nathaniel Smith
On Thu, Mar 02, 2006 at 12:18:21AM +0100, Elrond wrote:
> I intended to only put the locks as such in the db.
> When it comes to a new lock, I'll select all relevant old
> locks that might conflict, handle the conflict check in
> app logic, and finally insert a new lock record (all inside
> a proper table lock/transaction).
> 
> I just hoped for some cool way to let the db cleanup all
> those records, when the session dies.

I don't know enough about your app to know if this would work, but
if you only have the one app instance managing locks, and locks expire
whenever that app dies... perhaps you can just have that app, on
startup, unconditionally erase all locks in the db?

-- Nathaniel

-- 
Details are all that matters; God dwells there, and you never get to
see Him if you don't struggle to get them right. -- Stephen Jay Gould


Re: [sqlite] Failing Transaction Help.

2006-03-01 Thread Liu Baoliang
I think you could set the pxdb->pBe->inTrans field to 0 to indicate that
the transaction is finished.

2006/3/2, nbiggs <[EMAIL PROTECTED]>:
> I think that I just found my problem.  For some stupid reason, I coded
> it so that if a statement failed, then it stopped executing the rest of
> the statements. Thus leaving the transaction open and causing my
> problems.
> I have since fixed it.
>
> -Original Message-
> From: Rob Lohman [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, March 01, 2006 5:09 PM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Failing Transaction Help.
>
> If I'm not mistaken you still need to close the transaction.
> Are you doing an "end transaction" even if a statement
> fails (ie, a rollback is done)?
>
> - Original Message -
> From: "nbiggs" <[EMAIL PROTECTED]>
> To: 
> Sent: Wednesday, March 01, 2006 10:24 PM
> Subject: [sqlite] Failing Transaction Help.
>
>
> > In my application, I am using a transaction to insert about 10 records
> > at a time.  The problem is that if one of the statements in the
> > transaction fail, commit is not being executed.  When I try creating
> > another transaction, I get a constant error message "can not create a
> > transaction within a transaction".  How do I get around this issue?
> >
> > Nathan Biggs
> > Computerway Food Systems
> > (336) 841-7289
> >
> >
>
>


Re: [sqlite] Running App state in db?

2006-03-01 Thread Elrond
On Wed, Mar 01, 2006 at 12:44:47PM -0600, Jim C. Nasby wrote:
[...]
> Depending on your needs, you might be able to just lock a row for
> updates and hold that lock. IE, open a seperate connection to the
> database and do:
> 
> BEGIN;
> UPDATE process SET start_time = now() WHERE process_id = ?;
> 
> And then 'sit' on that connection until you're done. When you're
> finished, just issue a COMMIT. Note that some databases won't like you
> leaving that transaction open a real long time, so it depends on what
> you're doing if this will work. I also don't know if SQLite cares about
> such things.
[...]

SQLite mostly cares about such things, but it goes the
"lock the whole db" way.

I'd love row-level locks!

I already considered adding some "this_row_locked" column
to a table, just to emulate row-level locking.
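That "this_row_locked" emulation might look like the sketch below (the table layout and helper name are illustrative assumptions): a conditional UPDATE inside a transaction claims the row only if it was free.

```python
import sqlite3

# Hypothetical resource table; the this_row_locked column emulates a
# per-row lock since SQLite itself only locks the whole database.
con = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
con.execute("CREATE TABLE resource "
            "(id INTEGER PRIMARY KEY, this_row_locked INTEGER DEFAULT 0)")
con.execute("INSERT INTO resource VALUES (1, 0)")

def try_lock(con, rid):
    # The UPDATE changes a row only if it was unlocked, so rowcount
    # tells us whether we actually acquired the emulated lock.
    con.execute("BEGIN IMMEDIATE")
    cur = con.execute(
        "UPDATE resource SET this_row_locked = 1 "
        "WHERE id = ? AND this_row_locked = 0", (rid,))
    con.execute("COMMIT")
    return cur.rowcount == 1

got_first = try_lock(con, 1)
got_second = try_lock(con, 1)
print(got_first, got_second)  # → True False
```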


Elrond


Re: [sqlite] Running App state in db?

2006-03-01 Thread Elrond
On Wed, Mar 01, 2006 at 04:00:53PM -0600, Jim C. Nasby wrote:
> On Wed, Mar 01, 2006 at 01:53:45PM -0800, w b wrote:
> > Well, my locking data isn't as simple as "locked, not
> > locked". The resource has ranges that can be locked, and it
> > can be locked for reading (shared) and writing (exclusive).
> > It's not really fun.
> 
> Sounds to me like the best bet is to put the 'resources' into a database
> and let it handle the locking...

Letting the db handle it isn't really an option: for
starters, I can't put every possible range of the resource
into the db.

I intended to only put the locks as such in the db.
When it comes to a new lock, I'll select all relevant old
locks that might conflict, handle the conflict check in
app logic, and finally insert a new lock record (all inside
a proper table lock/transaction).

I just hoped for some cool way to let the db cleanup all
those records, when the session dies.

(not, that I have that now ;) )


Elrond


RE: [sqlite] Failing Transaction Help.

2006-03-01 Thread nbiggs
I think that I just found my problem.  For some stupid reason, I coded
it so that if a statement failed, then it stopped executing the rest of
the statements. Thus leaving the transaction open and causing my
problems.
I have since fixed it.

-Original Message-
From: Rob Lohman [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, March 01, 2006 5:09 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Failing Transaction Help.

If I'm not mistaken you still need to close the transaction.
Are you doing an "end transaction" even if a statement
fails (ie, a rollback is done)?

- Original Message - 
From: "nbiggs" <[EMAIL PROTECTED]>
To: 
Sent: Wednesday, March 01, 2006 10:24 PM
Subject: [sqlite] Failing Transaction Help.


> In my application, I am using a transaction to insert about 10 records
> at a time.  The problem is that if one of the statements in the
> transaction fail, commit is not being executed.  When I try creating
> another transaction, I get a constant error message "can not create a
> transaction within a transaction".  How do I get around this issue?
> 
> Nathan Biggs
> Computerway Food Systems
> (336) 841-7289
> 
>



Re: [sqlite] Failing Transaction Help.

2006-03-01 Thread Rob Lohman

If I'm not mistaken you still need to close the transaction.
Are you doing an "end transaction" even if a statement
fails (ie, a rollback is done)?
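That advice, sketched in Python with manual transaction control (the table and the deliberate constraint failure are made up for illustration): roll back on error so the connection leaves the transaction and a new BEGIN can succeed.

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
con.execute("CREATE TABLE records (id INTEGER PRIMARY KEY)")

rows = [(1,), (2,), (2,)]     # the duplicate id makes the third insert fail
try:
    con.execute("BEGIN")
    for r in rows:
        con.execute("INSERT INTO records VALUES (?)", r)
    con.execute("COMMIT")
except sqlite3.IntegrityError:
    con.execute("ROLLBACK")   # always close the transaction on failure

# The rollback leaves the connection out of the transaction, so a new
# one starts cleanly instead of raising "cannot start a transaction
# within a transaction".
con.execute("BEGIN")
con.execute("INSERT INTO records VALUES (?)", (9,))
con.execute("COMMIT")
count = con.execute("SELECT COUNT(*) FROM records").fetchone()[0]
print(count)  # → 1
```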

- Original Message - 
From: "nbiggs" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, March 01, 2006 10:24 PM
Subject: [sqlite] Failing Transaction Help.



In my application, I am using a transaction to insert about 10 records
at a time.  The problem is that if one of the statements in the
transaction fail, commit is not being executed.  When I try creating
another transaction, I get a constant error message "can not create a
transaction within a transaction".  How do I get around this issue?

Nathan Biggs
Computerway Food Systems
(336) 841-7289




Re: [sqlite] Running App state in db?

2006-03-01 Thread Jim C. Nasby
On Wed, Mar 01, 2006 at 01:53:45PM -0800, w b wrote:
> Well, my locking data isn't as simple as "locked, not
> locked". The resource has ranges that can be locked, and it
> can be locked for reading (shared) and writing (exclusive).
> It's not really fun.

Sounds to me like the best bet is to put the 'resources' into a database
and let it handle the locking...
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461


Re: [sqlite] Running App state in db?

2006-03-01 Thread w b
Elrond.
  
If you don't have any luck with the database way, check out ACE, the Adaptive
Communication Environment. It has wrapped all of the code for the likes of
mutexes etc., so it could save you a bunch of time if you need to go to
option 1, especially across multiple OSes.
  
  http://www.cs.wustl.edu/~schmidt/ACE-overview.html
  
  
  
  

Elrond <[EMAIL PROTECTED]> wrote:  On Wed, Mar 01, 2006 at 01:32:31PM -0600, 
Jim C. Nasby wrote:
> BTW, if you're running everything on a single machine there's lots of
> other ways you can do locking that don't involve the database.
[...]

Well, my locking data isn't as simple as "locked, not
locked". The resource has ranges that can be locked, and it
can be locked for reading (shared) and writing (exclusive).
It's not really fun.

That said, I want the whole fun to work on Unix and
Windows.

So my options are:

1) Write native API code for the job, using shared
   memory/mutexes, or whatnot, for the relevant OS.
2) Find a suitable storage for the structured data.

I'm currently trying (2).


Elrond



[sqlite] Failing Transaction Help.

2006-03-01 Thread nbiggs
In my application, I am using a transaction to insert about 10 records
at a time.  The problem is that if one of the statements in the
transaction fail, commit is not being executed.  When I try creating
another transaction, I get a constant error message "can not create a
transaction within a transaction".  How do I get around this issue?
 
Nathan Biggs
Computerway Food Systems
(336) 841-7289
 


Re: [sqlite] How to secure SQLITE

2006-03-01 Thread Clay Dowling

Hakan VELIOGLU said:
> Hi,
>
> I have a server that users access with ssh to publish their web
> sites. What I want is database support with less effort. So
> SQLite is a very good option for me to decrease the management tasks for
> database support. However, I searched the SQLite web site for security
> topics, but I couldn't find any suggestions.
>
> What I need is some suggestions like don't give 777 permissions to a
> database file. :) Of course this is very simple, but I am a newbie to
> SQLite, so I need your experience.

What are you trying to accomplish with your database, and your database
security?  That said, unless you buy the encrypting version of the
database your only real options for securing a database file are through
filesystem permissions.  Set the filesystem permissions appropriately and
things should be okay.

If the database needs to be accessible from a web app the best solution is
to create the database outside of the web server's document root in a
folder that only the web server and the site owner have permissions to
read, write or execute.  That's folder permissions 770 with the owner or
the group being the web server's owner or group, and the group or the
owner being the user.
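A sketch of that layout in Python (the directory names are illustrative); in practice you would create the directory as the site owner and chgrp it to the web server's group:

```python
import os
import stat
import tempfile

# The database directory lives outside the document root, with 770
# permissions so only the owner and group can enter it.
site = tempfile.mkdtemp()
db_dir = os.path.join(site, "private")     # hypothetical name, not under htdocs/
os.mkdir(db_dir)
os.chmod(db_dir, 0o770)                    # rwx for owner and group, nothing for world

db_path = os.path.join(db_dir, "site.db")
open(db_path, "w").close()
os.chmod(db_path, 0o660)                   # rw for owner and group only

print(oct(stat.S_IMODE(os.stat(db_dir).st_mode)))   # → 0o770
print(oct(stat.S_IMODE(os.stat(db_path).st_mode)))  # → 0o660
```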

From there, keeping miscreants out of the file becomes the domain of the
web application accessing that file. The app writer can give as much or
as little access as desired.

Clay Dowling
-- 
Simple Content Management
http://www.ceamus.com



Re: [sqlite] How to secure SQLITE

2006-03-01 Thread Jay Sprenkle
>
> I have a server that users access with ssh to publish their web
> sites. What I want is database support with less effort. So
> SQLite is a very good option for me to decrease the management tasks for
> database support. However, I searched the SQLite web site for security
> topics, but I couldn't find any suggestions.
>
> What I need is some suggestions like don't give 777 permissions to a
> database file. :) Of course this is very simple, but I am a newbie to
> SQLite, so I need your experience.

The author of sqlite sells an encrypted version:

from: http://www.hwaci.com/sw/sqlite/prosupport.html

3.0 Encrypted Databases

An enhanced version of SQLite is available (for both versions 2.8 and
3.3) that encrypts its database files to help prevent unauthorized
access or modification. The entire database file is encrypted. To an
outside observer, the database file appears to contain white noise.
There is nothing that identifies the file as an SQLite database.

The enhanced SQLite with encryption support can continue to read and
write ordinary unencrypted databases without any performance penalty.
You can use the ATTACH SQL command to attach an encrypted database to
an unencrypted database or to attach an unencrypted database to an
encrypted one. The password to a database can be changed at any time,
though doing so is an expensive operation roughly comparable to
VACUUM.

The encryption extension decrypts each page of data as it is read
from the disk and re-encrypts it as modified versions are written back
to the disk. Both the primary database file and the rollback journal
are encrypted. A very fast encryption algorithm is used, but even so
it takes time to do all of that encryption and decryption. So when
encryption is enabled, there is about a 50% performance loss.

The encrypted database enhancements for SQLite are available in
source-code form for a one-time licensing fee of $2000 (US). A
technical support contract is also recommended but is not required.
There are no per-copy royalties. The one-time fee entitles the
licensee to free copies of all future updates to the code. Call
+1.704.948.4565 or write to [EMAIL PROTECTED] for additional
information.


[sqlite] How to secure SQLITE

2006-03-01 Thread Hakan VELIOGLU

Hi,

I have a server that users access with ssh to publish their web
sites. What I want is database support with less effort. So
SQLite is a very good option for me to decrease the management tasks for
database support. However, I searched the SQLite web site for security
topics, but I couldn't find any suggestions.

What I need is some suggestions like don't give 777 permissions to a
database file. :) Of course this is very simple, but I am a newbie to
SQLite, so I need your experience.

Thanks a lot.


This message was sent using IMP, the Internet Messaging Program.



Re: [sqlite] Running App state in db?

2006-03-01 Thread Elrond
On Wed, Mar 01, 2006 at 01:32:31PM -0600, Jim C. Nasby wrote:
> BTW, if you're running everything on a single machine there's lots of
> other ways you can do locking that don't involve the database.
[...]

Well, my locking data isn't as simple as "locked, not
locked". The resource has ranges that can be locked, and it
can be locked for reading (shared) and writing (exclusive).
It's not really fun.

That said, I want the whole fun to work on Unix and
Windows.

So my options are:

1) Write native API code for the job, using shared
   memory/mutexes, or whatnot, for the relevant OS.
2) Find a suitable storage for the structured data.

I'm currently trying (2).


Elrond


Re: [sqlite] Running App state in db?

2006-03-01 Thread Jim C. Nasby
BTW, if you're running everything on a single machine there's lots of
other ways you can do locking that don't involve the database.

On Wed, Mar 01, 2006 at 11:20:01AM -0800, w b wrote:
> Unfortunately I think that this would lock the whole database within SQLite,
> as there is no row-level locking, so it's probably not the best way to go
> forward, unless all of the other applications are only performing reads?
>
> Some other ideas that might help.
>
> Have a field in one of your tables (maybe a process table, as Jim
> described) that stores the last update time when your main app performed a
> refresh of the data. Your other applications could then infer that if that
> value is greater than some threshold, the data within is old and should
> not be trusted. So your other applications could infer from that that your
> app has crashed. In this case you might not need to clean the DB, as the
> data is effectively implied as being bad given that the last_refresh time
> is outside of your accepted aging window. This assumes that you are
> periodically refreshing the data in there, which sounds like it is the case.
>
> On recovery (restart) of your application, I think the only thing you
> probably don't want to do is go through the recreation of the tables, as
> that would invalidate any prepares that your other applications have done.
> So maybe delete the old data and refresh it (or simply overwrite it). In
> doing so your other applications would then see a new timestamp within the
> accepted threshold range and so could trust that data again.
>
> Wayne
>
> 
> "Jim C. Nasby" <[EMAIL PROTECTED]> wrote:  On Wed, Mar 01, 2006 at 07:38:58PM 
> +0100, Elrond wrote:
> > 
> > Hi,
> > 
> > I'm considering to put the state of a running app into an
> > sqlite db. I want it in a db, so external tools can query
> > it and know, what the app is doing currently.
> > 
> > Any hints on how to clean up the db, when the app crashes?
> > 
> > (I have external resources, that I need to "lock", so the
> > idea is to put the locks in the db, so more than one
> > instance of the app can run and they don't kill the
> > external resource.)
> > 
> > Any hints?
> 
> Depending on your needs, you might be able to just lock a row for
> updates and hold that lock. IE, open a seperate connection to the
> database and do:
> 
> BEGIN;
> UPDATE process SET start_time = now() WHERE process_id = ?;
> 
> And then 'sit' on that connection until you're done. When you're
> finished, just issue a COMMIT. Note that some databases won't like you
> leaving that transaction open a real long time, so it depends on what
> you're doing if this will work. I also don't know if SQLite cares about
> such things.
> -- 
> Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
> Pervasive Software  http://pervasive.comwork: 512-231-6117
> vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461
> 

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461


Re: [sqlite] Running App state in db?

2006-03-01 Thread w b
Unfortunately I think that this would lock the whole database within SQLite,
as there is no row-level locking, so it's probably not the best way to go
forward, unless all of the other applications are only performing reads?

Some other ideas that might help.

Have a field in one of your tables (maybe a process table, as Jim
described) that stores the last update time when your main app performed a
refresh of the data. Your other applications could then infer that if that
value is greater than some threshold, the data within is old and should not
be trusted. So your other applications could infer from that that your app
has crashed. In this case you might not need to clean the DB, as the data is
effectively implied as being bad given that the last_refresh time is outside
of your accepted aging window. This assumes that you are periodically
refreshing the data in there, which sounds like it is the case.

On recovery (restart) of your application, I think the only thing you
probably don't want to do is go through the recreation of the tables, as that
would invalidate any prepares that your other applications have done. So
maybe delete the old data and refresh it (or simply overwrite it). In doing
so your other applications would then see a new timestamp within the accepted
threshold range and so could trust that data again.
  
  Wayne
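Wayne's heartbeat idea could be sketched like this (the process table layout, the freshness window, and the helper name are all assumptions):

```python
import sqlite3
import time

MAX_AGE = 5.0   # seconds before readers stop trusting the data (assumed window)

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE process (id INTEGER PRIMARY KEY, last_refresh REAL)")
con.execute("INSERT INTO process VALUES (1, ?)", (time.time(),))

def data_is_fresh(con, pid, max_age=MAX_AGE):
    # Readers compare the writer's heartbeat against the aging window;
    # a stale timestamp implies the writer crashed and the data is bad.
    row = con.execute(
        "SELECT last_refresh FROM process WHERE id = ?", (pid,)).fetchone()
    return row is not None and (time.time() - row[0]) <= max_age

fresh_before = data_is_fresh(con, 1)

# Simulate a crashed writer by backdating its heartbeat a minute.
con.execute("UPDATE process SET last_refresh = ? WHERE id = 1",
            (time.time() - 60,))
fresh_after = data_is_fresh(con, 1)
print(fresh_before, fresh_after)  # → True False
```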
  

"Jim C. Nasby" <[EMAIL PROTECTED]> wrote:  On Wed, Mar 01, 2006 at 07:38:58PM 
+0100, Elrond wrote:
> 
> Hi,
> 
> I'm considering to put the state of a running app into an
> sqlite db. I want it in a db, so external tools can query
> it and know, what the app is doing currently.
> 
> Any hints on how to clean up the db, when the app crashes?
> 
> (I have external resources, that I need to "lock", so the
> idea is to put the locks in the db, so more than one
> instance of the app can run and they don't kill the
> external resource.)
> 
> Any hints?

Depending on your needs, you might be able to just lock a row for
updates and hold that lock. IE, open a seperate connection to the
database and do:

BEGIN;
UPDATE process SET start_time = now() WHERE process_id = ?;

And then 'sit' on that connection until you're done. When you're
finished, just issue a COMMIT. Note that some databases won't like you
leaving that transaction open a real long time, so it depends on what
you're doing if this will work. I also don't know if SQLite cares about
such things.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [sqlite] Extrange files

2006-03-01 Thread Thomas Chust

On Wed, 1 Mar 2006, RalfJunker wrote:

[...] Wouldn't it be possible to have SQLite thoughtfully delete all files
it creates when closing the database instead of relying on the operating
system? [...]


Hello,

that would probably be possible, but not very efficient, as you would have
to keep a list of formerly used filenames around, associated with the
database handle.

Also, SQLite3 does delete its temporary files, but I think (without having
looked into the source just now) it does so immediately after opening them,
as is good programming practice. I don't think DOS supports this
approach, though.


cu,
Thomas


Re: [sqlite] Running App state in db?

2006-03-01 Thread Jim C. Nasby
On Wed, Mar 01, 2006 at 07:38:58PM +0100, Elrond wrote:
> 
> Hi,
> 
> I'm considering to put the state of a running app into an
> sqlite db. I want it in a db, so external tools can query
> it and know, what the app is doing currently.
> 
> Any hints on how to clean up the db, when the app crashes?
> 
> (I have external resources, that I need to "lock", so the
> idea is to put the locks in the db, so more than one
> instance of the app can run and they don't kill the
> external resource.)
> 
> Any hints?

Depending on your needs, you might be able to just lock a row for
updates and hold that lock. I.e., open a separate connection to the
database and do:

BEGIN;
UPDATE process SET start_time = now() WHERE process_id = ?;

And then 'sit' on that connection until you're done. When you're
finished, just issue a COMMIT. Note that some databases won't like you
leaving that transaction open for a really long time, so whether this
works depends on what you're doing. I also don't know if SQLite cares
about such things.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461


[sqlite] Running App state in db?

2006-03-01 Thread Elrond

Hi,

I'm considering to put the state of a running app into an
sqlite db. I want it in a db, so external tools can query
it and know, what the app is doing currently.

Any hints on how to clean up the db, when the app crashes?

(I have external resources, that I need to "lock", so the
idea is to put the locks in the db, so more than one
instance of the app can run and they don't kill the
external resource.)

Any hints?


Elrond


Re: [sqlite] performance statistics

2006-03-01 Thread Jim C. Nasby
On Wed, Mar 01, 2006 at 09:25:02AM -0500, [EMAIL PROTECTED] wrote:
> I am currently investigating porting my project from postgres to SQLite due
> to anticipated performance issues (we will have to start handling lots more
> data).  My initial speed testing of handling the expanded amount of data has
> suggested that the postgres performance will be unacceptable.  I'm
> convinced that SQLite will solve my performance issues, however, the speed
> comparison data found on the SQLite site (http://www.sqlite.org/speed.html)
> is old.  This is the type of data I need, but I'd like to have more recent
> data to present to my manager, if it is available.  Can anybody point me
> anywhere that may have similar but more recent data?

What tuning have you done to PostgreSQL? The out-of-the-box
postgresql.conf is *VERY* conservative; it's meant to get you up and
running, not provide good performance.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461


Re: [sqlite] performance statistics

2006-03-01 Thread Jim C. Nasby
On Wed, Mar 01, 2006 at 05:42:57PM +0100, Denis Sbragion wrote:
> Hello Andrew,
> 
> On Wed, March 1, 2006 17:31, Andrew Piskorski wrote:
> > Is that in fact true?  I am not familiar with how PostgreSQL
> > implements the SERIALIZABLE isolation level, but I assume that
> > PostgreSQL's MVCC would still give some advantage even under
> > SERIALIZABLE: It should allow the readers and (at least one of) the
> > writers to run concurrently.  Am I mistaken?
> 
> PostgreSQL always played the "readers are never blocked" mantra. Nevertheless
> I really wonder how the strict serializable constraints could be satisfied
> without blocking the readers while a write is in place.

Simple: readers have to handle the possibility that they'll need to
re-run their transaction. From http://lnk.nu/postgresql.org/8gf.html:

 UPDATE, DELETE, SELECT FOR UPDATE, and SELECT FOR SHARE commands behave
 the same as SELECT in terms of searching for target rows: they will
 only find target rows that were committed as of the transaction start
 time. However, such a target row may have already been updated (or
 deleted or locked) by another concurrent transaction by the time it is
 found. In this case, the serializable transaction will wait for the
 first updating transaction to commit or roll back (if it is still in
 progress). If the first updater rolls back, then its effects are
 negated and the serializable transaction can proceed with updating the
 originally found row. But if the first updater commits (and actually
 updated or deleted the row, not just locked it) then the serializable
 transaction will be rolled back with the message

 ERROR:  could not serialize access due to concurrent update

 because a serializable transaction cannot modify or lock rows changed
 by other transactions after the serializable transaction began. 
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461


Re: [sqlite] performance statistics

2006-03-01 Thread Jim C. Nasby
On Wed, Mar 01, 2006 at 05:23:05PM +0100, Denis Sbragion wrote:
> Insert records as "processing by writer", update them to "ready to be
> processed" with a single atomic update after a burst of inserts, update the
> status of all "ready to be processed" records to the "to be processed by
> reader" status with another single atomic update in the reader, process all
> the "to be processed by reader" records, mark all the "to be processed by
> reader" records as "processed" again with a single atomic update when
> finished, if needed delete "processed" records.

FWIW, the performance of that would be pretty bad in most MVCC
databases, because you can't do an update 'in place' (Ok, Oracle can,
but they still have to write both undo and redo log info, so it's
effectively the same as not being 'in place' unless you have a lot of
indexes and you're not touching indexed rows).
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461


Re: [sqlite] Extrange files

2006-03-01 Thread Ralf Junker

>But I do not think that DOS has the ability
>to automatically delete a file when it is closed, so the
>files remain on disk after SQLite has finished with them. 

Just a thought:

Wouldn't it be possible to have SQLite thoughtfully delete all files it creates 
when closing the database instead of relying on the operating system?

I am not sure, but there might be plenty of OSes besides DOS that do not 
automatically delete temporary files.

Ralf 



Re: [sqlite] performance statistics

2006-03-01 Thread drh
Andrew Piskorski <[EMAIL PROTECTED]> wrote:
> On Wed, Mar 01, 2006 at 10:53:12AM -0500, [EMAIL PROTECTED] wrote:
> > If you use READ COMMITTED isolation (the default in PostgreSQL)
> 
> > If it is a problem,
> > then you need to select SERIALIZABLE isolation in PostgreSQL
> > in which case the MVCC is not going to give you any advantage
> > over SQLite.
> 
> Is that in fact true?  I am not familiar with how PostgreSQL
> implements the SERIALIZABLE isolation level, but I assume that
> PostgreSQL's MVCC would still give some advantage even under
> SERIALIZABLE: It should allow the readers and (at least one of) the
> writers to run concurrently.  Am I mistaken?
> 

Well.  On second thought, you might be right.  I guess it
depends on how PostgreSQL implements SERIALIZABLE.  Perhaps
somebody with a better knowledge of the inner workings of
PostgreSQL can answer with more authority.

--
D. Richard Hipp   <[EMAIL PROTECTED]>



Re: [sqlite] performance statistics

2006-03-01 Thread Jim Dodgen
Quoting [EMAIL PROTECTED]:
> 
> I anticipate 2 bottlenecks...
> 
> 1. My anticipated bottleneck under postgres is that the DB-writing app.
> must parse incoming bursts of data and store in the DB.  The machine
> sending this data is seeing a delay in processing.  Debugging has shown
> that the INSERTS (on the order of a few thousand) is where most of the time
> is wasted.

I would wrap the "bursts" in a transaction if you can (begin; and commit; 
statements)

> 
> 2. The other bottleneck is data retrieval.  My DB-reading application must
> read the DB record-by-record (opens a cursor and reads one-by-one), build
> the data into a message according to a system ICD, and ship it out.
> postgres (postmaster) CPU usage is hovering around 85 - 90% at this time.
>

I do a similar thing in my application: I snapshot (copy) the
database (an SQLite database is a single file) and then run my batch process
against the copy.
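The snapshot-then-batch idea can be sketched with Python's sqlite3 module; the online backup API (`Connection.backup`, Python 3.7+) gives a consistent copy even of an in-memory database, and for an on-disk database you could equally copy the file while no writer is active. The table and data below are made up for illustration:

```python
import sqlite3

# Build a small live database (stand-in for the application's DB).
live = sqlite3.connect(":memory:")
live.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
live.executemany("INSERT INTO t (payload) VALUES (?)",
                 [("row%d" % i,) for i in range(5)])
live.commit()

# Snapshot: copy the whole database, then run the batch job against
# the copy so the batch reader never blocks the live writer.
snapshot = sqlite3.connect(":memory:")
live.backup(snapshot)  # consistent page-by-page copy

count = snapshot.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 5
```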
 
> The expansion of data will force me to go from a maximum 3400 row table to
> a maximum of 11560.

My tables are a similar size

> 
> From what I gather in reading about SQLite, it seems to be better equipped
> for performance.  All my testing of the current system points to postgres
> (postmaster) being my bottleneck.
> 
> Jason Alburger
> HID/NAS/LAN Engineer
> L3/ATO-E En Route Peripheral Systems Support
> 609-485-7225
> 
> 
> From: [EMAIL PROTECTED]
> Date: 03/01/2006 09:54 AM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] performance statistics
>
> [EMAIL PROTECTED] wrote:
> >
> > I am currently investigating porting my project from postgres to SQLite
> due
> > to anticipated performance issues
> >
> 
> I do not think speed should really be the prime consideration
> here.  PostgreSQL and SQLite solve very different problems.
> I think you should choose the system that is the best map to
> the problem you are trying to solve.
> 
> PostgreSQL is designed to support a large number of clients
> distributed across multiple machines and accessing a relatively
> large data store that is in a fixed location.  PostgreSQL is
> designed to replace Oracle.
> 
> SQLite is designed to support a smaller number of clients
> all located on the same host computer and accessing a portable
> data store of only a few dozen gigabytes which is easily copied
> or moved.  SQLite is designed to replace fopen().
> 
> Both SQLite and PostgreSQL can be used to solve problems outside
> their primary focus.  And so a high-end use of SQLite will
> certainly overlap a low-end use of PostgreSQL.  But you will
> be happiest if you will use them both for what they were
> originally designed for.
> 
> If you give us some more clues about what your requirements
> are we can give you better guidance about which database might
> be the best choice.
> 
> --
> D. Richard Hipp   <[EMAIL PROTECTED]>
> 
> 







Re: [sqlite] performance statistics

2006-03-01 Thread Denis Sbragion
Hello Andrew,

On Wed, March 1, 2006 17:31, Andrew Piskorski wrote:
> Is that in fact true?  I am not familiar with how PostgreSQL
> implements the SERIALIZABLE isolation level, but I assume that
> PostgreSQL's MVCC would still give some advantage even under
> SERIALIZABLE: It should allow the readers and (at least one of) the
> writers to run concurrently.  Am I mistaken?

PostgreSQL always played the "readers are never blocked" mantra. Nevertheless
I really wonder how the strict serializable constraints could be satisfied
without blocking the readers while a write is in place.

Bye,

-- 
Denis Sbragion
InfoTecna
Tel: +39 0362 805396, Fax: +39 0362 805404
URL: http://www.infotecna.it



Re: [sqlite] performance statistics

2006-03-01 Thread Jay Sprenkle
> My question is not about extending/improving SQLite but about having an
> extra tool which helps to optimize the SQL written for SQLite. So SQLite
> stays indeed lightweight and fast, but the SQL it is fed with is
> automatically optimized.

Like I said, the optimizer tool is the programmer.
In a lot of cases the SQL in a program doesn't change, so the best place
to optimize it would be when the program is designed, not at query time.
If anyone wrote a tool like that, I'm sure it would be useful.


Re: [sqlite] performance statistics

2006-03-01 Thread Andrew Piskorski
On Wed, Mar 01, 2006 at 10:53:12AM -0500, [EMAIL PROTECTED] wrote:
> If you use READ COMMITTED isolation (the default in PostgreSQL)

> If it is a problem,
> then you need to select SERIALIZABLE isolation in PostgreSQL
> in which case the MVCC is not going to give you any advantage
> over SQLite.

Is that in fact true?  I am not familiar with how PostgreSQL
implements the SERIALIZABLE isolation level, but I assume that
PostgreSQL's MVCC would still give some advantage even under
SERIALIZABLE: It should allow the readers and (at least one of) the
writers to run concurrently.  Am I mistaken?

-- 
Andrew Piskorski <[EMAIL PROTECTED]>
http://www.piskorski.com/


Re: [sqlite] performance statistics

2006-03-01 Thread Ran
My question is not about extending/improving SQLite but about having an
extra tool which helps to optimize the SQL written for SQLite. So SQLite
stays indeed lightweight and fast, but the SQL it is fed with is
automatically optimized.

Ran

On 3/1/06, Jay Sprenkle <[EMAIL PROTECTED]> wrote:
>
> On 3/1/06, Ran <[EMAIL PROTECTED]> wrote:
> > In light of your answer, I wonder if it is possible to implement such
> > optimizer that does the hand-optimizing automatically, but of course
> BEFORE
> > they are actually being used by SQLite.
> >
> > So the idea is not to make SQLite optimizer better, but to create a kind
> of
> > SQL optimizer that gets as input SQL statements and gives as output
> > optimized (specifically for SQLite) SQL statements.
>
> I think the concept so far has been that the programmer is the query
> optimizer so it stays fast and lightweight. ;)
>


Re: [sqlite] performance statistics

2006-03-01 Thread Denis Sbragion
Hello DRH,

On Wed, March 1, 2006 16:53, [EMAIL PROTECTED] wrote:
...
> If you use READ COMMITTED isolation (the default in PostgreSQL)
> then your writes are not atomic as seen by the reader.  In other
...
> then you need to select SERIALIZABLE isolation in PostgreSQL
> in which case the MVCC is not going to give you any advantage
> over SQLite.

indeed. Another trick which may be useful, and that we often used in our
applications (which sometimes have similar needs): use an explicit "status"
field to mark each record's state.

1. Insert records as "processing by writer".
2. After a burst of inserts, flip them to "ready to be processed" with a
   single atomic update.
3. In the reader, move all "ready to be processed" records to the "to be
   processed by reader" status with another single atomic update.
4. Process all the "to be processed by reader" records.
5. When finished, mark the "to be processed by reader" records as
   "processed", again with a single atomic update.
6. If needed, delete "processed" records.

This kind of approach requires just an index on the status field and is also
really useful when something goes wrong (application bug, power outage, and so
on), because it becomes pretty easy to reprocess all the unprocessed records
just by looking at the status. The end result should be pretty similar to the
use of temporary tables, but without the need for additional tables.
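A toy sketch of this status-field workflow, using Python's sqlite3 module; the queue table and the shortened status values ('writing', 'ready', 'reading', 'done') are hypothetical stand-ins for the states described above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE queue (
                    id     INTEGER PRIMARY KEY,
                    data   TEXT,
                    status TEXT)""")
# The one index the approach needs: on the status field.
conn.execute("CREATE INDEX queue_status ON queue(status)")

# Writer: insert a burst as 'writing', then flip the whole burst to
# 'ready' with one atomic UPDATE.
with conn:
    conn.executemany(
        "INSERT INTO queue (data, status) VALUES (?, 'writing')",
        [("a",), ("b",), ("c",)])
with conn:
    conn.execute("UPDATE queue SET status='ready' WHERE status='writing'")

# Reader: claim everything that is ready, process it, mark it done.
with conn:
    conn.execute("UPDATE queue SET status='reading' WHERE status='ready'")
rows = conn.execute(
    "SELECT id, data FROM queue WHERE status='reading'").fetchall()
with conn:
    conn.execute("UPDATE queue SET status='done' WHERE status='reading'")

print(len(rows))  # 3
```

After a crash, anything still in 'writing' or 'reading' is exactly the set of records to reprocess.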

Bye,

-- 
Dr. Denis Sbragion
InfoTecna
Tel: +39 0362 805396, Fax: +39 0362 805404
URL: http://www.infotecna.it



Re: [sqlite] performance statistics

2006-03-01 Thread Clay Dowling

[EMAIL PROTECTED] said:
> 1. My anticipated bottleneck under postgres is that the DB-writing app.
> must parse incoming bursts of data and store in the DB.  The machine
> sending this data is seeing a delay in processing.  Debugging has shown
> that the INSERTS (on the order of a few thousand) is where most of the
> time
> is wasted.

Jason,

You might get better performance simply by wrapping the inserts in a
transaction, or wrapping a transaction around a few hundred inserts at a
time.  A transaction is a very expensive operation, and unless you group
your inserts into transactions of several inserts, you pay the transaction
price for each single insert.  That has a devastating impact on
performance no matter what database you're using, so long as it's ACID
compliant.

SQLite is a wonderful tool and absolutely saving my bacon on a current
project, but you can save yourself the trouble of rewriting your database
access by making a slight modification to your code.  This assumes, of
course, that you aren't already using transactions.
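A sketch of grouping a burst of inserts into one transaction, using Python's sqlite3 module (an in-memory database here, so the difference is much smaller than with a file on disk, where each autocommitted INSERT pays a sync to stable storage):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path to see the real commit cost
conn.execute("CREATE TABLE samples (v INTEGER)")

# Without an explicit transaction, each INSERT is its own transaction
# and pays the full commit cost; grouped inserts pay it once.
with conn:  # BEGIN ... COMMIT around the whole burst
    conn.executemany("INSERT INTO samples (v) VALUES (?)",
                     ((i,) for i in range(10_000)))

n = conn.execute("SELECT COUNT(*) FROM samples").fetchone()[0]
print(n)  # 10000
```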

Clay Dowling
-- 
Simple Content Management
http://www.ceamus.com



Re: [sqlite] performance statistics

2006-03-01 Thread Derrell . Lipman
[EMAIL PROTECTED] writes:

> PostgreSQL has a much better query optimizer than SQLite.
> (You can do that when you have a multi-megabyte memory footprint
> budget versus 250KiB for SQLite.)  In your particular case,
> I would guess you could get SQLite to run as fast or faster
> than PostgreSQL by hand-optimizing your admittedly complex
> queries.

In this light, I had a single query that took about 24 *hours* to complete in
sqlite (2.8.x).  I hand optimized the query by breaking it into multiple (14
I think) separate sequential queries which generate temporary tables for the
next query to work with, and building some indexes on the temporary tables.
The 24 hour query was reduced to a few *seconds*.

Query optimization is critical for large queries in sqlite, and sqlite can be
made VERY fast if you take the time to optimize the queries that are taking a
long time to execute.
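The decomposition technique can be sketched like this with Python's sqlite3 module; the two tables and the single-stage "query" below are invented stand-ins for illustration, not the actual 14-stage query described above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER, total REAL);
""")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "east"), (2, "west")])
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 1, 10.0), (2, 1, 20.0), (3, 2, 5.0)])

# Stage 1: materialize an intermediate result into a temporary table
# and index it, instead of forcing one monolithic query plan.
conn.executescript("""
    CREATE TEMP TABLE east_customers AS
        SELECT id FROM customers WHERE region = 'east';
    CREATE INDEX temp.east_idx ON east_customers(id);
""")

# Stage 2: the follow-up query joins against the small indexed subset.
total = conn.execute("""
    SELECT SUM(o.total)
    FROM orders o JOIN east_customers e ON o.customer_id = e.id
""").fetchone()[0]
print(total)  # 30.0
```

Each stage's temp table shrinks the data the next stage must scan, which is where the speedup comes from.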

Derrell


Re: [sqlite] performance statistics

2006-03-01 Thread drh
"Denis Sbragion" <[EMAIL PROTECTED]> wrote:
> Furthermore, with both a reader
> and a writer at the same time, the MVCC "better than row level locking"
> mechanism might provide you better performance than SQLite, but here the
> devil's in the detail.

"D. Richard Hipp" <[EMAIL PROTECTED]> wrote:
> Since PostgreSQL supports READ COMMITTED isolation by default, the
> writer lock will not be a problem there.  But you will have the same
> issue on PostgreSQL if you select SERIALIZABLE isolation.  SQLite only
> does SERIALIZABLE for database connections running in separate
> processes.

To combine and clarify our remarks:

If you use READ COMMITTED isolation (the default in PostgreSQL)
then your writes are not atomic as seen by the reader.  In other
words, if a burst of inserts occurs while a read is in process,
the read might end up seeing some old data from before the burst
and some new data from afterwards.  This may or may not be a
problem for you depending on your application.  If it is a problem,
then you need to select SERIALIZABLE isolation in PostgreSQL
in which case the MVCC is not going to give you any advantage
over SQLite.

--
D. Richard Hipp   <[EMAIL PROTECTED]>



Re: [sqlite] performance statistics

2006-03-01 Thread Jay Sprenkle
On 3/1/06, Ran <[EMAIL PROTECTED]> wrote:
> In light of your answer, I wonder if it is possible to implement such
> optimizer that does the hand-optimizing automatically, but of course BEFORE
> they are actually being used by SQLite.
>
> So the idea is not to make SQLite optimizer better, but to create a kind of
> SQL optimizer that gets as input SQL statements and gives as output
> optimized (specifically for SQLite) SQL statements.

I think the concept so far has been that the programmer is the query
optimizer so it stays fast and lightweight. ;)


Re: [sqlite] performance statistics

2006-03-01 Thread drh
[EMAIL PROTECTED] wrote:
> Well... The database and the applications accessing the database are all
> located on the same machine, so distribution across multiple machines
> doesn't apply here.   The system is designed so that only one application
> handles all the writes to the DB.   Another application handles all the
> reads, and there may be up to two instances of that application running at
> any one time, so I guess that shows a small number of clients.   When the
> application that reads the DB data starts, it reads *all* the data in the
> DB and ships it elsewhere.

I think either SQLite or PostgreSQL would be appropriate here.  I'm
guessing that SQLite will have the speed advantage in this particular
case if you are careful in how you code it up.

> 
> I anticipate 2 bottlenecks...
> 
> 1. My anticipated bottleneck under postgres is that the DB-writing app.
> must parse incoming bursts of data and store in the DB.  The machine
> sending this data is seeing a delay in processing.  Debugging has shown
> that the INSERTS (on the order of a few thousand) is where most of the time
> is wasted.

You will do well to gather your incoming data into a TEMP table then
insert the whole wad into the main database all in one go using
something like this:

INSERT INTO maintable SELECT * FROM temptable;
DELETE FROM temptable;

Actually, this same trick might solve your postgresql performance
problem and thus obviate the need to port your code.
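The two statements above can be exercised from Python's sqlite3 module like so; the table names follow the example and the burst data is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE maintable (id INTEGER PRIMARY KEY, payload TEXT);
    CREATE TEMP TABLE temptable (id INTEGER PRIMARY KEY, payload TEXT);
""")

# Gather an incoming burst into the TEMP table first...
conn.executemany("INSERT INTO temptable VALUES (?, ?)",
                 [(i, "msg%d" % i) for i in range(100)])

# ...then move the whole wad into the main table in one transaction.
with conn:
    conn.execute("INSERT INTO maintable SELECT * FROM temptable")
    conn.execute("DELETE FROM temptable")

n = conn.execute("SELECT COUNT(*) FROM maintable").fetchone()[0]
print(n)  # 100
```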

> 
> 2. The other bottleneck is data retrieval.  My DB-reading application must
> read the DB record-by-record (opens a cursor and reads one-by-one), build
> the data into a message according to a system ICD, and ship it out.
> postgres (postmaster) CPU usage is hovering around 85 - 90% at this time.
> 
> The expansion of data will force me to go from a maximum 3400 row table to
> a maximum of 11560.

Unless each row is particularly large, this is not a very big database
and should not present a problem to either SQLite or PostgreSQL.  Unless
you are doing some kind of strange join that you haven't told us about.

If your data formatting takes a long time, the reader might block the
writer in SQLite.  The writer process will have to wait to do its write
until the reader has finished.  You can avoid this by making a copy of
the data to be read into a temporary table before formatting it:

CREATE TEMP TABLE outbuf AS SELECT * FROM maintable;
SELECT * FROM outbuf;
  -- Do your formatting and sending
DROP TABLE outbuf;

Since PostgreSQL supports READ COMMITTED isolation by default, the
writer lock will not be a problem there.  But you will have the same
issue on PostgreSQL if you select SERIALIZABLE isolation.  SQLite only
does SERIALIZABLE for database connections running in separate
processes.
--
D. Richard Hipp   <[EMAIL PROTECTED]>



Re: [sqlite] performance statistics

2006-03-01 Thread Denis Sbragion
Hello Jason,

On Wed, March 1, 2006 16:20, [EMAIL PROTECTED] wrote:
...
> 1. My anticipated bottleneck under postgres is that the DB-writing app.
> must parse incoming bursts of data and store in the DB.  The machine
> sending this data is seeing a delay in processing.  Debugging has shown
> that the INSERTS (on the order of a few thousand) is where most of the time
> is wasted.
>
> 2. The other bottleneck is data retrieval.  My DB-reading application must
> read the DB record-by-record (opens a cursor and reads one-by-one), build
> the data into a message according to a system ICD, and ship it out.
> postgres (postmaster) CPU usage is hovering around 85 - 90% at this time.
...

Though your application seems a good candidate for SQLite, have you tried
surrounding each burst of inserts and reads with a single transaction? With
PostgreSQL, but also with SQLite, performance might increase dramatically
with proper transaction handling in place. Furthermore, with both a reader
and a writer at the same time, the MVCC "better than row level locking"
mechanism might provide you better performance than SQLite, but here the
devil's in the detail. A lot depends on how much the read and write
operations overlap each other.

Bye,

-- 
Denis Sbragion
InfoTecna
Tel: +39 0362 805396, Fax: +39 0362 805404
URL: http://www.infotecna.it



Re: [sqlite] performance statistics

2006-03-01 Thread Ran
In light of your answer, I wonder if it is possible to implement such
optimizer that does the hand-optimizing automatically, but of course BEFORE
they are actually being used by SQLite.

So the idea is not to make SQLite optimizer better, but to create a kind of
SQL optimizer that gets as input SQL statements and gives as output
optimized (specifically for SQLite) SQL statements.

Ran

On 3/1/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> PostgreSQL has a much better query optimizer than SQLite.
> (You can do that when you have a multi-megabyte memory footprint
> budget versus 250KiB for SQLite.)  In your particular case,
> I would guess you could get SQLite to run as fast or faster
> than PostgreSQL by hand-optimizing your admittedly complex
> queries.
> --
> D. Richard Hipp   <[EMAIL PROTECTED]>
>
>


Re: [sqlite] performance statistics

2006-03-01 Thread Denis Sbragion
Hello Serge,

On Wed, March 1, 2006 16:11, Serge Semashko wrote:
...
> I'm in no way a database expert, but the tests on the benchmarking page
> seem a bit trivial and looks like they only test database API (data
> fetching throughoutput), but not the engine performance. I would like to
> see some benchmarks involving really huge databases and complicated
> queries and wonder if the results will be similar to those I have
> observed...

Those benchmarks target the primary use of SQLite, which isn't the same as
that of other database engines, as perfectly explained by DRH himself. Even
though its performance and rich feature list might make us forget the
intended use of SQLite, we must remember that it is first of all a compact,
lightweight, excellent *embedded* database engine. SQLite simply isn't
designed for huge databases and complicated queries, even though most of the
time it is able to cope with both, being at least a bit more than an fopen()
replacement. Don't be shy, Dr. Hipp! :)

Bye,

-- 
Denis Sbragion
InfoTecna
Tel: +39 0362 805396, Fax: +39 0362 805404
URL: http://www.infotecna.it



Re: [sqlite] performance statistics

2006-03-01 Thread jason . ctr . alburger




Well... The database and the applications accessing the database are all
located on the same machine, so distribution across multiple machines
doesn't apply here. The system is designed so that only one application
handles all the writes to the DB. Another application handles all the
reads, and there may be up to two instances of that application running at
any one time, so I guess that shows a small number of clients. When the
application that reads the DB data starts, it reads *all* the data in the
DB and ships it elsewhere.

I anticipate 2 bottlenecks...

1. My anticipated bottleneck under postgres is that the DB-writing app.
must parse incoming bursts of data and store in the DB.  The machine
sending this data is seeing a delay in processing.  Debugging has shown
that the INSERTS (on the order of a few thousand) is where most of the time
is wasted.

2. The other bottleneck is data retrieval.  My DB-reading application must
read the DB record-by-record (opens a cursor and reads one-by-one), build
the data into a message according to a system ICD, and ship it out.
postgres (postmaster) CPU usage is hovering around 85 - 90% at this time.

The expansion of data will force me to go from a maximum 3400 row table to
a maximum of 11560.

From what I gather in reading about SQLite, it seems to be better equipped
for performance.  All my testing of the current system points to postgres
(postmaster) being my bottleneck.

Jason Alburger
HID/NAS/LAN Engineer
L3/ATO-E En Route Peripheral Systems Support
609-485-7225


   
From: [EMAIL PROTECTED]
Date: 03/01/2006 09:54 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] performance statistics




[EMAIL PROTECTED] wrote:
>
> I am currently investigating porting my project from postgres to SQLite
due
> to anticipated performance issues
>

I do not think speed should really be the prime consideration
here.  PostgreSQL and SQLite solve very different problems.
I think you should choose the system that is the best map to
the problem you are trying to solve.

PostgreSQL is designed to support a large number of clients
distributed across multiple machines and accessing a relatively
large data store that is in a fixed location.  PostgreSQL is
designed to replace Oracle.

SQLite is designed to support a smaller number of clients
all located on the same host computer and accessing a portable
data store of only a few dozen gigabytes which is easily copied
or moved.  SQLite is designed to replace fopen().

Both SQLite and PostgreSQL can be used to solve problems outside
their primary focus.  And so a high-end use of SQLite will
certainly overlap a low-end use of PostgreSQL.  But you will
be happiest if you will use them both for what they were
originally designed for.

If you give us some more clues about what your requirements
are we can give you better guidance about which database might
be the best choice.

--
D. Richard Hipp   <[EMAIL PROTECTED]>



RE: [sqlite] Extrange files

2006-03-01 Thread Josep Lluís Vaquer
It works

You're great! Thanks a lot 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 1 March 2006 16:03
To: sqlite-users@sqlite.org; [EMAIL PROTECTED]
Subject: Re: [sqlite] Extrange files

Josep Lluís Vaquer <[EMAIL PROTECTED]> wrote:
> I'm running DOS, Sqlite version 3.2.8. Attached example files
> 

The mailing list handler automatically strips attachments so
they did not get through.

The files you are seeing are probably temporary databases
that SQLite creates to support its internal processing.
Such temporary databases are automatically deleted on unix
and windows.  But I do not think that DOS has the ability
to automatically delete a file when it is closed, so the
files remain on disk after SQLite has finished with them.

To fix this, you might consider compiling SQLite so that
it stores temporary databases in memory rather than on disk.
Do so with this compile-time option:

-DTEMP_STORE=2

You can do this at runtime using a pragma:

PRAGMA temp_store=MEMORY;

--
D. Richard Hipp   <[EMAIL PROTECTED]>


--
This message has been scanned by ABServer Antivirus System
http://www.abserver.es/antivirus/panel.htm





Re: [sqlite] performance statistics

2006-03-01 Thread drh
Serge Semashko <[EMAIL PROTECTED]> wrote:
> We started with using sqlite3, but the database has grown now to
> something like 1GB and has millions of rows. It does not perform as fast
> as we would like, so we looked for alternatives. We tried to convert
> it to both mysql and postgresql and tried to run the same query we are
> using quite often (the query is rather big and contains a lot of
> conditions, but it extracts only about a hundred matching rows). The
> result was a bit surprising. Mysql just locked down and could not
> provide any results. After killing it, increasing memory limits in its
> configuration to use all the available memory, it managed to complete
> the query but was still slower than sqlite3 (lost about 30%). Postgresql
> on the other hand was a really nice surprise and it was several times
> faster than sqlite3! Now we are converting to postgresql :)
> 

PostgreSQL has a much better query optimizer than SQLite.
(You can do that when you have a multi-megabyte memory footprint
budget versus 250KiB for SQLite.)  In your particular case,
I would guess you could get SQLite to run as fast or faster
than PostgreSQL by hand-optimizing your admittedly complex
queries.
--
D. Richard Hipp   <[EMAIL PROTECTED]>



Re: [sqlite] Extrange files

2006-03-01 Thread drh
Josep Lluís Vaquer <[EMAIL PROTECTED]> wrote:
> I'm running DOS, Sqlite version 3.2.8. Attached example files 
> 

The mailing list handler automatically strips attachments so
they did not get through.

The files you are seeing are probably temporary databases
that SQLite creates to support its internal processing.
Such temporary databases are automatically deleted on unix
and windows.  But I do not think that DOS has the ability
to automatically delete a file when it is closed, so the
files remain on disk after SQLite has finished with them.

To fix this, you might consider compiling SQLite so that
it stores temporary databases in memory rather than on disk.
Do so with this compile-time option:

-DTEMP_STORE=2

You can do this at runtime using a pragma:

PRAGMA temp_store=MEMORY;

--
D. Richard Hipp   <[EMAIL PROTECTED]>
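A quick sketch of the runtime approach described above (using Python's bundled sqlite3 module; the in-memory database is just a stand-in). The pragma has to be issued on each connection before any temporary storage is used:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Keep temporary tables and indices in memory instead of on disk.
conn.execute("PRAGMA temp_store=MEMORY;")

# Reading the pragma back: 0 = DEFAULT, 1 = FILE, 2 = MEMORY.
mode = conn.execute("PRAGMA temp_store;").fetchone()[0]
print(mode)  # 2
```

The compile-time option sets the default instead, so no per-connection pragma is needed.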



Re: [sqlite] performance statistics

2006-03-01 Thread Serge Semashko

[EMAIL PROTECTED] wrote:

I am currently investigating porting my project from postgres to
SQLite due to anticipated performance issues (we will have to start
handling lots more data).  My initial speed testing of handling the
expanded amount of data has suggested that the postgres performance
will be unacceptable.  I'm convinced that SQLite will solve my
performance issues, however, the speed comparison data found on the
SQLite site (http://www.sqlite.org/speed.html) is old.  This is the
type of data I need, but I'd like to have more recent data to present
to my manager, if it is available.  Can anybody point me anywhere
that may have similar but more recent data?

Thanks in advance!

Jason Alburger HID/NAS/LAN Engineer L3/ATO-E En Route Peripheral 
Systems Support 609-485-7225


Actually I have quite the opposite experience :)

We started with using sqlite3, but the database has grown now to
something like 1GB and has millions of rows. It does not perform as fast
as we would like, so we looked for alternatives. We tried to convert
it to both mysql and postgresql and tried to run the same query we are
using quite often (the query is rather big and contains a lot of
conditions, but it extracts only about a hundred matching rows). The
result was a bit surprising. Mysql just locked up and could not
provide any results. After killing it, increasing memory limits in its
configuration to use all the available memory, it managed to complete
the query but was still slower than sqlite3 (lost about 30%). Postgresql
on the other hand was a really nice surprise and it was several times
faster than sqlite3! Now we are converting to postgresql :)

I'm in no way a database expert, but the tests on the benchmarking page
seem a bit trivial and look like they only test the database API (data
fetching throughput), not the engine performance. I would like to
see some benchmarks involving really huge databases and complicated
queries, and I wonder whether the results would be similar to those I have
observed...
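A rough sketch of the kind of engine-level measurement being asked for here (using Python's bundled sqlite3 module; the table, data, and query are hypothetical stand-ins for a real workload): time a selective query against a reasonably large table rather than just raw row fetching.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 ((i, i % 97) for i in range(100000)))
conn.execute("CREATE INDEX idx_t_b ON t (b)")

# Time a query with multiple conditions, which exercises the query
# engine rather than only the data-fetching API.
start = time.perf_counter()
rows = conn.execute("SELECT a FROM t WHERE b = 42 AND a > 50000").fetchall()
elapsed = time.perf_counter() - start
print(len(rows), f"{elapsed:.4f}s")
```

Running the same script against several engines (via their respective drivers) gives a comparison tailored to one's own schema and queries, which is more meaningful than any generic benchmark page.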





Re: [sqlite] performance statistics

2006-03-01 Thread drh
[EMAIL PROTECTED] wrote:
> 
> I am currently investigating porting my project from postgres to SQLite due
> to anticipated performance issues
>

I do not think speed should really be the prime consideration
here.  PostgreSQL and SQLite solve very different problems.
I think you should choose the system that best maps to
the problem you are trying to solve.

PostgreSQL is designed to support a large number of clients
distributed across multiple machines and accessing a relatively
large data store that is in a fixed location.  PostgreSQL is
designed to replace Oracle.

SQLite is designed to support a smaller number of clients
all located on the same host computer and accessing a portable
data store of only a few dozen gigabytes which is easily copied
or moved.  SQLite is designed to replace fopen().

Both SQLite and PostgreSQL can be used to solve problems outside
their primary focus.  And so a high-end use of SQLite will
certainly overlap a low-end use of PostgreSQL.  But you will 
be happiest if you will use them both for what they were
originally designed for.

If you give us some more clues about what your requirements
are we can give you better guidance about which database might
be the best choice.

--
D. Richard Hipp   <[EMAIL PROTECTED]>



RE: [sqlite] Extrange files

2006-03-01 Thread Josep Lluís Vaquer
I'm running DOS, Sqlite version 3.2.8. Attached example files 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 1 March 2006 13:32
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Extrange files

Josep Lluís Vaquer <[EMAIL PROTECTED]> wrote:
> Hi, I'm new to Sqlite.
>  
> I'm trying to develop an application using Sqlite and it started to 
> work fine. I did a relatively simple schema and built a few views for 
> easy access. The problem started when I tried to access one of those 
> views (only one of them). Each select I run against this view 
> creates a couple of files in the main directory named SQLITE_ followed by
> a number or a letter.
> When a few of those apparently empty files are in the directory, 
> Sqlite stops working.
>  
> My question is: why are those files generated? And what can I do to 
> solve this problem?
>  

What OS are you running?  What version of SQLite are you using?
Can you show me specific examples of the name of the files that are being
generated?
--
D. Richard Hipp   <[EMAIL PROTECTED]>




Re: [sqlite] performance statistics

2006-03-01 Thread Jay Sprenkle
> All -
>
> I am currently investigating porting my project from postgres to SQLite due
> to anticipated performance issues (we will have to start handling lots more
> data).  My initial speed testing of handling the expanded amount of data has
> suggested that the postgres performance will be unacceptable.  I'm
> convinced that SQLite will solve my performance issues, however, the speed
> comparison data found on the SQLite site (http://www.sqlite.org/speed.html)
> is old.  This is the type of data I need, but I'd like to have more recent
> data to present to my manager, if it is available.  Can anybody point me
> anywhere that may have similar but more recent data?

This might be valuable for you:
http://sqlite.phxsoftware.com/forums/9/ShowForum.aspx


[sqlite] performance statistics

2006-03-01 Thread jason . ctr . alburger




All -

I am currently investigating porting my project from postgres to SQLite due
to anticipated performance issues (we will have to start handling lots more
data).  My initial speed testing of handling the expanded amount of data has
suggested that the postgres performance will be unacceptable.  I'm
convinced that SQLite will solve my performance issues, however, the speed
comparison data found on the SQLite site (http://www.sqlite.org/speed.html)
is old.  This is the type of data I need, but I'd like to have more recent
data to present to my manager, if it is available.  Can anybody point me
anywhere that may have similar but more recent data?

Thanks in advance!

Jason Alburger
HID/NAS/LAN Engineer
L3/ATO-E En Route Peripheral Systems Support
609-485-7225


Re: [sqlite] Extrange files

2006-03-01 Thread drh
Josep Lluís Vaquer <[EMAIL PROTECTED]> wrote:
> Hi, I'm new to Sqlite.
>  
> I'm trying to develop an application using Sqlite and it started to work
> fine. I did a relatively simple schema and built a few views for easy
> access. The problem started when I tried to access one of those views
> (only one of them). Each select I run against this view creates a couple of
> files in the main directory named SQLITE_ followed by a number or a letter.
> When a few of those apparently empty files are in the directory, Sqlite
> stops working. 
>  
> My question is: why are those files generated? And what can I do to solve
> this problem? 
>  

What OS are you running?  What version of SQLite are you using?
Can you show me specific examples of the name of the files that
are being generated?
--
D. Richard Hipp   <[EMAIL PROTECTED]>



[sqlite] Extrange files

2006-03-01 Thread Josep Lluís Vaquer
Hi, I'm new to Sqlite.
 
I'm trying to develop an application using Sqlite and it started to work
fine. I did a relatively simple schema and built a few views for easy
access. The problem started when I tried to access one of those views
(only one of them). Each select I run against this view creates a couple of
files in the main directory named SQLITE_ followed by a number or a letter.
When a few of those apparently empty files are in the directory, Sqlite
stops working. 
 
My question is: why are those files generated? And what can I do to solve
this problem? 
 
I'd appreciate any help
 
 
 
JOSEP LLUÍS VAQUER 
R Department 
[EMAIL PROTECTED] 
Tel. (0034) 977 84 56 11 
FAX. (0034) 977 84 57 56 
Grupo Turomas