Re: [sqlite] Client/Server Best Practices

2015-01-16 Thread James K. Lowden
On Sat, 10 Jan 2015 00:58:25 -0700
Keith Medcalf kmedc...@dessus.com wrote:

  there's no way to hook two SELECTs together to make them see one
 database state. That's what JOIN is for.  :-)
 
 Yes, it is a part of the SQL Standard isolation levels in excess of
 the default default of READ COMMITTED.  Different vendors call the
 isolation levels by different names -- MS SQL Server calls them Read
 Uncommitted, Read Committed, Repeatable Read, Snapshot, and
 Serializable; DB/2 calls them No Commit (similar to autocommit),
 Uncommitted Read, Cursor Stability, Read Stability, Repeatable Read.
 DB/2 Repeatable Read = MS Serializable (which is the SQL Standard
 Serializable isolation level).  Cursor Stability = Read Committed.

Keith, I just want to thank you for relieving me of my
misunderstanding.  I think some years ago I must have decided that for
my purposes Repeatable Read & friends don't exist because of their
effect on performance and concurrency.  I've always thought the
standard was paradoxical in the sense that the more you need isolation
(because of concurrent updates), the less useful it is to the extent
that it restricts the system's ability to support concurrent updates.

To bring it back to SQLite, we have two logging modes and two
isolation levels.  IIUC, given two connections it can be boiled down
thusly:

mode     transaction  isolation
journal  implicit     Read Committed
WAL      implicit     Read Committed
journal  explicit     Repeatable Read
WAL      explicit     Repeatable Read

with the difference being concurrency: 

* in journal mode a writer is blocked by a reader, whereas 
* in WAL mode a writer can commit while a reader continues to work,
in isolation, unaffected.  
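
A minimal C sketch of the WAL case (illustrative only; it assumes a table t
already exists in test.db and omits error checking):

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void) {
        sqlite3 *reader, *writer;
        sqlite3_stmt *stmt;

        sqlite3_open("test.db", &reader);
        sqlite3_open("test.db", &writer);
        sqlite3_exec(reader, "PRAGMA journal_mode=WAL;", 0, 0, 0);

        /* The reader opens an explicit transaction; the first SELECT
           establishes its read snapshot. */
        sqlite3_exec(reader, "BEGIN;", 0, 0, 0);
        sqlite3_prepare_v2(reader, "SELECT count(*) FROM t;", -1, &stmt, 0);
        sqlite3_step(stmt);
        printf("before writer commit: %d rows\n", sqlite3_column_int(stmt, 0));
        sqlite3_finalize(stmt);

        /* The writer inserts and commits on its own connection.  In WAL
           mode this succeeds even though the reader's transaction is open. */
        sqlite3_exec(writer, "INSERT INTO t VALUES (1);", 0, 0, 0);

        /* A second SELECT in the same read transaction still sees the
           original snapshot: the count is unchanged. */
        sqlite3_prepare_v2(reader, "SELECT count(*) FROM t;", -1, &stmt, 0);
        sqlite3_step(stmt);
        printf("after writer commit:  %d rows\n", sqlite3_column_int(stmt, 0));
        sqlite3_finalize(stmt);
        sqlite3_exec(reader, "COMMIT;", 0, 0, 0);

        sqlite3_close(reader);
        sqlite3_close(writer);
        return 0;
    }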

--jkl



Re: [sqlite] Client/Server Best Practices

2015-01-16 Thread Richard Hipp
On 1/16/15, James K. Lowden jklow...@schemamania.org wrote:
 On Sat, 10 Jan 2015 00:58:25 -0700
 Keith Medcalf kmedc...@dessus.com wrote:

  there's no way to hook two SELECTs together to make them see one
 database state. That's what JOIN is for.  :-)

 Yes, it is a part of the SQL Standard isolation levels in excess of
 the default default of READ COMMITTED.  Different vendors call the
 isolation levels by different names -- MS SQL Server calls them Read
 Uncomitted, Read Committed, Repeatable Read, Snapshot, and
 Serializable; DB/2 calls them No Commit (similar to autocommit),
 Uncommitted Read, Cursor Stability, Read Stability, Repeatable Read.
 DB/2 Repeatable Read = MS Serializable (which is the SQL Standard
 Serializable isolation level).  Cursor Stability = Read Committed.

 Keith, I just want to thank you for relieving me of my
 misunderstanding.  I think some years ago I must have decided that for
 my purposes Repeatable Read  friends don't exist because of their
 effect on performance and concurrency.  I've always thought the
 standard was paradoxical in the sense that the more you need isolation
 (because of concurrent updates), the less useful it is to the extent
 that restricts the system's abiltity to support concurrent updates.

 To bring it back to SQLite, we have two logging modes and two
 isolation levels.  IIUC, given two connections it can be boiled down
 thusly:

 mode     transaction  isolation
 journal  implicit     Read Committed
 WAL      implicit     Read Committed
 journal  explicit     Repeatable Read
 WAL      explicit     Repeatable Read

I think the isolation in SQLite is always Serializable.

Except if you have two connections in the same process in shared cache
mode and they set PRAGMA read_uncommitted=ON then the isolation is
Read Uncommitted.  But otherwise, isolation in SQLite is always
Serializable.
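
A minimal C sketch of that shared-cache exception (illustrative only; it
assumes a table t exists in test.db and omits error checking):

    #include <sqlite3.h>

    int main(void) {
        sqlite3 *a, *b;

        /* Both connections must be in the same process and use the
           shared cache for read_uncommitted to have any effect. */
        sqlite3_enable_shared_cache(1);
        sqlite3_open_v2("test.db", &a,
                        SQLITE_OPEN_READWRITE | SQLITE_OPEN_SHAREDCACHE, 0);
        sqlite3_open_v2("test.db", &b,
                        SQLITE_OPEN_READWRITE | SQLITE_OPEN_SHAREDCACHE, 0);

        /* Connection b opts out of the usual isolation: its reads may
           see rows written, but not yet committed, by connection a. */
        sqlite3_exec(b, "PRAGMA read_uncommitted=ON;", 0, 0, 0);

        sqlite3_exec(a, "BEGIN; INSERT INTO t VALUES (1);", 0, 0, 0);
        /* A SELECT on b at this point can already see the new row. */
        sqlite3_exec(a, "COMMIT;", 0, 0, 0);

        sqlite3_close(a);
        sqlite3_close(b);
        return 0;
    }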



 with the difference being concurrency:

 * in journal mode a writer is blocked by a reader, whereas
 * in WAL mode a writer can commit while a reader continues to work,
 in isolation, unaffected.

 --jkl




-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Client/Server Best Practices

2015-01-09 Thread Keith Medcalf

On Friday, 9 January, 2015 16:43, James K. Lowden jklow...@schemamania.org 
said:
On Wed, 07 Jan 2015 21:47:24 -0700 Keith Medcalf kmedc...@dessus.com wrote:

Along the same lines, since you mentioned it,

  in an SQLite database you can do:

 BEGIN
 SELECT ...
 SELECT ...
 SELECT ...
 SELECT ...
 COMMIT

 and the view of the database seen by this connection will be
 consistent even though some other process modified the tables used
 in query 3 while query 2 was executing.  Even if those changes are
 commited by the writer process, the above process will not see them
 until the COMMIT releases the locks.  I expect other databases do
 this as well.

In other DBMSs that I know of -- and afaik standard SQL -- BEGIN
TRANSACTION has no effect on SELECT. Each SELECT is atomic, of course,
but there's no way to hook two SELECTs together to make them see one
database state. That's what JOIN is for.  :-)

Yes, it is a part of the SQL Standard isolation levels in excess of the default 
default of READ COMMITTED.  Different vendors call the isolation levels by 
different names -- MS SQL Server calls them Read Uncommitted, Read Committed, 
Repeatable Read, Snapshot, and Serializable; DB/2 calls them No Commit (similar 
to autocommit), Uncommitted Read, Cursor Stability, Read Stability, Repeatable 
Read.  DB/2 Repeatable Read = MS Serializable (which is the SQL Standard 
Serializable isolation level).  Cursor Stability = Read Committed.

These describe the isolation to be maintained by simultaneous Units of Work.  
The standard Read Committed/Cursor Stability level does not isolate the unit of 
work from changes made (and committed) by other units of work.  The higher 
levels do.  For example, if a unit of work is executed with Read Committed the 
two select statements may see changes to the database which were committed on 
another unit of work.  With Serializable / Repeatable Read, those changes will 
not be visible.  In fact, the database will not permit the updates to rows that 
have been read at the Serializable / Repeatable Read level.

http://publib.boulder.ibm.com/iseries/v5r1/ic2924/info/db2/rbafzmstisol.htm
http://msdn.microsoft.com/en-us/library/ms173763.aspx

A common error among inexperienced programmers is to assume that BEGIN
TRANSACTION works as you describe above: that it isolates SELECT from
competing updates.  But a transaction is not a lock!  A better mental
model is the one defined in the standard: BEGIN TRANSACTION defines a
unit of work that either will or will not be applied to the database in
its entirety (depending on COMMIT or ROLLBACK, of course).  It has a
role to play in maintaining database integrity.  It really has nothing
to do with isolation.

Indeed it does.  It is up to the programmer to set the isolation levels 
required and then set the transaction (unit of work) boundaries correctly.  
Most only know or use Read Committed / Cursor Stability, which is usually the 
default level and which works as you describe.

SQLite in standard journal mode uses read committed/cursor stability 
isolation between connections to the same database.  I have double checked, and 
if you surround multiple selects with explicit begin/end, then that unit of 
work cannot be interrupted by updates and hence you then have Repeatable Read 
isolation.  Whether or not this is intended it must work this way because the 
introduction of a unit of work means that there is intent to update at some 
point and the database, within the unit of work, must maintain cursor stability 
(at least) until an actual update occurs which will escalate the shared lock to 
an exclusive lock.  In fact, if you do this, you will be unable to commit the 
update transaction until the other unit of work is completed (even if it only 
contains select statements).
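
A small C sketch of the rollback-journal case just described (illustrative
only; it assumes a table t exists and that no busy handler is set):

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void) {
        sqlite3 *reader, *writer;
        sqlite3_stmt *stmt;
        int rc;

        sqlite3_open("test.db", &reader);   /* default rollback-journal mode */
        sqlite3_open("test.db", &writer);

        /* Explicit transaction: the shared lock taken by the first SELECT
           is held until COMMIT instead of being released per statement. */
        sqlite3_exec(reader, "BEGIN;", 0, 0, 0);
        sqlite3_prepare_v2(reader, "SELECT count(*) FROM t;", -1, &stmt, 0);
        sqlite3_step(stmt);
        sqlite3_finalize(stmt);

        /* The writer can start its transaction and modify pages, but the
           final COMMIT needs an exclusive lock and returns SQLITE_BUSY
           while the reader's shared lock is still outstanding. */
        sqlite3_exec(writer, "BEGIN IMMEDIATE; INSERT INTO t VALUES (1);",
                     0, 0, 0);
        rc = sqlite3_exec(writer, "COMMIT;", 0, 0, 0);
        printf("COMMIT with reader still open: rc=%d (SQLITE_BUSY=%d)\n",
               rc, SQLITE_BUSY);

        sqlite3_exec(reader, "COMMIT;", 0, 0, 0);  /* reader's unit of work ends */
        rc = sqlite3_exec(writer, "COMMIT;", 0, 0, 0);
        printf("COMMIT after reader finished: rc=%d\n", rc);

        sqlite3_close(reader);
        sqlite3_close(writer);
        return 0;
    }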

In WAL mode the transaction mode is always snapshot between units of work on 
separate connections, but only one connection's unit of work can convert to an 
update at a time.  Other connections doing just selects between a BEGIN and a 
COMMIT will continue to see the same snapshot even if another connection 
commits database changes.  It appears that sometimes a concurrent snapshot can 
escalate to an update lock once the write lock is released by another 
connection, and sometimes it cannot -- if it is at the head of the uncommitted 
WAL, it can; if it is not, it cannot.

Interesting discussion, Keith.  Thanks for the clarification.

---
Theory is when you know everything but nothing works.  Practice is when 
everything works but no one knows why.  Sometimes theory and practice are 
combined:  nothing works and no one knows why.






Re: [sqlite] Client/Server Best Practices

2015-01-09 Thread James K. Lowden
On Wed, 07 Jan 2015 21:47:24 -0700
Keith Medcalf kmedc...@dessus.com wrote:

 As I said, your description (which I trust is accurate) is very
 helpful to someone who wants to understand how SQLite will act on
 the SQL provided to it.  But it also protrays problematic choices
 that stray from SQL's defined behavior.
 
 Not really.  It only shows that updating the database on the same
 connection as is being used to run a select, and attempting to commit
 the update before the select is complete results in chaos.  I think
 you would agree that the sequence:
 
 _prepare('select ...')
 do while _step() != DONE
_step(_prepare('BEGIN'))
_step(_prepare('UPDATE ...'))
_step(_prepare('COMMIT'))
continue
 
 should not be valid.  The BEGIN COMMIT should be moved outside the
 loop unless the connection on which the select is processed is
 separate from that on which the BEGIN/UPDATE/COMMIT is performed.

Yes, I agree that shouldn't be valid.  Or, if it is, then it should
work correctly!  ;-)   
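
For contrast, a sketch of the corrected shape (illustrative only; db is an
already-open connection, and the table and column names are invented):

    /* Same connection: start the transaction before the SELECT loop and
       commit once, after the loop has finished. */
    sqlite3_stmt *sel, *upd;
    sqlite3_prepare_v2(db, "SELECT id FROM t;", -1, &sel, 0);
    sqlite3_prepare_v2(db, "UPDATE tally SET hits = hits + 1 WHERE id = ?;",
                       -1, &upd, 0);

    sqlite3_exec(db, "BEGIN;", 0, 0, 0);
    while (sqlite3_step(sel) == SQLITE_ROW) {
        sqlite3_bind_int(upd, 1, sqlite3_column_int(sel, 0));
        sqlite3_step(upd);
        sqlite3_reset(upd);
    }
    sqlite3_exec(db, "COMMIT;", 0, 0, 0);   /* one COMMIT, outside the loop */
    sqlite3_finalize(sel);
    sqlite3_finalize(upd);

    /* Alternatively, run the SELECT on one connection and do the
       BEGIN/UPDATE/COMMIT on a second connection, so the COMMIT never
       pulls the lock out from under the running SELECT. */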

Thanks for the clarification.  The salient point, as you emphasized, is

 There is only one connection.  

I tend to forget that SQLite behaves oddly sometimes when misused.  We
had a discussion months ago about approximately the same thing, where
threads were sharing a connection.  

Sybase works similarly.  The server insists that each query be handled
in its entirety before it accepts the next.  Attempting to issue an
UPDATE per your example above would result in the dread pending
results error.  

Along the same lines, since you mentioned it, 

  in an SQLite database you can do:
 
 BEGIN
 SELECT ...
 SELECT ...
 SELECT ...
 SELECT ...
 COMMIT
 
 and the view of the database seen by this connection will be
 consistent even though some other process modified the tables used
 in query 3 while query 2 was executing.  Even if those changes are
 commited by the writer process, the above process will not see them
 until the COMMIT releases the locks.  I expect other databases do
 this as well.  

In other DBMSs that I know of -- and afaik standard SQL -- BEGIN
TRANSACTION has no effect on SELECT. Each SELECT is atomic, of course,
but there's no way to hook two SELECTs together to make them see one
database state. That's what JOIN is for.  :-)  

A common error among inexperienced programmers is to assume that BEGIN
TRANSACTION works as you describe above: that it isolates SELECT from
competing updates.  But a transaction is not a lock!  A better mental
model is the one defined in the standard: BEGIN TRANSACTION defines a
unit of work that either will or will not be applied to the database in
its entirety (depending on COMMIT or ROLLBACK, of course).  It has a
role to play in maintaining database integrity.  It really has nothing
to do with isolation.  

Interesting discussion, Keith.  Thanks for the clarification.  

--jkl


Re: [sqlite] Client/Server Best Practices

2015-01-09 Thread Rick Kelly
Thanks for all the sage observations. As a server with a connection pool, I'm
avoiding some of the issues brought up. Every thread has its own connection
handle and, when done, it goes back into the pool, so there is no sharing. I
spent some hours reading everything I could find about this product and
concurrency issues, and my goal is ultimately to avoid busy and lockout
errors. This library is quite fast and performance isn't going to be an
issue. After I put the backup API in the server code base, I'll hit it with
several thousand threads, mixed reads and writes, as fast as I can create
them, both with the backup API active and without, and see what the session
log records.

Rick Kelly





Re: [sqlite] Client/Server Best Practices

2015-01-07 Thread James K. Lowden
On Fri, 02 Jan 2015 21:41:02 -0700
Keith Medcalf kmedc...@dessus.com wrote:
 On Friday, 2 January, 2015 16:26, James K. Lowden
 jklow...@schemamania.org said:
 
 On Thu, 25 Dec 2014 05:32:45 -0700 (MST) Rick Kelly
 rpke...@gci.net wrote:
 
  All SELECT type requests are wrapped with BEGIN TRANSACTION/COMMIT
 
 That shouldn't be necessary and afaik isn't necessary.  SELECT does
 not modify the database.  To commit a select is to apply the
 nonchanges.
 
 It does not matter whether it modifies the database.  reading the
 database requires a lock -- a shared lock.  updating the database
 requires a write lock, which precludes obtaining a shared lock.

When I first read your answer my reaction was, yes, yes, of course.
But I'm not sure where that leaves the OP.  Are you suggesting select
statements work better in some sense with autocommit turned off?  

In passing I suggest *requires* is not true in general.  It may be that
SQLite and other implementations use locks to mediate access and
implement ACID semantics.  Locks are just one popular way to accomplish
that, not the only one.  

Your description of transaction implementation is illuminating.  If I
understand correctly, you're describing an odd set of
design choices.  

 BEGIN TRANSACTION does not acquire a lock -- BEGIN IMMEDIATE does
 that -- BEGIN TRANSACTION merely turns off autocommit, meaning that
 the lock will not be released magically, but rather by an explicit
 COMMIT (which itself does not do anything -- it merely turns
 autocommit back on so that the next statement will commit the
 transaction before magically acquiring a new lock).  

I find this very peculiar.  You aren't saying, are you, that

begin transaction;
insert into T values (1);
commit;
[sqlite3_close]

leaves the database unchanged (because there's no next statement)?
If not, and there's a power outage between (successful) commit and
closing the connection, what will be the state of the database on
restart?  Is the transaction still open or, if not, is it rolled
forward and completed, or rolled back?  

 However, execution of a SELECT statement does cause a lock to be
 obtained (a shared lock) and a COMMIT does cause that shared lock to
 be released.  

Again, I find this surprising.  I would expect SELECT to 

1.  establish a shared lock
2.  select the data
3.  release the lock

whether or not BEGIN is called.  If I understand what commit does per
your description above, in a transaction, the effect would be

1.  BEGIN TRANSACTION (autocommit off)
2.  SELECT (take shared lock)
3.  data data data
4.  [SELECT  done] (release shared lock)
5.  COMMIT (autocommit on)

which leaves steps #1 and #5 redundant.  

 Executing an UPDATE after a SELECT -- in the same connection -- (or
 while a select is in progress) will escalate the SHARED lock to a
 WRITE lock.  

OK, this is typical. 

 COMMIT will release the lock -- the lock is now a WRITE lock, not
 a shared lock.  Therefore the next _step() will be executing without
 any lock at all leading to apparently undefined results (really an
 error should be thrown SQLITE_DATABASE_UNLOCKED_FROM_UNDER_ME or a
 MISUSE error, but is not). 

Good to know.  I hope we agree this is unlovely, and intersects with
SQLite's unfortunate property of not isolating SELECT as atomic.  (By
which I mean: if there are 10 rows in a table and connection 1 issues
SELECT * FROM T and between calls 4 & 5 to sqlite3_step another
connection 2 issues DELETE FROM T, the first process may get 4 or 5
rows, or 10, depending on luck of the draw.)  

The right behavior is not a misuse error.  How is it misuse for two
connections to read and update the same table in overlapping time?  

The right behavior is to isolate SELECT from UPDATE.  Let the reader see
what was in the database at the time the SELECT was issued, unaffected
by UPDATE.  The COMMIT associated with UPDATE should not affect the
SELECT's lock; rather it should pend until SELECT completes.  Then its
own lock acquires rights to the resources it needs, and is released when
the work is done.  

 This is inherent in how WAL works.  Just because WAL is not in effect
 does not alter the fundamental workings of the transaction system.

Not sure how to parse that.  I think you mean it's inherent in how
transactions work, whether or not WAL is used?  

 I do not believe that there is a way to specify COMMIT BUT MAINTAIN
 THE SHARED LOCK, (that is, to commit the changes only and
 un-escalate the lock back to a shared lock) 

No such syntax is needed if ACID semantics are respected.  In SQL we
neither lock nor release anything, ever, explicitly.  Locks are
implementation artifacts.  COMMIT simply means the data are safely
stored.  

As I said, your description (which I trust is accurate) is very helpful
to someone who wants to understand how SQLite will act on the SQL
provided to it.  But it also portrays problematic choices that stray
from SQL's defined behavior.  

--jkl

Re: [sqlite] Client/Server Best Practices

2015-01-07 Thread Keith Medcalf
On Wednesday, 7 January, 2015 22:57, Kevin Benson kevin.m.ben...@gmail.com 
said:

-- The OP wrote:

On Thu, Dec 25, 2014 at 7:32 AM, Rick Kelly rpke...@gci.net wrote:
*SNIP*
 The database file is located in the same folder as the server. The
server
is
 multi-threaded. I'm using library version 3.8.7.4
 The database file is opened as follows:
 1. sqlite3_enable_shared_cache(%SQLITE_OPEN_SHAREDCACHE)

-- *Presumably* enabling SQLite Shared-Cache Mode likely signals his
intention to HAVE two or more connections, right? If so, with that comes
the Shared-Cache Locking Model at https://www.sqlite.org/sharedcache.html

You are right Kevin, and I am guilty of mixing two questions -- this one about 
implementing your own Client/Server protocol, and one regarding interspersing 
an UPDATE/COMMIT inside a SELECT on the same connection.

In reply to the original post, there is only one connection from the shared 
cache to the database, and it is this connection where WAL applies.  Per-table 
locking and updating with the shared cache is an effect of the shared cache, 
not WAL.  Your current configuration seems to be working for you.

Unless you have changed the default, all usage of connections from the 
application to shared cache and shared cache to database are serialized 
automatically to ensure that concurrency issues do not arise.






Re: [sqlite] Client/Server Best Practices

2015-01-07 Thread Keith Medcalf
On Wednesday, 7 January, 2015 20:01, James K. Lowden jklow...@schemamania.org 
said:
On Fri, 02 Jan 2015 21:41:02 -0700
Keith Medcalf kmedc...@dessus.com wrote:
 On Friday, 2 January, 2015 16:26, James K. Lowden
 jklow...@schemamania.org said:

 On Thu, 25 Dec 2014 05:32:45 -0700 (MST) Rick Kelly
 rpke...@gci.net wrote:

  All SELECT type requests are wrapped with BEGIN TRANSACTION/COMMIT

 That shouldn't be necessary and afaik isn't necessary.  SELECT does
 not modify the database.  To commit a select is to apply the
 nonchanges.

 It does not matter whether it modifies the database.  reading the
 database requires a lock -- a shared lock.  updating the database
 requires a write lock, which precludes obtaining a shared lock.

When I first read your answer my reaction was, yes, yes, of course.
But I'm not sure where that leaves the OP.  Are you suggesting select
statements work better in some sense with autocommit turned off?

No.  However, when you perform an update in the same transaction (and 
connection) as you are performing the select, when the select is not completed 
yet, and you commit on the connection, it is to be expected that AHWBL.

In passing I suggest *requires* is not true in general.  It may be that
SQLite and other implementations use locks to mediate access and
implement ACID semantics.  Locks are just one popular way to accomplish
that, not the only one.

Your description of transaction implementation is illuminating.  If I
understand correctly, you're describing an odd set of
design choices.

 BEGIN TRANSACTION does not acquire a lock -- BEGIN IMMEDIATE does
 that -- BEGIN TRANSACTION merely turns off autocommit, meaning that
 the lock will not be released magically, but rather by an explicit
 COMMIT (which itself does not do anything -- it merely turns
 autocommit back on so that the next statement will commit the
 transaction before magically acquiring a new lock).

I find this very peculiar.  You aren't saying is that

   begin transaction;
   insert into T value (1);
   commit;
   [sqlite3_close]

leaves the database unchanged (because there's no next statement)?
If not, and there's a power outage between (successful) commit and
closing the connection, what will be the state of the database on
restart?  Is the transaction still open or, if not, is it rolled
forward and completed, or rolled back?

I am not certain exactly where the COMMIT performs the commit.  It could very 
likely commit and then turn autocommit back on.  Either way, it makes no 
difference really.

 However, execution of a SELECT statement does cause a lock to be
 obtained (a shared lock) and a COMMIT does cause that shared lock to
 be released.

Again, I find this surprising.  I would expect SELECT to

1.  establish a shared lock
2.  select the data
3.  release the lock

Except that this is not what the OP is doing.  The OP is performing a COMMIT in 
the middle of the select running.  He is doing this:

1.  establish the lock
2.  select the first of one of many rows of data
3.  release the lock forcibly (COMMIT)
4.  read the next row of data from the select
5.  release the lock forcibly (COMMIT)
... lather rinse repeat steps 4 and 5 until you reach the end of the selected 
rows

whether or not BEGIN is called.  If I understand what commit does per
your description above, in a transaction, the effect would be

1.  BEGIN TRANSACTION (autocommit off)
2.  SELECT (take shared lock)
3.  data data data
4.  [SELECT  done] (release shared lock)
5.  COMMIT (autocommit on)

which leaves steps #1 and #5 redundant.

 Executing an UPDATE after a SELECT -- in the same connection -- (or
 while a select is in progress) will escalate the SHARED lock to a
 WRITE lock.

OK, this is typical.

 COMMIT will release the lock -- the lock is now a WRITE lock, not
 a shared lock.  Therefore the next _step() will be executing without
 any lock at all leading to apparently undefined results (really an
 error should be thrown SQLITE_DATABASE_UNLOCKED_FROM_UNDER_ME or a
 MISUSE error, but is not).

Good to know.  I hope we agree this is unlovely, and intersects with
SQLite's unfortunate property of not isolating SELECT as atomic.  (By
which I mean: if there are 10 rows in a table and connection 1 issues
SELECT * FROM T and between calls 4  5 to sqlite3_step another
connection 2 issues DELETE FROM T, the first process may get 4 or 5
rows, or 10, depending on luck of the draw.)

No, it was the other person's choice to do this.  He could have run the select 
entirely, then done the updates.  Or he could have chosen to do the updates on 
another connection (which would be isolated from the connection running the 
select) using WAL.

The right behavior is not a misuse error.  How is it misuse for two
connections to read and update the same table in overlapping time?

It is not.  There IS NOT TWO CONNECTIONS.  There is only one connection.  The 
same connection is interspersing updates while running the select, and 
attempting to 

Re: [sqlite] Client/Server Best Practices

2015-01-07 Thread Kevin Benson
On Wed, Jan 7, 2015 at 11:47 PM, Keith Medcalf kmedc...@dessus.com wrote:

 On Wednesday, 7 January, 2015 20:01, James K. Lowden 
 jklow...@schemamania.org said:
 On Fri, 02 Jan 2015 21:41:02 -0700
 Keith Medcalf kmedc...@dessus.com wrote:
  On Friday, 2 January, 2015 16:26, James K. Lowden
  jklow...@schemamania.org said:
 
  On Thu, 25 Dec 2014 05:32:45 -0700 (MST) Rick Kelly
  rpke...@gci.net wrote:
 
   All SELECT type requests are wrapped with BEGIN TRANSACTION/COMMIT
 
  That shouldn't be necessary and afaik isn't necessary.  SELECT does
  not modify the database.  To commit a select is to apply the
  nonchanges.
 
  It does not matter whether it modifies the database.  reading the
  database requires a lock -- a shared lock.  updating the database
  requires a write lock, which precludes obtaining a shared lock.
 
 When I first read your answer my reaction was, yes, yes, of course.
 But I'm not sure where that leaves the OP.  Are you suggesting select
 statements work better in some sense with autocommit turned off?

 No.  However, when you perform an update in the same transaction (and
 connection) as you are performing the select, when the select is not
 completed yet, and you commit on the connection, it is to be expected that
 AHWBL.

 In passing I suggest *requires* is not true in general.  It may be that
 SQLite and other implementations use locks to mediate access and
 implement ACID semantics.  Locks are just one popular way to accomplish
 that, not the only one.

 Your description of transaction implementation is illuminating.  If I
 understand correctly, you're describing an odd set of
 design choices.
 
  BEGIN TRANSACTION does not acquire a lock -- BEGIN IMMEDIATE does
  that -- BEGIN TRANSACTION merely turns off autocommit, meaning that
  the lock will not be released magically, but rather by an explicit
  COMMIT (which itself does not do anything -- it merely turns
  autocommit back on so that the next statement will commit the
  transaction before magically acquiring a new lock).

 I find this very peculiar.  You aren't saying is that
 
begin transaction;
insert into T value (1);
commit;
[sqlite3_close]

 leaves the database unchanged (because there's no next statement)?
 If not, and there's a power outage between (successful) commit and
 closing the connection, what will be the state of the database on
 restart?  Is the transaction still open or, if not, is it rolled
 forward and completed, or rolled back?

 I am not certain exactly where the COMMIT performs the commit.  It could
 very likely commit and then turn autocommit back on.  Whatever so, it makes
 no difference really.

  However, execution of a SELECT statement does cause a lock to be
  obtained (a shared lock) and a COMMIT does cause that shared lock to
  be released.

 Again, I find this surprising.  I would expect SELECT to
 
 1.  establish a shared lock
 2.  select the data
 3.  release the lock

 Except that this is not what the OP is doing.  The OP is performing a
 COMMIT in the middle of the select running.  He is doing this:

 1.  establish the lock
 2.  select the first of one of many rows of data
 3.  release the lock forcibly (COMMIT)
 4.  read the next row of data from the select
 5.  release the lock forcibly (COMMIT)
 ... lather rinse repeat steps 4 and 5 until you reach the end of the
 selected rows

 whether or not BEGIN is called.  If I understand what commit does per
 your description above, in a transaction, the effect would be

 1.  BEGIN TRANSACTION (autocommit off)
 2.  SELECT (take shared lock)
 3.  data data data
 4.  [SELECT  done] (release shared lock)
 5.  COMMIT (autocommit on)
 
 which leaves steps #1 and #5 redundant.

  Executing an UPDATE after a SELECT -- in the same connection -- (or
  while a select is in progress) will escalate the SHARED lock to a
  WRITE lock.

 OK, this is typical.

  COMMIT will release the lock -- the lock is now a WRITE lock, not
  a shared lock.  Therefore the next _step() will be executing without
  any lock at all leading to apparently undefined results (really an
  error should be thrown SQLITE_DATABASE_UNLOCKED_FROM_UNDER_ME or a
  MISUSE error, but is not).

 Good to know.  I hope we agree this is unlovely, and intersects with
 SQLite's unfortunate property of not isolating SELECT as atomic.  (By
 which I mean: if there are 10 rows in a table and connection 1 issues
 SELECT * FROM T and between calls 4  5 to sqlite3_step another
 connection 2 issues DELETE FROM T, the first process may get 4 or 5
 rows, or 10, depending on luck of the draw.)

 No, it was the other persons choice to do this.  He could have run the
 select entirely, then done the updates.  Or he could have chosen to do the
 updates on another connection (which would be isolated from the connection
 running the select) using WAL.

 The right behavior is not a misuse error.  How is it misuse for two
 connections to read and update the same table in overlapping 

Re: [sqlite] Client/Server Best Practices

2015-01-03 Thread Simon Slavin

On 3 Jan 2015, at 4:41am, Keith Medcalf kmedc...@dessus.com wrote:

 I do not believe that there is a way to specify COMMIT BUT MAINTAIN THE 
 SHARED LOCK, (that is, to commit the changes only and un-escalate the lock 
 back to a shared lock) which would be required in order for the loop 
 semantics posited by the OP to work correctly.

If it were possible to COMMIT BUT MAINTAIN THE SHARED LOCK then a thread 
could hog the database, never letting other threads/processes do their work.  
So I hope it's not possible to do it.

Simon.


Re: [sqlite] Client/Server Best Practices

2015-01-02 Thread James K. Lowden
On Thu, 25 Dec 2014 05:32:45 -0700 (MST)
Rick Kelly rpke...@gci.net wrote:

 All SELECT type requests are wrapped with BEGIN TRANSACTION/COMMIT 

That shouldn't be necessary and afaik isn't necessary.  SELECT does not
modify the database.  To commit a select is to apply the nonchanges.  

A common misconception is that BEGIN TRANSACTION takes a lock in some
sense.  It doesn't; it marks a point in logical time that will be
concluded with COMMIT/ROLLBACK.  Locks, if any, are implicit in the
SELECT itself.  

--jkl


Re: [sqlite] Client/Server Best Practices

2015-01-02 Thread J Decker
On Fri, Jan 2, 2015 at 3:25 PM, James K. Lowden jklow...@schemamania.org
wrote:

 On Thu, 25 Dec 2014 05:32:45 -0700 (MST)
 Rick Kelly rpke...@gci.net wrote:

  All SELECT type requests are wrapped with BEGIN TRANSACTION/COMMIT

 That shouldn't be necessary and afaik isn't necessary.  SELECT does not
 modify the database.  To commit a select is to apply the nonchanges.

 A common misconception is that BEGIN TRANSACTION takes a lock in some
 sense.  It doesn't; it marks a point in logical time that will be
 concluded with COMMIT/ROLLBACK.  Locks, if any, are implicit in the
 SELECT itself.


I think the idea with BEGIN TRANSACTION around the selects is that the
transaction is left open so that that state can continue to be queried while
other transactions are also created and committed or rolled back at the same
time (?)  ... like using a monotone version control database and having a
current 'head' which is what you get when you select...



 --jkl


Re: [sqlite] Client/Server Best Practices

2015-01-02 Thread Keith Medcalf

On Friday, 2 January, 2015 16:26, James K. Lowden jklow...@schemamania.org 
said:


On Thu, 25 Dec 2014 05:32:45 -0700 (MST) Rick Kelly rpke...@gci.net wrote:

 All SELECT type requests are wrapped with BEGIN TRANSACTION/COMMIT

That shouldn't be necessary and afaik isn't necessary.  SELECT does not
modify the database.  To commit a select is to apply the nonchanges.

It does not matter whether it modifies the database.  Reading the database 
requires a lock -- a shared lock.  Updating the database requires a write 
lock, which precludes obtaining a shared lock.

A common misconception is that BEGIN TRANSACTION takes a lock in some
sense.  It doesn't; it marks a point in logical time that will be
concluded with COMMIT/ROLLBACK.  Locks, if any, are implicit in the
SELECT itself.

BEGIN TRANSACTION does not acquire a lock -- BEGIN IMMEDIATE does that -- BEGIN 
TRANSACTION merely turns off autocommit, meaning that the lock will not be 
released magically, but rather by an explicit COMMIT (which itself does not do 
anything -- it merely turns autocommit back on so that the next statement will 
commit the transaction before magically acquiring a new lock).  However, 
execution of a SELECT statement does cause a lock to be obtained (a shared 
lock) and a COMMIT does cause that shared lock to be released.  Executing an 
UPDATE after a SELECT -- in the same connection -- (or while a select is in 
progress) will escalate the SHARED lock to a WRITE lock.  COMMIT will release 
the lock -- the lock is now a WRITE lock, not a shared lock.  Therefore the 
next _step() will be executing without any lock at all leading to apparently 
undefined results (really an error should be thrown 
SQLITE_DATABASE_UNLOCKED_FROM_UNDER_ME or a MISUSE error, but is not). 

This is inherent in how WAL works.  Just because WAL is not in effect does not 
alter the fundamental workings of the transaction system.

I do not believe that there is a way to specify COMMIT BUT MAINTAIN THE SHARED 
LOCK, (that is, to commit the changes only and un-escalate the lock back to a 
shared lock) which would be required in order for the loop semantics posited by 
the OP to work correctly.

---
Theory is when you know everything but nothing works.  Practice is when 
everything works but no one knows why.  Sometimes theory and practice are 
combined:  nothing works and no one knows why.






[sqlite] Client/Server Best Practices

2014-12-25 Thread Rick Kelly
I'm using SQLite3 for the first time and am looking to see if those more
experienced can look at my deployment and offer suggestions.

I have a client/server application running on MS Windows communicating over
TCP/IP supporting a few dozen clients. Basically, clients send a request to
the server and the SQL is built, submitted and results returned.

The database file is located in the same folder as the server. The server is
multi-threaded. I'm using library version 3.8.7.4

The database file is opened as follows:

1. sqlite3_enable_shared_cache(%SQLITE_OPEN_SHAREDCACHE)
2. Database is opened with %SQLITE_OPEN_READWRITE
3. PRAGMA's are set

PRAGMA JOURNAL_MODE=WAL;
PRAGMA WAL_AUTOCHECKPOINT=500;
PRAGMA SECURE_DELETE;
PRAGMA SYNCHRONOUS=FULL;
PRAGMA THREADS=0;
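
In C that setup would look roughly like this (a sketch, not the OP's actual
code; the file name is invented and error checking is omitted):

    sqlite3 *db;

    sqlite3_enable_shared_cache(1);
    sqlite3_open_v2("app.db", &db,
                    SQLITE_OPEN_READWRITE | SQLITE_OPEN_SHAREDCACHE, 0);
    sqlite3_exec(db,
                 "PRAGMA JOURNAL_MODE=WAL;"
                 "PRAGMA WAL_AUTOCHECKPOINT=500;"
                 "PRAGMA SECURE_DELETE;"
                 "PRAGMA SYNCHRONOUS=FULL;"
                 "PRAGMA THREADS=0;",
                 0, 0, 0);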

All SELECT type requests are wrapped with BEGIN TRANSACTION/COMMIT and
UPDATE/INSERT/DELETE transactions are wrapped with BEGIN IMMEDIATE
TRANSACTION/COMMIT. The server uses a connection pool and threads request a
handle from the pool and return it to the pool immediately afterwards. The
largest size of the connection pool and the number of open handles is
limited to 5. In my stress testing, I've thrown thousands of requests at the
server and the pool usually gets to two and rarely 3.

If a request encounters %SQLITE_BUSY or %SQLITE_LOCKED, I'm retrying up to 5
times with a 100ms sleep in between.

Will this approach minimize %SQLITE_BUSY and %SQLITE_LOCKED situations and
provide decent concurrency and long term stability? 

I haven't yet tried to serialize update requests so that only one at a time
is active.

Rick Kelly





Re: [sqlite] Client/Server Best Practices

2014-12-25 Thread Richard Hipp
On Thu, Dec 25, 2014 at 7:32 AM, Rick Kelly rpke...@gci.net wrote:

 I'm using SQLite3 for the first time and am looking to see if those more
 experienced can look at my deployment and offer suggestions.

 I have a client/server application running on MS Windows communicating over
 TCP/IP supporting a few dozen clients. Basically, clients send a request to
 the server and the SQL is built, submitted and results returned.


Do I understand correctly:  SQLite is running inside of the single server
process you have built?

In other words, you are not trying to access SQLite database files directly
from the clients across a network?



 The database file is located in the same folder as the server. The server
 is
 multi-threaded. I'm using library version 3.8.7.4

 The database file is opened as follows:

 1. sqlite3_enable_shared_cache(%SQLITE_OPEN_SHAREDCACHE)
 2. Database is opened with %SQLITE_OPEN_READWRITE
 3. PRAGMA's are set

 PRAGMA JOURNAL_MODE=WAL;
 PRAGMA WAL_AUTOCHECKPOINT=500;
 PRAGMA SECURE_DELETE;
 PRAGMA SYNCHRONOUS=FULL;
 PRAGMA THREADS=0;

 All SELECT type requests are wrapped with BEGIN TRANSACTION/COMMIT and
 UPDATE/INSERT/DELETE transactions are wrapped with BEGIN IMMEDIATE
 TRANSACTION/COMMIT. The server uses a connection pool and threads request a
 handle from the pool and return it to the pool immediately afterwards. The
 largest size of the connection pool and the number of open handles is
 limited to 5. In my stress testing, I've thrown thousands of requests at
 the
 server and the pool usually gets to two and rarely 3.

 If a request encounters %SQLITE_BUSY or %SQLITE_LOCKED, I'm retrying up to
 5
 times with a 100ms sleep in between.

 Will this approach minimize %SQLITE_BUSY and %SQLITE_LOCKED situations and
 provide decent concurrency and long term stability?

 I haven't yet tried to serialize update requests so that only one at a time
 is active.

 Rick Kelly







-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Client/Server Best Practices

2014-12-25 Thread Simon Slavin

On 25 Dec 2014, at 12:32pm, Rick Kelly rpke...@gci.net wrote:

 If a request encounters %SQLITE_BUSY or %SQLITE_LOCKED, I'm retrying up to 5
 times with a 100ms sleep in between.
 
 Will this approach minimize %SQLITE_BUSY and %SQLITE_LOCKED situations and
 provide decent concurrency and long term stability? 

Do not do the retrying inside your own code.  Instead set a busy_timeout and 
let SQLite handle all retrying itself:

https://www.sqlite.org/c3ref/busy_timeout.html
https://www.sqlite.org/pragma.html#pragma_busy_timeout

SQLite will then take care of all the retrying for you.  A timeout of a minute 
or two might seem strange, but if you think about what you'd want to happen if 
the database is busy it starts to make sense.
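
For example (a sketch; the two-minute figure just follows the suggestion
above, and the file name is invented):

    sqlite3 *db;
    sqlite3_open("app.db", &db);

    /* Ask SQLite to retry internally for up to two minutes before
       returning SQLITE_BUSY to the application. */
    sqlite3_busy_timeout(db, 120000);

    /* Equivalent pragma form: */
    sqlite3_exec(db, "PRAGMA busy_timeout=120000;", 0, 0, 0);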

If SQLite is still reporting SQLITE_BUSY or SQLITE_LOCKED after this you can 
treat it as an unrecoverable error indicating hardware or network problems.

Simon.


Re: [sqlite] Client/Server Best Practices

2014-12-25 Thread Rick Kelly
Clients know nothing about the database, they just submit a request and get
results. The server is the only process accessing and using SQLite.

Rick Kelly





RE: [sqlite] Client/Server approach

2007-08-30 Thread Sreedhar.a
Hi,

Yes, The server and clients are in Network.

Best Regards,
Sreedhar.

-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED] 
Sent: Friday, August 24, 2007 6:17 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Client/Server approach

You do not need a server to manage 5 clients on the same machine, only if
they are on a network.

Sreedhar.a wrote:
  
 Hi,
 
 I am working in sqlite 3.3.6.
 I want 5 clients to browse at the same time with good performance.
 I enabled threadsafe and defined SQLITE_ENABLE_MEMORY_MANAGEMENT .
 I think by using test_server.c  we can do 5 clients browsing with 
 single server.
 Can any one please help me by providing some patch for implementing this.
 
 My doubts are:
 1.Do I need  to create a thread for each client.If so it has to be 
 created before Sqlite3_server_start().
 
 (Or)
 Some thing like below has to be followed.
 
 Main()
 {
 
sqlite3_client_open()
sqlite3_client_prepare()
sqlite3_client_step()
sqlite3_client_reset()
sqlite3_client_finalize()
sqlite3_client_close()
 
 Sqlite3_server_start(); For starting the server.
 
 sqlite3_server_stop();For closing the server.
 
 
 
 }
 
 Kindly help me to solve this.
 Thanks in advance
 
 Best Regards,
 A.Sreedhar.
  
 
 
 



RE: [sqlite] Client/Server approach

2007-08-25 Thread Sreedhar.a
 I am not using 5 clients from the same machine.
In a network 5 clients and 1 server will be present.


Regards,
A.Sreedhar.
 

-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED] 
Sent: Friday, August 24, 2007 6:17 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Client/Server approach

You do not need a server to manage 5 clients on the same machine, only if
they are on a network.

Sreedhar.a wrote:
  
 Hi,
 
 I am working in sqlite 3.3.6.
 I want 5 clients to browse at the same time with good performance.
 I enabled threadsafe and defined SQLITE_ENABLE_MEMORY_MANAGEMENT .
 I think by using test_server.c  we can do 5 clients browsing with 
 single server.
 Can any one please help me by providing some patch for implementing this.
 
 My doubts are:
 1.Do I need  to create a thread for each client.If so it has to be 
 created before Sqlite3_server_start().
 
 (Or)
 Some thing like below has to be followed.
 
 Main()
 {
 
sqlite3_client_open()
sqlite3_client_prepare()
sqlite3_client_step()
sqlite3_client_reset()
sqlite3_client_finalize()
sqlite3_client_close()
 
 Sqlite3_server_start(); For starting the server.
 
 sqlite3_server_stop();For closing the server.
 
 
 
 }
 
 Kindly help me to solve this.
 Thanks in advance
 
 Best Regards,
 A.Sreedhar.
  
 
 
 



Re: [sqlite] Client/Server approach

2007-08-25 Thread Zarko Popovski
I found one open source API for making a network interface for any embedded
RDBMS. The problem was concurrency; for that reason I developed a small thread
pool inside the open source network interface, and now one RDBMS connection
is shared between threads or applications on the network. I implemented that
for SQLite in Java; I hope you can develop some similar net interface. Sending
a query to the interface and receiving results from the interface in the
application is built with TCP sockets and streams.

On 8/24/07, Sreedhar.a [EMAIL PROTECTED] wrote:

 I am not using 5 clients from the same machine.
 In a network 5 clients and 1 server will be present.


 Regards,
 A.Sreedhar.


 -Original Message-
 From: John Stanton [mailto:[EMAIL PROTECTED]
 Sent: Friday, August 24, 2007 6:17 PM
 To: sqlite-users@sqlite.org
 Subject: Re: [sqlite] Client/Server approach

 You do not need a server to manage 5 clients on the same machine, only if
 they are on a network.

 Sreedhar.a wrote:
 
  Hi,
 
  I am working in sqlite 3.3.6.
  I want 5 clients to browse at the same time with good performance.
  I enabled threadsafe and defined SQLITE_ENABLE_MEMORY_MANAGEMENT .
  I think by using test_server.c  we can do 5 clients browsing with
  single server.
  Can any one please help me by providing some patch for implementing
 this.
 
  My doubts are:
  1.Do I need  to create a thread for each client.If so it has to be
  created before Sqlite3_server_start().
 
  (Or)
  Some thing like below has to be followed.
 
  Main()
  {
 
 sqlite3_client_open()
 sqlite3_client_prepare()
 sqlite3_client_step()
 sqlite3_client_reset()
 sqlite3_client_finalize()
 sqlite3_client_close()
 
  Sqlite3_server_start(); For starting the server.
 
  sqlite3_server_stop();For closing the server.
 
 
 
  }
 
  Kindly help me to solve this.
  Thanks in advance
 
  Best Regards,
  A.Sreedhar.
 
 
 
 




Re: [sqlite] Client/Server approach

2007-08-25 Thread John Stanton
In that case you are best to implement a server on one machine and have 
the others connect to it.  Since you only have 5 clients you do not need 
to use the shared cache approach.


When we use networked clients we use a server with HTTP protocol so that 
penetrating firewalls is no problem.


Sreedhar.a wrote:

 I am not using 5 clients from the same machine.
In a network 5 clients and 1 server will be present.


Regards,
A.Sreedhar.
 


-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED] 
Sent: Friday, August 24, 2007 6:17 PM

To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Client/Server approach

You do not need a server to manage 5 clients on the same machine, only if
they are on a network.

Sreedhar.a wrote:



Hi,

I am working in sqlite 3.3.6.
I want 5 clients to browse at the same time with good performance.
I enabled threadsafe and defined SQLITE_ENABLE_MEMORY_MANAGEMENT .
I think by using test_server.c  we can do 5 clients browsing with 
single server.

Can any one please help me by providing some patch for implementing this.

My doubts are:
1.Do I need  to create a thread for each client.If so it has to be 
created before Sqlite3_server_start().


(Or)
Some thing like below has to be followed.

Main()
{

 sqlite3_client_open()
  sqlite3_client_prepare()
  sqlite3_client_step()
  sqlite3_client_reset()
  sqlite3_client_finalize()
  sqlite3_client_close()

Sqlite3_server_start(); For starting the server.

sqlite3_server_stop();  For closing the server.



}

Kindly help me to solve this.
Thanks in advance

Best Regards,
A.Sreedhar.







Re: [sqlite] Client/Server approach

2007-08-24 Thread John Stanton
You do not need a server to manage 5 clients on the same machine, only 
if they are on a network.


Sreedhar.a wrote:
 
Hi,


I am working in sqlite 3.3.6.
I want 5 clients to browse at the same time with good performance.
I enabled threadsafe and defined SQLITE_ENABLE_MEMORY_MANAGEMENT .
I think by using test_server.c  we can do 5 clients browsing with single
server.
Can any one please help me by providing some patch for implementing this.

My doubts are:
1.Do I need  to create a thread for each client.If so it has to be created
before Sqlite3_server_start().

(Or)
Some thing like below has to be followed.

Main()
{

 sqlite3_client_open()
   sqlite3_client_prepare()
   sqlite3_client_step()
   sqlite3_client_reset()
   sqlite3_client_finalize()
   sqlite3_client_close()

Sqlite3_server_start(); For starting the server.

sqlite3_server_stop();  For closing the server.



}

Kindly help me to solve this.
Thanks in advance

Best Regards,
A.Sreedhar.
 







Re: [sqlite] client/server

2005-06-08 Thread Ben Clewett

Just another suggestion to the problem which I use my self.

I use a single SQLite database for each client, hosting the database 
collection on a bastion host as close to the client as I can get it, so 
no client/server connection is used.


I then use a daemon which reads all the client SQLite databases in turn, 
and writes them to a central DBMS, as well as writing information back 
out to the client SQLite databases.  (My central database in my case is 
MySQL because it's client/server and has a better locking model.)


I also use the Linux /dev/shm directory to store these bastion 
databases, which reads/writes about 200 times faster than an uncached 
disk, although it is erased when the server reboots.


This gives dramatic performance and low load on the central database :)

Ben


Andrea Giammarchi wrote:

Eugene Wee wrote:


Not at all (if I correctly understand what you're trying to say).
For example, there exists a SQLite extension in PHP (which comes 
bundled by default in PHP5, but currently does not support SQLite3).



pecl, PDO extensions allows PHP to use SQLITE Version 3.X too :-)
http://it2.php.net/manual/it/ref.pdo.php#pdo.drivers

andr3a





[sqlite] client/server

2005-06-07 Thread Martín Schamis
Hello, I've seen in the SQLite page that, If you have many client
programs accessing a common database over a network, you should consider
using a client/server database engine instead of SQLite.

1.- Does this mean that I can't use PHP on the web, and that the users
accessing that page can't modify the database?

2.- Or is there a limit on the number of users?

3.- If there is a limit, what is it?

Thanks,

Martín



Re: [sqlite] client/server

2005-06-07 Thread Eugene Wee

Hi,

Martín Schamis wrote:

1 .- This means that I can´t use a php on the web and the users acceding
to that page can`t modify the base ?

Not at all (if I correctly understand what you're trying to say).
For example, there exists a SQLite extension in PHP (which comes bundled by 
default in PHP5, but currently does not support SQLite3).


From what I see, the point that the documentation is trying to make is that 
SQLite is not suitable when you're dealing with a congested network environment. 
If you don't expect (many) users to be writing to the same database file 
simultaneously, SQLite may still be a feasible option.


Eugene Wee



RE: [sqlite] client/server

2005-06-07 Thread Martín Schamis
Hi, Eugene and everybody.

The problem I have is this: I'm implementing a web application that will
require up to 300 users writing to the database file at the same time. The
question is whether SQLite will support this.

Thanks for your help.

Martín

-Original Message-
From: Eugene Wee [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, 7 June 2005 10:59
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] client/server


Hi,

Martín Schamis wrote:
 1 .- This means that I can´t use a php on the web and the users
acceding
 to that page can`t modify the base ?
Not at all (if I correctly understand what you're trying to say).
For example, there exists a SQLite extension in PHP (which comes bundled
by 
default in PHP5, but currently does not support SQLite3).

 From what I see, the point that the documentation is trying to make is
that 
SQLite is not suitable when you're dealing with a congested network
environment. 
If you dont expect (many) users to be writing to the same database file 
simultaneously, SQLite may still be a feasible option.

Eugene Wee




Re: [sqlite] client/server

2005-06-07 Thread David Morel
Martín Schamis wrote:
 Hi, Eugene and everybody.
 
 The problem I have is this, I´m implementing a web aplication that will
 require up to 300 users writing at
 The same time to the database file simultaneously. 

you mean 300 users connected simultaneously, writing occasionally to the
database I guess, which is not the same thing. Provided your queries are
optimized and you take care of open connections (not leaving
connections open more than is strictly necessary) it could do the trick.
It all depends on how database-intensive your application is. But even
300 simultaneous connections won't make it on a very powerful MySQL
server anyway.
-- 
+---+
| David Morel [EMAIL PROTECTED] |
| http://www.intradot.com   |
+---+
| Intradot Labs |
| 2, rue Cuzin  |
| F-69120 Vaulx-en-Velin|
| tel: +33 478 80 87 90 |
+---+





Re: [sqlite] client/server

2005-06-07 Thread Clay Dowling

Martín Schamis said:
 Hello, I´ve seen inthe sqllite page that,If you have many client
 programs accessing a common database over a network, you should consider
 using a client/server database engine instead of SQLite

 1 .- This means that I can´t use a php on the web and the users acceding
 to that page can`t modify the base ?

 2.- Or there is a limi of users ?

 3.- If there is a limit waht is it ?

It means that the file shouldn't live on a network file system.  If your
web server folder is accessed via NFS or SMB (Windows File Share), SQLite
isn't the tool for you.

Given the volume of transaction processing you're indicating, you might
want to consider whether this is the right database for you.  Consider
whether there will really be 300 simultaneous connections (which is
different from 300 simultaneous application users); if so, you might
instead be in need of a high-availability client/server database model.
PostgreSQL fares very well in this category, as do some of the heftier
commercial database offerings.

Clay Dowling
-- 
Lazarus Notes from Lazarus Internet Development
http://www.lazarusid.com/notes/
Articles, Reviews and Commentary on web development



Re: [sqlite] client/server

2005-06-07 Thread Andrew Piskorski
On Tue, Jun 07, 2005 at 11:52:30AM -0300, Martín Schamis wrote:
 Hi, Eugene and everybody.
 
 The problem I have is this, I'm implementing a web aplication that will
 require up to 300 users writing at

Then why are you even looking at SQLite?  IMNSHO, unless you have some
compelling reasons otherwise, you should default to using PostgreSQL
for that sort of thing.  You haven't said much about what you're
really doing, but a large busy database-backed website with many
concurrent users is exactly the sort of OLTP niche where capable
client/server RDBMSs like PostgreSQL or Oracle shine.

-- 
Andrew Piskorski [EMAIL PROTECTED]
http://www.piskorski.com/


Re: [sqlite] client/server

2005-06-07 Thread Andrea Giammarchi

Eugene Wee wrote:


Not at all (if I correctly understand what you're trying to say).
For example, there exists a SQLite extension in PHP (which comes 
bundled by default in PHP5, but currently does not support SQLite3).


pecl, PDO extensions allows PHP to use SQLITE Version 3.X too :-)
http://it2.php.net/manual/it/ref.pdo.php#pdo.drivers

andr3a



Re: [sqlite] Client/Server Environment

2004-11-10 Thread David Morel
On Tue 09/11/2004 at 17:52, Gerhard Haering wrote:
 On Tue, Nov 09, 2004 at 10:45:35AM -0500, Richard Boehme wrote:
  Hi there. How well does SQLite do in a client/server environment
  where the SQLite database is handling a batch of 100 inserts and the
  same number of queries a every 0.5 seconds? The inserts would be
  batched into a transaction. The queries would most likely not.
  
  Thanks for the help.
 
 Sounds like SQLite could handle it just fine.

I agree; several hundred operations in a transaction can easily fit
under a second. It all depends on the nature of the data, of course... and
on the server speed, processing- and I/O-wise.

-- 
***
[EMAIL PROTECTED]
OpenPGP public key: http://www.amakuru.net/dmorel.asc
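
To make the batched-transaction pattern being discussed concrete, here is
a minimal sketch using Python's built-in sqlite3 module; the database
file, table layout and batch size are invented purely for illustration:

    import sqlite3, time

    conn = sqlite3.connect("telemetry.db")          # hypothetical database file
    with conn:                                      # commits on success
        conn.execute("CREATE TABLE IF NOT EXISTS readings"
                     " (sensor INTEGER, value REAL, ts REAL)")

    batch = [(i, i * 0.1, time.time()) for i in range(100)]   # 100 rows per batch

    # One explicit transaction per batch: the 'with' block commits on success
    # and rolls back on error, so the 100 inserts share a single commit.
    with conn:
        conn.executemany(
            "INSERT INTO readings (sensor, value, ts) VALUES (?, ?, ?)", batch)

    # The queries run outside any explicit transaction, as in the scenario.
    rows = conn.execute(
        "SELECT sensor, value FROM readings ORDER BY ts DESC LIMIT 10").fetchall()
    print(rows)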





[sqlite] Client/Server Environment

2004-11-09 Thread Richard Boehme
Hi there. How well does SQLite do in a client/server environment where 
the SQLite database is handling a batch of 100 inserts and the same 
number of queries every 0.5 seconds? The inserts would be batched into
a transaction. The queries would most likely not.

Thanks for the help.
Richard Boehme



Re: [sqlite] Client/Server Environment

2004-11-09 Thread Clay Dowling

Richard Boehme said:
 Hi there. How well does SQLite do in a client/server environment where
 the SQLite database is handling a batch of 100 inserts and the same
 number of queries every 0.5 seconds? The inserts would be batched into
 a transaction. The queries would most likely not.

I'm not trying to run down SQLite here, but don't you think that this kind
of load justifies a pretty heavy duty database?  SQLite is a disk based
database, not a client server database.  For that kind of throughput, the
traditional choice has been a large scale client server database, and
there's been a pretty good reason why that's been so.  100 separate
clients trying to access the same file on disk is bound to cause a few
problems.  The capacity of the database driver is probably the least of
your worries.

Clay
-- 
Lazarus Notes from Lazarus Internet Development
http://www.lazarusid.com/notes/
Articles, Reviews and Commentary on web development


Re: [sqlite] Client/Server Environment

2004-11-09 Thread Gerhard Haering
On Tue, Nov 09, 2004 at 10:45:35AM -0500, Richard Boehme wrote:
 Hi there. How well does SQLite do in a client/server environment
 where the SQLite database is handling a batch of 100 inserts and the
 same number of queries every 0.5 seconds? The inserts would be
 batched into a transaction. The queries would most likely not.
 
 Thanks for the help.

Sounds like SQLite could handle it just fine.

I don't know what you mean by client/server here. Do you plan to
develop a custom server process that clients talk to that does all the
database access?

-- Gerhard




Re: [sqlite] Client/Server Environment

2004-11-09 Thread Richard Boehme
This is an extreme case -- records would usually be updated every minute 
or so. Usually there might be 10 computers trading information (about 
100 records every few minutes), for example. I was extrapolating an 
extreme test case where the system is very heavily used.

In such a situation, we would have some server process which feeds the 
queries to SQLite.

Thanks.
Richard Boehme
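
A minimal sketch of that kind of server process, assuming a single writer
thread that owns the only connection and drains a queue of requests (the
queue, schema and file name are invented for illustration):

    import queue
    import sqlite3
    import threading

    write_queue = queue.Queue()          # clients push (sql, params) tuples here

    def writer_loop(db_path):
        # The writer thread owns the only writing connection, so requests
        # from many clients are serialized and never fight over the lock.
        conn = sqlite3.connect(db_path)
        with conn:
            conn.execute("CREATE TABLE IF NOT EXISTS log (msg TEXT)")
        while True:
            item = write_queue.get()
            if item is None:             # shutdown sentinel
                break
            sql, params = item
            with conn:                   # one transaction per request
                conn.execute(sql, params)
        conn.close()

    t = threading.Thread(target=writer_loop, args=("shared.db",), daemon=True)
    t.start()

    # A client hands work to the server thread instead of opening the file itself.
    write_queue.put(("INSERT INTO log (msg) VALUES (?)", ("hello",)))
    write_queue.put(None)                # stop the writer
    t.join()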


Re: [sqlite] Client/Server Environment

2004-11-09 Thread Clay Dowling

Richard Boehme said:
 This is an extreme case -- records would usually be updated every minute
 or so. Usually there might be 10 computers trading information (about
 100 records every few minutes), for example. I was extrapolating an
 extreme test case where the system is very heavily used.

 In such a situation, we would have some server process which feeds the
 queries to SQLite.

Since I've now also read your inquiries in the Joel On Software forum, I
might be able to answer your questions better, especially since a client
server database won't do.

The correct answer is that you'll need to try it out and see if SQLite can
keep up.  Your application architecture and the nature of the data and
queries are going to have a pretty significant influence on the result.  I
would build a working prototype to prove that the model is good, then try
flooding the system to see how much it can handle.

Clay


-- 
Lazarus Notes from Lazarus Internet Development
http://www.lazarusid.com/notes/
Articles, Reviews and Commentary on web development
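
A minimal sketch of the kind of flood test suggested above, again using
Python's sqlite3 module with an invented schema; it simply counts how many
100-row batches (plus one query each) the target machine sustains per
second:

    import sqlite3
    import time

    conn = sqlite3.connect("prototype.db")                 # hypothetical test database
    with conn:
        conn.execute("CREATE TABLE IF NOT EXISTS t (a INTEGER, b REAL)")

    batch = [(i, i * 0.5) for i in range(100)]             # the 100-row batch
    start = time.time()
    batches = 0
    while time.time() - start < 10:                        # flood for ten seconds
        with conn:                                         # one transaction per batch
            conn.executemany("INSERT INTO t (a, b) VALUES (?, ?)", batch)
        conn.execute("SELECT count(*) FROM t").fetchone()  # the accompanying query
        batches += 1

    print("batches per second:", batches / 10.0)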


Re: [sqlite] Client/Server Environment

2004-11-09 Thread ben . carlyle
Hello,





Clay Dowling [EMAIL PROTECTED] wrote on 09/11/2004:



 Richard Boehme said:
  Hi there. How well does SQLite do in a client/server environment where
  the SQLite database is handling a batch of 100 inserts and the same
  number of queries every 0.5 seconds? The inserts would be batched into
  a transaction. The queries would most likely not.
 I'm not trying to run down SQLite here, but don't you think that this
 kind of load justifies a pretty heavy duty database?  SQLite is a disk
 based database, not a client server database.  For that kind of
 throughput, the traditional choice has been a large scale client server
 database, and there's been a pretty good reason why that's been so.

If you are talking about scaling the application to a large number of 
machines to try and balance load, the big end databases may suit you. If 
you are talking about putting a big iron database to work on a single 
server and single disk array to get higher performance than sqlite... 
forget it. My figures are a little out of date now, but I looked for a 
high performance database for my own application a couple of years back 
and found that sqlite blew the popular alternatives[1] out of the water in 
data throughput. Since that time SQLite has become faster for many 
functions while other databases continue to get fatter.

As with any performance-sensitive application, you must profile the 
alternatives you are considering on your own target hardware, with your 
own projected target load, to get a reliable indication of which will 
perform. I would certainly not discount sqlite based on a mailing list 
post. Try it out and see. I was pleasantly surprised, and suspect you will 
be too. Please report back as well; your experience may be a useful data 
point for someone else in your position in the future.

100 separate
 clients trying to access the same file on disk is bound to cause a few
 problems.  The capacity of the database driver is probably the least of
 your worries.

100 readers does not equal 100 separate clients trying to access the same 
file on disk. It may be equal to 100 clients trying to access the same 
piece of cache memory. That's certainly something you can do more than 100 
times in half a second. The quality of your operating system will have 
some impact here, but I believe that even Microsoft can manage to do this 
properly these days. The main difference between the way sqlite handles 
performance and the way other databases do is that SQLite relies on the 
well-worn file operations of your operating system, while other databases 
try to implement these operations themselves. In bygone days that was a 
good thing, because OS file handling was often poor. These days, relying 
on the OS is a good thing: it keeps SQLite lean and allows the operating 
system to do the heavy lifting in making you perform.

If sqlite is going to be a problem under the proposed loading, it will 
most likely be related to locking and possible delays introduced by 
clients having to sleep to get their locks in. As presented, though, I 
suspect that won't be a huge problem. It would largely depend on the size 
and time-cost of the queries being performed.

Benjamin
[1] Mysql, postgres, sapdb, oracle.
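
The locking delays mentioned above surface as "database is locked" errors;
a minimal sketch of letting a client sleep and retry, assuming Python's
sqlite3 module (the five-second timeout and the schema are only
illustrative):

    import sqlite3

    # The timeout makes sqlite3 retry the lock acquisition for up to five
    # seconds before raising "database is locked", which is usually plenty
    # when every writer commits small, short transactions.
    conn = sqlite3.connect("shared.db", timeout=5.0)

    try:
        with conn:
            conn.execute("CREATE TABLE IF NOT EXISTS jobs (payload TEXT)")
            conn.execute("INSERT INTO jobs (payload) VALUES (?)", ("work item",))
    except sqlite3.OperationalError as exc:
        # Another connection held the lock longer than the timeout; the
        # caller can back off and try again later.
        print("write skipped:", exc)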