We have used SQLite successfully in a web server. It is embedded in a custom-written web server that provides multithreaded access and application-language support (an application server). It handles many databases, with each user able to have their own database. Synchronization is performed within the server.
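For illustration, the in-server serialization amounts to something like the sketch below. This is not our actual server code, just a minimal pthreads example against the SQLite C API; the database file, table, and SQL are invented, and it assumes SQLite compiled threadsafe (the default). Because all the writers live in one process, a plain mutex is enough and no network file locking is ever involved.

/* Minimal sketch of in-server write serialization around an embedded
   SQLite database. Illustrative only, not production code.
   Build: cc serialize.c -lsqlite3 -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <sqlite3.h>

static sqlite3 *db;                 /* one embedded database handle */
static pthread_mutex_t write_lock = PTHREAD_MUTEX_INITIALIZER;

/* Every worker thread funnels its writes through one mutex, so the
   database never sees two concurrent writers. */
static void *worker(void *arg)
{
    const char *sql = arg;
    char *err = NULL;

    pthread_mutex_lock(&write_lock);
    if (sqlite3_exec(db, sql, NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "write failed: %s\n", err);
        sqlite3_free(err);
    }
    pthread_mutex_unlock(&write_lock);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    if (sqlite3_open("app.db", &db) != SQLITE_OK) return 1;
    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS log(msg TEXT)",
                 NULL, NULL, NULL);

    pthread_create(&t1, NULL, worker, "INSERT INTO log VALUES('a')");
    pthread_create(&t2, NULL, worker, "INSERT INTO log VALUES('b')");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    sqlite3_close(db);
    return 0;
}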

In our application SQLite is an ideal solution and outperforms the alternative of using an IPC or network connection to PostgreSQL. Its embedded nature gives us an advantage by minimizing network and disk traffic and by avoiding repeated process creation and destruction.

For your application my reaction would be to run PostgreSQL or similar (perhaps the new free version of DB2) on one of your servers and connect from the others.

Anil Gulati -X (agulati - Michael Page at Cisco) wrote:
I have certainly got no desire to change the design goals of SQLite.

The only reason I posted originally is that there was a strong
recommendation in the documentation to use it for web sites, which of
course risk concurrent writes... The trouble in my case is that I am
using multiple web servers, which therefore have to share data over the
network.

FWIW I don't interpret any posts on this thread as an attempt to change
SQLite, either. But there seem to be some who see value in more clearly
defining *when* SQLite *does* work. I guess there is a lot of
enthusiasm for SQLite's ability and performance, and it's nice to be able
to prove that SQLite can actually compete in areas it was not even
designed for. Perhaps that's an argument for less complex design as a
generic software design strategy.

-----Original Message-----
From: David M X Green [mailto:[EMAIL PROTECTED]]
Sent: Sunday, 4 February 2007 1:17 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Appropriate uses for SQLite

I am new to this, but aren't these issues a matter of trying to get
SQLite to do what it is not designed for? I quote the book:

The Definitive Guide to SQLite - Chapter 1 - Networking: ".... Again,
most of these limitations are intentional; they are a result of SQLite's
design. Supporting high write concurrency, for example, brings with it a
great deal of complexity, and this runs counter to SQLite's simplicity
in design. Similarly, being an embedded database, SQLite intentionally
does _not support networking_ [my emphasis]. This should come as no
surprise. In short, what SQLite can't do is a direct result of what it
can. It was designed to operate as a modular, simple, compact, and
easy-to-use embedded relational database whose code base is within the
reach of the programmers using it. And in many respects it can do what
many other databases cannot, such as run in embedded environments where
actual power consumption is a limiting factor."
------
Is it really a good idea to network a database that relies on the OS
file system like this? Is it ever going to be safe enough?
--------------------
David M X Green


|||"Alex Roston" (2007-02-02 20:05) wrote: |||>>>

Scott Hess wrote:

On 2/2/07, Dennis Cote <[EMAIL PROTECTED]> wrote:

[EMAIL PROTECTED] wrote:

The problem is, not many network filesystems work correctly.

I'm sure someone knows which versions of NFS have working file locking, at least under Linux.

I doubt it is that easy. You need to line up a bunch of things in the right order: the right versions of NFS, locking services, perhaps the right kernel versions, the right config, etc.

IMO the _real_ solution would be a package which you could use to try to verify whether the system you have is actually delivering working file locking. Something like diskchecker (see http://brad.livejournal.com/2116715.html). The basic idea would be to have a set of networked processes exercising the APIs and looking for discrepancies. Admittedly, passing such a test only gets you a statistical assurance (maybe if you'd run the test for ten more minutes, or with another gig of data, it would have failed!), but failing such a test is a sure sign of a problem.
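To make that concrete, a probe could look something like this (purely hypothetical code, not diskchecker itself): take an exclusive POSIX fcntl() lock around a shared counter file, increment, unlock, repeat. Run several copies from different machines against the same file on the filesystem under test; if the final counter is less than the total passes across all copies, two writers overlapped and locking is broken.

/* Hypothetical lock probe. Passing is only statistical assurance;
   a short final counter is proof of broken locking. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define PASSES 10000

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file-on-network-fs>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDWR | O_CREAT, 0666);
    if (fd < 0) { perror("open"); return 1; }

    for (int i = 0; i < PASSES; i++) {
        struct flock fl;
        memset(&fl, 0, sizeof fl);   /* l_start = l_len = 0: whole file */
        fl.l_type = F_WRLCK;         /* exclusive lock */
        fl.l_whence = SEEK_SET;
        if (fcntl(fd, F_SETLKW, &fl) < 0) { perror("lock"); return 1; }

        /* read-increment-write the counter while holding the lock */
        char buf[32] = {0};
        long n = 0;
        if (pread(fd, buf, sizeof buf - 1, 0) > 0)
            n = atol(buf);
        int len = snprintf(buf, sizeof buf, "%ld\n", n + 1);
        pwrite(fd, buf, len, 0);
        fsync(fd);                   /* flush before releasing the lock */

        fl.l_type = F_UNLCK;
        fcntl(fd, F_SETLK, &fl);
    }
    printf("completed %d locked increments on %s\n", PASSES, argv[1]);
    return 0;
}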

-scott

That's a really useful idea, not only in itself, but also because it might lead to debugging some of the network issues and allow the developers to build a database of stuff that works: "Use Samba version foo, with patch bar, and avoid the Fooberry 6 network cards." Or whatever.

My suspicion, in the earlier case with Windows and Linux clients, is that Windows didn't handle the locking correctly, and that would be worth proving or disproving too.

An alternative approach is to use something a little more like a standard client-server model, where there's a "server" program which sits between (possibly multiple) workstations and the database itself. The "server" would queue requests to the database, make sure that no more than one write request at a time went to the database, and certify that writes have been properly made. (A rough sketch of the queueing idea follows below.)

The problem with this approach is that it eats quite heavily into SQLite's speed advantage, but if you've already put thousands of hours into developing your system, it might be a worthwhile hack.
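Just to show the shape of it, the core of such a server might look like the sketch below. This is an in-process toy, not a real implementation: a genuine version would run as a separate process and accept requests over a socket, and every name here is invented. The point is only that one dedicated writer thread drains a queue, so the database never sees more than one write at a time.

/* Write-queue sketch: clients call submit_write() instead of touching
   the database; a single writer thread serializes everything. */
#include <pthread.h>
#include <sqlite3.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct job { char *sql; struct job *next; };

static struct job *head, *tail;
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  qcond = PTHREAD_COND_INITIALIZER;
static int done;

/* Enqueue a write request and wake the writer. */
static void submit_write(const char *sql)
{
    struct job *j = malloc(sizeof *j);
    j->sql = strdup(sql);
    j->next = NULL;
    pthread_mutex_lock(&qlock);
    if (tail) tail->next = j; else head = j;
    tail = j;
    pthread_cond_signal(&qcond);
    pthread_mutex_unlock(&qlock);
}

/* The only thread that ever writes to the database. */
static void *writer(void *arg)
{
    sqlite3 *db = arg;
    for (;;) {
        pthread_mutex_lock(&qlock);
        while (!head && !done) pthread_cond_wait(&qcond, &qlock);
        if (!head) { pthread_mutex_unlock(&qlock); return NULL; }
        struct job *j = head;
        head = j->next;
        if (!head) tail = NULL;
        pthread_mutex_unlock(&qlock);

        char *err = NULL;
        if (sqlite3_exec(db, j->sql, NULL, NULL, &err) != SQLITE_OK) {
            fprintf(stderr, "write failed: %s\n", err);
            sqlite3_free(err);
        }
        free(j->sql);
        free(j);
    }
}

int main(void)
{
    sqlite3 *db;
    pthread_t w;

    if (sqlite3_open("queued.db", &db) != SQLITE_OK) return 1;
    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(v TEXT)", NULL, NULL, NULL);
    pthread_create(&w, NULL, writer, db);

    submit_write("INSERT INTO t VALUES('x')");
    submit_write("INSERT INTO t VALUES('y')");

    pthread_mutex_lock(&qlock);      /* tell the writer to drain and exit */
    done = 1;
    pthread_cond_broadcast(&qcond);
    pthread_mutex_unlock(&qlock);
    pthread_join(w, NULL);
    sqlite3_close(db);
    return 0;
}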

Alex
