Thank you all for the responses, and please accept my apologies; I have
obviously not achieved my intended level of clarity in the scenario
description I provided.

Firstly, the most precise definition I can give of 'multi-user' here is:
'multiple, concurrent and distributed users'.

The key point that I did not make explicit is that the central database
will also be a SQLite database, and thus a synchronisation algorithm will
not avoid the locking issues I mentioned in my original post. Potentially
we could have around fifty client applications all attempting simultaneous
writes to a SQLite database stored on a file server. Our application
generates an audit trail, so these writes will be happening very regularly
during normal operation - this isn't just a case of generating a few COUNT
queries.
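
To make the write pressure concrete, here is a minimal sketch (not our
actual code; the audit_trail table and its columns are invented names) of
what each client effectively does against the shared file. A busy timeout
only mitigates the contention; it does nothing about the documentation's
warnings on file locking over network filesystems:

/* Minimal sketch, not our actual code: what each of the ~50 clients is
 * effectively doing against the shared database file.  The audit_trail
 * table and its columns are invented names for illustration only. */
#include <stdio.h>
#include <sqlite3.h>

int write_audit_row(const char *db_path, const char *msg)
{
    sqlite3 *db = NULL;
    char *sql = NULL, *errmsg = NULL;

    int rc = sqlite3_open(db_path, &db);
    if (rc != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return rc;
    }

    /* Wait up to 5 seconds for the write lock before giving up. */
    sqlite3_busy_timeout(db, 5000);

    sql = sqlite3_mprintf(
        "INSERT INTO audit_trail(logged_at, message) "
        "VALUES(datetime('now'), %Q);", msg);
    rc = sqlite3_exec(db, sql, NULL, NULL, &errmsg);

    if (rc == SQLITE_BUSY) {
        /* Another client held the lock for the whole 5 seconds. */
        fprintf(stderr, "database is locked; audit row not written\n");
    } else if (rc != SQLITE_OK) {
        fprintf(stderr, "insert failed: %s\n", errmsg);
    }

    sqlite3_free(errmsg);
    sqlite3_free(sql);
    sqlite3_close(db);
    return rc;
}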

I'm certainly not against an eventually-consistent implementation (in fact
I can see benefits both for our existing use cases and for new use cases
that such an implementation would open up). However, my colleague is
attempting to create the impression of immediate transactional consistency
by running the synchronisation procedure once every second.
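
As I understand the proposal, each client would run something like the
loop below (sync_with_central() is a placeholder name, not anything that
exists yet). At best this hides, rather than removes, the window of
inconsistency:

/* Rough sketch of the once-per-second scheme as I understand it.
 * sync_with_central() is a placeholder name for whatever the
 * synchronisation procedure would actually be. */
#include <unistd.h>

extern int sync_with_central(void);  /* hypothetical: push local changes, pull remote ones */

void sync_loop(volatile int *keep_running)
{
    while (*keep_running) {
        /* Anything read between two iterations can be up to ~1 second
         * stale, so this only gives the appearance of immediate
         * consistency - and with ~50 clients it also means the central
         * SQLite file takes a burst of writes every second. */
        sync_with_central();
        sleep(1);
    }
}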

I have read the documentation thoroughly and presented it to our
management team; however, this seems to have been to no avail, so this post
was something of an appeal to authority.

On Fri, Oct 19, 2012 at 4:21 PM, Nick Shaw <nick.s...@citysync.co.uk> wrote:

> Richard Hipp wrote:
> > Ben Morris <magospiet...@gmail.com> wrote:
> >> If anyone could share their honest opinions of both my suggested
> >> approach, and my colleague's, I would be very grateful.
> >
> > [snip]
> > Using a separate SQLite database on each client to serve as a local
> > cache of the master database and then periodically synchronizing with
> > the master is a very reasonable thing to do in many situations.  Such an
> > approach can be very bandwidth efficient (if implemented correctly) and
> > it has the huge advantage that it allows for disconnected operation -
> > it allows the clients to continue functioning when the network goes down.
> > [snip]
>
> I completely agree; this is what I do in our database application - the
> clients have a local SQLite copy of a central MSSQL database, so the
> clients can operate with the data when the network goes down (which on some
> customers' sites is a fairly regular occurrence!).  To avoid having to
> rectify duplicate rows / primary key violations / etc when down, we just
> mark the local database as 'read-only' when the link to MSSQL goes down so
> no changes can be made to it 'offline'.  Whenever the link is up, we poll
> the MSSQL database for changes every 15 seconds or so, so the clients'
> SQLite copies are pretty much always in sync.  It adds a bit of network
> traffic doing this every 15 seconds, but data doesn't change very often in
> our application so there's rarely anything more than a few COUNT queries
> going on.  Plus it means the client-side app can usually just query the
> local SQLite database instead of talking over the network to MSSQL all the
> time, which can dramatically speed things up on slow networks.
>
> So I see no fundamental problem in using this kind of approach.  But as
> Richard Hipp says, it depends on what your system's needs are.  Operating on
> an SQLite database from multiple networked clients (especially when on a
> Windows network) with data that is changing a lot is not advised when
> performance and concurrency are important factors.  (See
> http://www.sqlite.org/whentouse.html for specific details, as Simon
> Slavin recommended).
>
> Nick.
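
P.S. For anyone skimming Nick's description above, the 'read-only while
the link is down' rule boils down to something like the outline below.
The function names are invented and this is only meant to show the shape
of the scheme, not his actual code:

/* Outline only (invented function names, not Nick's code): the
 * "read-only while the link is down" rule he describes. */
#include <stdbool.h>
#include <unistd.h>

static volatile bool g_local_read_only = false;

extern bool is_central_link_up(void);         /* hypothetical connectivity probe */
extern void pull_changes_from_central(void);  /* hypothetical refresh of the local SQLite copy */

/* Checked before any local write; offline means the write is refused. */
bool local_write_allowed(void)
{
    return !g_local_read_only;
}

void poll_central(volatile bool *keep_running)
{
    while (*keep_running) {
        if (is_central_link_up()) {
            g_local_read_only = false;
            pull_changes_from_central();
        } else {
            /* Freeze the local copy so there are no offline edits to
             * reconcile later (duplicate rows, PK violations, etc). */
            g_local_read_only = true;
        }
        sleep(15);   /* matches the ~15 second polling Nick mentions */
    }
}
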
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
