"HLS" <sant9...@gmail.com> wrote in message
news:9cf5931e1003160705v38335b5g47a3d91193c28...@mail.gmail.com...

> ...we have a ISAM/BTREE database under our full
> reader/writer and exclusive locking controls...

From your posted code snippet I don't see any
"heavy SQL query usage" - so, first the question:
why do you want to change your current system?

If you only need something like a "fast entry" into
some of your data-file chunks (per FileNameAreaKey),
and then want to enumerate the (record) content of those
chunks in some "sort order" - why choose
a system that is based on SQL query processing?

There are other engines that fit such scenarios
better - Berkeley DB comes to mind, which in
its latest incarnations should be usable thread-safe too
(though I'm not sure about concurrency and locking in that engine).
Or just "write your own thing", if the goal is only
to achieve fast "ordered enumeration" of more or less
simple record structs hosted in files.

If SQL-based querying is (or is becoming) something that
would be "nice to have", then maybe consider other engines
that work "better over sockets" (since this mode is
built in) and don't have as many "locking restrictions"
as SQLite in concurrent scenarios (and that also support
record-level locking directly).

If it has to be SQLite - because it is nice, small, fast and
easy to deploy - then you should consider a completely
different approach to your current client-side
locking handling.

We also use an RPC server, with the SQLite engine
as the server-side backend - but we "completely isolate"
the server-side cursors from the client side. That means we
perform our SQL query at the server side, and first serialize
(copy) all the records of "the set the SQL string
describes" into a "transferable container object" (a Recordset,
or ResultSet). This container object's content is then
compressed and transferred over sockets to the client side
(after all the SQLite handles for the query in question have been
freed).
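A minimal sketch of that server-side step, assuming a plain
sqlite3 connection and zlib-compressed JSON as a stand-in for
the wire format (the original uses its own container object,
so the names and format here are illustrative only):

```python
import json
import sqlite3
import zlib

def execute_to_container(conn, sql, params=()):
    """Run the query server-side, copy every row of the result set
    into a plain container, then free the SQLite cursor before
    anything is transferred to the client."""
    cur = conn.execute(sql, params)
    container = {
        # field names, so the client-side container is self-describing
        "fields": [d[0] for d in cur.description],
        # full copy of the set the SQL string describes
        "rows": [list(row) for row in cur.fetchall()],
    }
    cur.close()  # no SQLite handles stay open between requests
    # compress the serialized container for transfer over the socket
    return zlib.compress(json.dumps(container).encode("utf-8"))

# usage sketch with an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, "b")])
payload = execute_to_container(conn, "SELECT id, name FROM t ORDER BY id")
```

The point of the copy-then-close order is exactly what the
paragraph above describes: by the time the payload leaves the
server, every SQLite handle for that query is already freed.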

This works fast and robustly; no SQLite handles are kept open
between requests. In fact it works faster than "serializing
and transferring each record separately over the sockets,
while keeping a bunch of server-side SQLite cursors alive,
which can only be moved forward".

At the client side, the transferred container (the Recordset)
is freely navigable (back and forth) - it supports its own
Find and Sort methods, contains all the field-type
descriptions, and so on...
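A toy version of such a disconnected client-side container
might look as follows - assuming the payload arrives as
zlib-compressed JSON (a hypothetical wire format chosen for
the sketch, not the original's), with field names carried
alongside the rows:

```python
import json
import zlib

class Recordset:
    """Disconnected client-side result container: freely navigable
    back and forth, with its own Find and Sort methods. The real
    container also carries full field-type descriptions; this
    sketch keeps only the field names."""

    def __init__(self, payload):
        data = json.loads(zlib.decompress(payload).decode("utf-8"))
        self.fields = data["fields"]
        self.rows = data["rows"]
        self.pos = 0  # current record position

    def move(self, delta):
        """Navigate back and forth (clamped to the set's bounds)."""
        self.pos = max(0, min(len(self.rows) - 1, self.pos + delta))
        return self.rows[self.pos]

    def find(self, field, value):
        """Return the first row where the given field equals value."""
        i = self.fields.index(field)
        return next((r for r in self.rows if r[i] == value), None)

    def sort(self, field, reverse=False):
        """Sort the whole set locally - no server round trip needed."""
        i = self.fields.index(field)
        self.rows.sort(key=lambda r: r[i], reverse=reverse)

# usage sketch with a hand-built payload
payload = zlib.compress(json.dumps(
    {"fields": ["id", "name"], "rows": [[2, "b"], [1, "a"]]}).encode("utf-8"))
rs = Recordset(payload)
rs.sort("id")
```

Since the container holds a full copy of the result set, Find
and Sort run entirely on the client, with no server cursor left
alive behind them.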

If we need record-level locking at the client side, we
just update a (server-side) LockedRecords table with
the currently locked RecordIDs (plus UserID, and some
extra fields for "timeout handling"). Based on
queries against this "locking table", we can "colorize"
the currently locked records appropriately in our GUI.
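A sketch of how such a locking table could work - the table
layout, column names, and timeout value here are my own
assumptions, not the original schema:

```python
import sqlite3
import time

LOCK_TIMEOUT = 300  # seconds; hypothetical "timeout handling" value

def init_locks(conn):
    """Create the server-side locking table: one row per locked record."""
    conn.execute("""CREATE TABLE IF NOT EXISTS LockedRecords (
        RecordID INTEGER PRIMARY KEY,
        UserID   TEXT NOT NULL,
        LockedAt REAL NOT NULL)""")

def try_lock(conn, record_id, user_id):
    """Acquire a record-level lock; stale locks past the timeout
    are cleared first, so a crashed client cannot hold a record
    forever."""
    now = time.time()
    conn.execute("DELETE FROM LockedRecords WHERE LockedAt < ?",
                 (now - LOCK_TIMEOUT,))
    try:
        conn.execute("INSERT INTO LockedRecords VALUES (?, ?, ?)",
                     (record_id, user_id, now))
        return True
    except sqlite3.IntegrityError:  # RecordID already locked
        return False

def locked_ids(conn):
    """Query the locking table, e.g. to colorize locked records in the GUI."""
    return {row[0] for row in conn.execute(
        "SELECT RecordID FROM LockedRecords")}

# usage sketch
conn = sqlite3.connect(":memory:")
init_locks(conn)
```

The PRIMARY KEY on RecordID makes lock acquisition atomic: the
second client to INSERT the same RecordID simply fails, and the
GUI can query `locked_ids` to color its rows.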

I understand that you "don't want to rewrite what you
currently have" - but (at least with an *SQLite* engine in the
backend) you will have to, IMO.

Nonetheless, such a "disconnected approach" has in the
meantime become something like "common practice" - witness all
the browser-based clients that work against
RPC servers, or application servers - or let's call them
*WebServers* ;-) ... over sockets (e.g. receiving JSON-
serialized result sets over HTTP). No matter what SQL
engine is working in the backend of such a website,
"client-side-controlled server cursors" are "a thing of the
past" IMO, since they "introduce more problems than
they solve".

Olaf Schmidt



_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users