https://bugs.kde.org/show_bug.cgi?id=322946

--- Comment #7 from Richard Mortimer <richm+...@oldelvet.org.uk> ---
(In reply to swatilodha27 from comment #6)
> (In reply to Richard Mortimer from comment #5)
... snip ...
> > 
> > - this also means that the referential integrity constraints/checks are
> > vital to the correct operation of the database, because they help to ensure
> > that no one adds duplicates or adds references to tags/images that may have
> > been added/deleted by someone else.
> 
> After your patch yesterday on
> https://bugs.kde.org/show_bug.cgi?id=355831, I think referential integrity
> is mostly in place, so duplicates can now be avoided.
Yes, that is correct. However, the consequence is that any attempt to add a
duplicate will fail when the SQL is executed. Hence we should be vigilant for
this and properly investigate any failures, because they may point to a bug
somewhere in the code.
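
To illustrate (this is only a sketch using Qt's SQL classes, with placeholder
table/column names rather than the real digikam schema or API), the point is
simply that a failed INSERT should be reported loudly instead of being
swallowed:

#include <QSqlDatabase>
#include <QSqlQuery>
#include <QSqlError>
#include <QString>
#include <QDebug>

// Hypothetical helper, not actual digikam code. A duplicate rejected by a
// unique constraint will make exec() return false; log it so a possible bug
// elsewhere in the code can be investigated.
static bool insertTag(QSqlDatabase db, int parentId, const QString& name)
{
    QSqlQuery query(db);
    query.prepare(QLatin1String("INSERT INTO Tags (pid, name) VALUES (:pid, :name)"));
    query.bindValue(QLatin1String(":pid"), parentId);
    query.bindValue(QLatin1String(":name"), name);

    if (!query.exec())
    {
        qWarning() << "Tag insert failed:" << query.lastError().text();
        return false;
    }

    return true;
}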

> 
> > 
> > - to my mind this is where the most important preparations for a
> > multi-digikam shared operation mode come in. The database access routines in
> > digikam should (and many are) be arranged as logical operations, e.g. add a
> > tag and give me the database id of the tag. Those operations have to be well
> > tested and able to work around changes made by other users. For instance the
> > add a tag operation SQL might fail because another digikam has already added
> > that tag. In that instance it should find the requested tag and load the
> > data/return the tag id from the other record.
> 
> For points 1) and 3), you wrote:
> In order to ensure that an SQL query made by a user doesn't fail (if another
> user has just made a change in the DB), I think we could use INSERT IGNORE
> INTO or REPLACE statements instead of a plain INSERT query?
> If there's no duplicate, INSERT IGNORE INTO will INSERT as usual; otherwise
> IGNORE will discard the statement silently without an error, thus preventing
> duplicate records.
> Or REPLACE could be better, in order to get the latest change in the DB.
Yes, those do work, although be mindful that many of these are not standard
SQL syntax and are hence less portable when it comes to supporting additional
databases in the future.

Any use of these also needs to ensure that the primary key of any
added/replaced rows is obtained as appropriate, so that the digikam code can
add it to its internal data structures.
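
As a rough sketch of the portable alternative (again using Qt's SQL classes;
the addOrFindTag name and the Tags table/columns are illustrative placeholders,
not actual digikam code): try the plain INSERT first, and if it fails because
another instance has already added the row, fall back to a SELECT so the caller
always receives the primary key.

#include <QSqlDatabase>
#include <QSqlQuery>
#include <QString>
#include <QVariant>

// Hypothetical "add or find" helper: always returns a usable tag id.
static int addOrFindTag(QSqlDatabase db, int parentId, const QString& name)
{
    QSqlQuery insert(db);
    insert.prepare(QLatin1String("INSERT INTO Tags (pid, name) VALUES (:pid, :name)"));
    insert.bindValue(QLatin1String(":pid"), parentId);
    insert.bindValue(QLatin1String(":name"), name);

    if (insert.exec())
    {
        return insert.lastInsertId().toInt();   // newly created tag id
    }

    // The insert failed, most likely a duplicate added concurrently: fetch it.
    QSqlQuery select(db);
    select.prepare(QLatin1String("SELECT id FROM Tags WHERE pid = :pid AND name = :name"));
    select.bindValue(QLatin1String(":pid"), parentId);
    select.bindValue(QLatin1String(":name"), name);

    if (select.exec() && select.next())
    {
        return select.value(0).toInt();         // tag added by the other user
    }

    return -1;  // genuine error; should be logged and investigated
}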

> 
> > - related to that, digikam would need the ability to "rescan" the database in
> > a similar manner to the way that it can rescan a filesystem. This allows it
> > to easily pick up changes made by other users and integrate them into the
> > Tags/Albums trees. I suspect that much of the images operation is already
> > working in that way because digikam generally queries the database to find
> > images etc.
> 
> How should the "re-scanning" of the DB work, so that it is updated for all
> users when changes are made by another user?
>  
It is each digikam instance that needs to re-scan the DB. I am not familiar
enough with the internals of digikam to comment on how best to do that at the
moment. It could be quite complex if it tries to merge changes into the
existing structure, or fairly simple if it rebuilds from scratch (although
rebuilding would have problems keeping selections, open branches and similar).

> > - the next thing related to concurrency is to consider the use of
> > transactions to ensure that a group of operations either completes or does
> > not. This may be of use when adding something like an image and there are a
> > number of different tables that need updating with the properties of that
> > image. This may not be strictly necessary because it might be enough that
> > each table entry is considered on its own merits. But without it some image
> > metadata may not get saved to the database if digikam crashes halfway
> > through an operation.
> 
> Using TRANSACTIONS for all SQL queries? Or would using them only when
> adding/deleting/updating a Tag/image/album suffice?
All queries do actually use transactions, but by default each statement is
executed in its own transaction (auto-commit mode). There should be no need to
change the majority of statements unless there are sequences that should either
be done as a single unit or not done at all.
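
As an illustration of such a sequence (a hypothetical sketch with made-up table
names, not the actual digikam schema), QSqlDatabase::transaction(), commit()
and rollback() can be used to group the statements so that either all of them
take effect or none do:

#include <QSqlDatabase>
#include <QSqlQuery>
#include <QString>
#include <QList>
#include <QVariant>

// Hypothetical sketch: record one image and its tag links atomically, so a
// crash or failure halfway through leaves nothing half-written.
static bool addImageWithTags(QSqlDatabase db, const QString& fileName,
                             const QList<int>& tagIds)
{
    if (!db.transaction())              // leave per-statement auto-commit
        return false;

    QSqlQuery image(db);
    image.prepare(QLatin1String("INSERT INTO Images (name) VALUES (:name)"));
    image.bindValue(QLatin1String(":name"), fileName);

    if (!image.exec())
    {
        db.rollback();
        return false;
    }

    const QVariant imageId = image.lastInsertId();

    for (int tagId : tagIds)
    {
        QSqlQuery link(db);
        link.prepare(QLatin1String("INSERT INTO ImageTags (imageid, tagid) VALUES (:image, :tag)"));
        link.bindValue(QLatin1String(":image"), imageId);
        link.bindValue(QLatin1String(":tag"), tagId);

        if (!link.exec())
        {
            db.rollback();              // none of the inserts take effect
            return false;
        }
    }

    return db.commit();                 // everything becomes visible at once
}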

> 
> > There is nothing stopping digikam allowing multiple users to have concurrent
> > access to the database. I've used it myself but at the moment care has to be
> > taken by the user to make it a mostly read-only sharing otherwise strange
> > behaviour may occur. But there is a lot of preparation that could be made to
> > prepare for more multi-user operation. Possibly the best way to do this is
> > to write unit tests of the individual operations on something like the tags
> > tree and within those tests explicitly inject changes as though they were
> > from another user to make sure that the code handles them correctly.
> 
> If I'm right, making the DB "read only" for users could definitely prevent a
> lot of the confusion that could occur in the DB.

Yes, that is certainly true and it would be a good experiment to try.

It does have some potential problems, and they may prove harder to overcome
than fixing things up to cope with changes made by other instances. A few
problems I can think of are:

- the read-only user will not be able to change the database version, so
read-only users need to use the same version as any read-write user.

- thumbnail generation would be difficult for a read-only user.

- any attempts to tag, rate or tweak images would need to be gracefully
declined.
