On 30/12/2016 at 03:47, Marcus Sundman wrote:
> Wait, what? Does this mean that I can't move files from one computer
> to another?
> I regularly switch between different computers, and those presumably
> have different library.db files, right?
> Does something bad happen if I edit the same file from 2 different
> computers? Will one overwrite changes made by the other even if I
> don't have the file open on different computers at the same time?
Yes, and I think that overwrite-with-old-data happened to me, although I
did not take note at the time of exactly what happened.
This can happen not only because of accessing a photo collection from
different computers.
Another reason: a photo collection on an external drive. Even on the
same computer, the mount point is not always the same. Reasons vary:
the USB device name changing depending on what was plugged in since
boot, duplicate disk labels, a USB device getting stuck and then being
unplugged and replugged.
As a result, the *same* library ends up containing a number of
duplicate history entries, and darktable overwrites XMP files that held
the actual up-to-date data.
Also, when using one mount point, photos from the other mount point
appear in the database as skulls.
# Scenario 1: changing computer, two libraries
(1) Mount the disk on computer 1, edit a photo.
(2) Unmount, mount on computer 2, edit the photo; darktable cannot
notify the other computer's database.
(3) Go back to computer 1, mount the disk, open darktable: your edits
from (2) are lost.
# Scenario 2: same computer and library, two mount points
(1) Mount the disk, edit a photo.
(2) Unmount, mount at a different path, edit the photo; darktable
thinks it is a different photo.
(3) Mount at the first path, open darktable: your edits from (2) are
lost.
# Analysis
The problem comes from the current implementation assuming that, when
the external world changes, its local beliefs ("references by full
paths are stable", "I am the one that edited it last", "I know its
history better than the XMP") take priority.
These are reasons to consider the per-image information in the database
a cache, and the XMP files the primary information.
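To make that concrete, here is a minimal sketch (nothing here is
darktable's actual schema; the paths, collection ID and key scheme are
invented for illustration). Keying photos by absolute path gives the
same file two identities under two mount points, while keying by
collection ID plus relative path stays stable:

    import os.path

    # The same photo seen under two mount points (hypothetical paths).
    path_boot1 = "/media/usb0/photos/2016/dsc0001.raw"
    path_boot2 = "/media/usb1/photos/2016/dsc0001.raw"

    # Keyed by absolute path, the database sees two different photos:
    # hence the duplicated history and the skulls.
    assert path_boot1 != path_boot2

    # Keyed by (collection ID, path relative to the collection root),
    # the identity is stable whatever the mount point.
    collection_id = "c0ffee42"  # unique ID stored with the collection
    rel1 = os.path.relpath(path_boot1, "/media/usb0/photos")
    rel2 = os.path.relpath(path_boot2, "/media/usb1/photos")
    assert (collection_id, rel1) == (collection_id, rel2)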
# Ways to solve this
Way 1, today: having darktable read the XMP file first avoids that
loss. This is the safest choice, and safe choices should be the default
in sane software, leaving savvy users to switch to "optimized but
unsafe" choices like assuming the database is more up-to-date.
Way 2, since darktable 2.2: a workaround that gives up fast queries on
large collections:

    darktable --library :memory:

This keeps the library database in memory only, so no stale per-photo
data persists between sessions and the XMP files effectively become
the primary storage.
Thank you Matthieu Moy for pointing out the library split in darktable
2.2. It's a step toward better solutions.
Way 3: the best of both worlds, by fixing the broken assumptions. It is
compatible with moving between computers and mount points, and it
allows big collections (tags, etc.) that need quick query capabilities.
That reference problem is not specific to darktable. Music library
managers, git, git-annex, virtualbox, vmware, docker and the like
solved it by storing all information related to a
collection/VM/container *at the root* of the collection, with a unique
ID, independent of the host system, mount point and other factors.
In the darktable context, that would mean storing per-photo information
not in the user's home directory but in a per-collection database at
the root of the collection, with only relative paths stored in the
database.
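A minimal sketch of that idea, assuming nothing about darktable's
actual code (the database file name, table and column names below are
all invented):

    import os
    import sqlite3
    import uuid

    def open_collection(root):
        """Open (or create) the library stored at the collection root,
        so it travels with the photos themselves."""
        db = sqlite3.connect(os.path.join(root, ".photo-library.db"))
        db.execute("CREATE TABLE IF NOT EXISTS collection (id TEXT)")
        # Only paths relative to the root, never absolute ones.
        db.execute("CREATE TABLE IF NOT EXISTS images "
                   "(relpath TEXT PRIMARY KEY, history TEXT)")
        if db.execute("SELECT id FROM collection").fetchone() is None:
            # A unique ID, independent of host system and mount point.
            db.execute("INSERT INTO collection VALUES (?)",
                       (str(uuid.uuid4()),))
            db.commit()
        return db

    def record_edit(db, root, abs_path, history):
        # Store only the path relative to the collection root.
        rel = os.path.relpath(abs_path, root)
        db.execute("INSERT OR REPLACE INTO images VALUES (?, ?)",
                   (rel, history))
        db.commit()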
That requires the user, once, to point at a folder and tell darktable
"there is a collection that has this folder as root".
You can then open your collections on any machine, at any mount point:
no more duplication, no skulls, no lost work.
Yet the potentially huge collection has its own per-photo library. You
can still query your photos for tags, etc., very quickly.
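Continuing the sketch above (again with an invented tags schema), such
queries remain local SQL against the per-collection database, so they
stay fast even on a very large collection:

    def photos_with_tag(db, tag):
        # Hypothetical tags table; the index keeps lookups fast even
        # with hundreds of thousands of photos.
        db.execute("CREATE TABLE IF NOT EXISTS tags (relpath TEXT, tag TEXT)")
        db.execute("CREATE INDEX IF NOT EXISTS tags_by_tag ON tags (tag)")
        return [relpath for (relpath,) in
                db.execute("SELECT relpath FROM tags WHERE tag = ?", (tag,))]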
What do you think?
--
Stéphane Gourichon