On 3/25/20 11:07 AM, Andrei POPESCU wrote:
If a file is corrupted, deleted, etc. in one place, that will be
propagated to all copies.

Depending on the features provided by the synchronisation tool they
could be *a part* of a backup solution.

See http://taobackup.com for what a complete backup solution should
provide.

Kind regards,
Andrei


Sorry, but again, I don't see the problem.
If I choose to delete a file on one machine, I will want the same on the rest of the machines. What's the problem with that?


On 3/25/20 12:49 PM, Dan Purgert wrote:
I don't think he meant to imply using external-to-you "cloud" providers
(gdrive, dropbox), but rather creating his own personal "cloud".

Be it something pretty -- Nextcloud, for example -- or something
utilitarian (a central NFS or SSHFS server holding all the data).

I've used Nextcloud in these situations; it's actually pretty good over
slower connections.  Since everything is a (machine-)local copy in
addition to being stored centrally, something I work on "here(tm)" gets
updated "everywhere" shortly after I've saved the document.

I've only really ever run into problems with it when there was a
godawful slow connection to a machine that'd been offline for 2 weeks
while I was on vacation (and I forgot to spin it up at home before
heading out).




I have also installed *Nextcloud* locally in the past (on a Raspberry Pi) and it really worked amazingly well! In fact, it worked so well that I never got around to trying *Syncthing*.
Honestly, I could not understand their differences.
Syncthing seems more restrictive than Nextcloud because it does not have the "cloud" (web UI) function that Nextcloud offers. However, I would really like to try Syncthing to see what it actually offers, and to be able to compare it with Nextcloud.
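For what it's worth, trying Syncthing on Debian is fairly quick. A minimal sketch, assuming the stock Debian package and the systemd user unit it ships (your setup may differ):

```shell
# Sketch only: install Syncthing from the Debian repositories
sudo apt install syncthing

# Start it as your own user; the package ships a systemd user unit
systemctl --user enable --now syncthing.service

# The admin web UI (for configuration only, not web access to files)
# then listens on http://127.0.0.1:8384
```

Folders and peer devices are then paired through that admin UI, which is also where the difference from Nextcloud shows: there is no central server holding the files, only peers syncing with each other.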

What is your opinion?



On 3/25/20 7:44 PM, Linux-Fan wrote:
Hello,

there have been multiple answers already, so forgive me if my post does not
seem to add anything valuable. Still, it bugs me that many different
solutions were proposed without their advantages and disadvantages being
given.

I think an important factor is how the systems' "online status" (in the
sense of power and networking) is to be considered. Are both systems
online simultaneously? Or are both systems online at the same time only
for synchronization?

I can think of different solutions depending on what is actually
wanted/needed:

* Cluster File Systems.
   People have already mentioned Ceph (which is more of an object store
   and thus slow on small files, IIRC). I can add OCFS (Oracle Cluster
   File System) to the list, although it is not so easy to set up.
   Cluster file systems make sense if both systems are online at the same
   time and should both access a common file system. Often, cluster
   file systems want a "third" machine for doing the actual storage work
   (e.g. an iSCSI target for OCFS). I have also tried out GFS2 in the past,
   but it is a PITA to set up!

* Synchronization Tools.
   These are tools that are invoked explicitly to perform the
   synchronization. They make sense if both systems are online at the
   same time only for synchronization... if not, one will need to deal
   with "both changed" conflicts manually. I have no experience with
   syncthing (mentioned in the thread) -- syncthing might have a solution
   for this...

* Network File Systems.
   If you have a constellation of system1 and system2 where
   system2 being online means system1 is online, too, then you might
   install a "file server" (NFS or similar) on system1 and share
   files through this mechanism. Of all the approaches proposed, I
   would recommend this as being the least complex in operation,
   although that does not mean it is the least complex to set up.
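To illustrate the NFS route, a minimal sketch, assuming hypothetical hosts system1 (server) and system2 (client) and an example export path /srv/shared on a typical home LAN:

```shell
# Sketch only: minimal NFS share between two always-on machines.
# On system1 (the file server):
sudo apt install nfs-kernel-server
echo '/srv/shared 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# On system2 (the client):
sudo apt install nfs-common
sudo mkdir -p /mnt/shared
sudo mount -t nfs system1:/srv/shared /mnt/shared
```

The export network and options here are examples; in practice one would tighten them to the actual client addresses.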

* "Cloud"-like file synchronization.
   These usually require a "third" server, too. And in my experience,
   whenever one is using "synchronized" files for non-trivial data
   processing (e.g. creating and reading a lot of files, storing a
   database, accessing the data with many processes...) most of these
   systems will fail one way or another (up to causing data loss). Yet,
   most people using such systems do not seem to have these issues :)
   These tools are useful in scenarios where there is no guarantee for
   any machine being online the same time as the other although this is
   achieved at the cost of running a "third" machine 24/7...

HTH and YMMV
Linux-Fan


I really liked your post! It is a *detailed* and comprehensive answer.

In my use case, I have a safe and secure remote server machine (a VM), plus a Raspberry Pi at home, both running continuously, 24/7.
I have three computers: one at work, a laptop, and a desktop.
What I want is for the three computers to have specific directories and files shared/synchronized
( probably the whole /home directory of my user ).
I would also occasionally like to share/send my files - securely - to third parties.
What do you think is the best approach for me?
Maybe Syncthing + Nextcloud?


Thank you.
Tasos
