On Wed, 27 Dec 2017, Simon Slavin wrote:

> I understand that ZFS does this too, though I’ve never used ZFS.

ZFS currently clones at the filesystem level. Filesystems are cheap to create and delete and consume only the space they require. Once a filesystem has been cloned, only modified file blocks take new storage space.
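
As a toy illustration of why cloned filesystems are so cheap (a Python sketch of the general copy-on-write idea, not ZFS's actual implementation):

    import hashlib

    class BlockStore:
        """Toy copy-on-write store: filesystems are lists of block
        references, and blocks are shared via reference counting."""

        def __init__(self):
            self.blocks = {}       # content hash -> (data, refcount)
            self.filesystems = {}  # name -> list of content hashes

        def _put(self, data):
            key = hashlib.sha256(data).hexdigest()
            stored, refs = self.blocks.get(key, (data, 0))
            self.blocks[key] = (stored, refs + 1)
            return key

        def create(self, name, blocks):
            self.filesystems[name] = [self._put(b) for b in blocks]

        def clone(self, src, dst):
            # Cloning only bumps reference counts; no data is copied.
            self.filesystems[dst] = [self._put(self.blocks[k][0])
                                     for k in self.filesystems[src]]

        def write(self, name, index, data):
            # Copy-on-write: only the modified block takes new space.
            old = self.filesystems[name][index]
            d, refs = self.blocks[old]
            self.blocks[old] = (d, refs - 1)
            self.filesystems[name][index] = self._put(data)

    store = BlockStore()
    store.create("projects", [b"a" * 4096, b"b" * 4096])
    store.clone("projects", "copy")      # near-instant, no blocks copied
    store.write("copy", 0, b"c" * 4096)  # allocates exactly one new block
    print(len(store.blocks))             # 3 distinct blocks stored, not 4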

ZFS and some other storage pools/filesystems optionally support de-duplication at the block level, so copying a block can amount to incrementing a reference count. The application may do quite a lot of work copying the data (slow), but the underlying store can recognize that a block matches existing copies and avoid storing it again. Inserting just one byte early in a changed file may foil de-duplication, though, because de-duplication matches fixed-size aligned blocks: the insertion shifts every subsequent block boundary, so none of the later blocks match their stored counterparts.
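
A quick Python demonstration of that failure mode (the 4 KiB block size and the hashing are stand-ins for whatever the store actually does):

    import hashlib, os

    BLOCK = 4096  # assumed fixed block size for the de-duplicating store

    def block_hashes(data):
        # Hash fixed-size, aligned blocks, as block-level dedup does.
        return {hashlib.sha256(data[i:i + BLOCK]).hexdigest()
                for i in range(0, len(data), BLOCK)}

    original = os.urandom(16384)  # 16 KiB of sample file content
    shifted = b"X" + original     # same content, one byte inserted early

    shared = block_hashes(original) & block_hashes(shifted)
    print(len(shared))  # 0 -- every boundary moved, so no block matches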

Filesystem tricks still do not solve the most common problem: the master repository is usually accessed over a network, and networks are usually slow.

Any DVCS incurs a penalty when the goal is to check out a particular version of the files from a remote server and the repository is large. A centralized VCS like CVS or SVN delivers just the requested versions of the files (as long as the server remains available and working), whereas with a DVCS the whole repository history normally needs to be duplicated first.
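
Back-of-envelope arithmetic, with invented figures, shows how quickly the clone cost dominates:

    # Invented figures: 1000 revisions averaging 2 MiB of compressed
    # history apiece, and a 50 MiB working tree at the requested version.
    revisions, delta_mib, checkout_mib = 1000, 2, 50

    centralized_mib = checkout_mib      # CVS/SVN sends one version
    dvcs_mib = revisions * delta_mib    # a DVCS clone sends all history

    print(f"centralized: {centralized_mib} MiB; clone: {dvcs_mib} MiB")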

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/