On 09/05/2016 03:00 AM, Gour wrote:
[snip]
> Now, when I got rid of ownCloud and replaced it with something lighter
> to sync my calendars/contacts with the phone, I plan to manually cp
> media files to my computer, and I am considering putting all those
> photos/videos as unversioned files in a Fossil repo.
> 
> I use 64-bit Linux (Debian Sid) with the XFS filesystem, so I wonder
> if there are any recommended limits on the number of files kept in the
> repo and/or the size of the Fossil repo?

I'm not a software developer, so my technical insight into these issues
may be dangerously insufficient in places, but a couple of data points
gleaned from the web might be relevant to the discussion (see the
excerpts [1] and [2] quoted at the bottom of this message).

The two issues that stand out [to me] are:

A. The Fossil repository might have a maximum size for any single
[checked-in] file on the order of 1-2 GB: SQLite's default limit is 1
billion bytes, with a hard ceiling of 2^31-1 bytes (~2 GiB) ([2],
"Maximum length of a string or BLOB"); see the sketch below.

B. I suspect the storage requirement will be roughly twice (2x) the
data size: the data is stored once in the Fossil database, and another
copy exists in the filesystem [as a check-out]. For example, 50 GB of
photos would then imply on the order of 100 GB of disk.

In a situation where many large files are being managed, those two
issues might become significant.
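
Regarding issue A, here is a minimal sketch of probing the blob-length
limit of a given SQLite build; it is my own illustration (not anything
from the Fossil docs) and assumes Python 3.11+ for
Connection.getlimit():

    import sqlite3

    # Ask the linked SQLite library for its current maximum
    # string/BLOB length (SQLITE_LIMIT_LENGTH).
    con = sqlite3.connect(":memory:")
    max_len = con.getlimit(sqlite3.SQLITE_LIMIT_LENGTH)
    print(f"max string/BLOB length: {max_len:,} bytes")
    # Typically prints 1,000,000,000 (the SQLITE_MAX_LENGTH default).

Note that Fossil links in its own copy of SQLite, so the limit Fossil
was compiled with is what actually matters for a repository.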

I can imagine broad classes of applications in various scientific
endeavors where there could easily be ~5 TB of instrumentation data in
~10,000 files. Current storage technology makes this very cost
effective.
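
(Back-of-the-envelope: 5 TB across 10,000 files averages ~500 MB per
file, which fits under SQLite's default 1 GB blob ceiling, though any
outsized outliers would not.)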

While the measurement data would probably need to be considered
immutable, the file metadata, annotations, analysis scripts, commentary,
discussion, issues, etc. would probably undergo heavy edits and
revisions. It would be valuable if the file-management/data-sharing
system provided audit trails, i.e., file integrity checks with
chain-of-custody-like verification.
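
To make the chain-of-custody idea concrete, here is a rough sketch of
generating a checksum manifest; this is my own illustration, not a
feature Fossil provides. The manifest file itself could be committed to
Fossil so its revision history is versioned even if the bulk data lives
elsewhere:

    import hashlib
    import os
    import sys

    # Walk a directory tree and print "sha256  relative/path" lines,
    # similar in spirit to sha256sum. Committing this manifest to
    # Fossil would give an auditable history of the data files.
    def manifest(root):
        for dirpath, _dirs, files in os.walk(root):
            for name in sorted(files):
                path = os.path.join(dirpath, name)
                h = hashlib.sha256()
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
                print(f"{h.hexdigest()}  {os.path.relpath(path, root)}")

    if __name__ == "__main__":
        manifest(sys.argv[1] if len(sys.argv) > 1 else ".")

(Invoked as, say, "python3 manifest.py /path/to/data > MANIFEST", where
the script name is hypothetical.)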

Fossil does almost everything needed to become a killer app for people
who need a consolidated [and comprehensible] tool to manage, share, and
collaborate around a common data set (e.g., research groups in
university laboratories).

There are a couple of other capabilities that would extend the use
cases and user base even further, but I suspect I am already straying
too far off-topic for general toleration.


[1]:
http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/ch02s04.html
------------------------------------------------------
2.4. XFS Limits
32 bit Linux

    Maximum File Size = 16TB (O_LARGEFILE)
    Maximum Filesystem Size = 16TB

64 bit Linux

    Maximum File Size = 9 Million TB = 9 ExaB
    Maximum Filesystem Size = 18 Million TB = 18 ExaB
------------------------------------------------------


And [2]: https://sqlite.org/limits.html
------------------------------------------------------
Maximum length of a string or BLOB

The maximum number of bytes in a string or BLOB in SQLite is defined by
the preprocessor macro SQLITE_MAX_LENGTH. The default value of this
macro is 1 billion (1 thousand million or 1,000,000,000). You can raise
or lower this value at compile-time using a command-line option like this:

    -DSQLITE_MAX_LENGTH=123456789

The current implementation will only support a string or BLOB length up
to 2^31-1 or 2147483647. And some built-in functions such as hex() might
fail well before that point. In security-sensitive applications it is
best not to try to increase the maximum string and blob length. In fact,
you might do well to lower the maximum string and blob length to
something more in the range of a few million if that is possible.
------------------------------------------------------
