Hi Joerg,
On 10 March 2015 at 17:03, Joerg Sonnenberger <jo...@britannica.bec.de> wrote:
> On Tue, Mar 10, 2015 at 06:14:10PM -0400, Ron W wrote:
>> On Tue, Mar 10, 2015 at 5:54 PM, Joerg Sonnenberger <jo...@britannica.bec.de
>> > wrote:
>> >
>> > Unless "large files" means "larger than 2GB", I don't believe there is
>> > one. I haven't run into a case where I wanted to use version control
>> > systems for handling such files yet...
>>
>>
>> I posted a comment and he replied that his commits sometimes have several
>> hundred files per commit. Maybe a problem with rolling back an interrupted
>> operation?
>
> There are two issues with commits involving many files that I am aware
> of. One is the mtime updating being done stupidly, the other is the
> manifest parsing issue discussed recently. Not sure if it is either of
> those two.
>

I created a test repo with a few commits:
0. Initial commit: 800 nearly empty text files ('test' repeated about
80 times in each)
1. 2000 copies of server.wiki from Fossil (a 14K file)
2. 10 files of 100 MB each, created with dd

The remaining commits just renamed and moved the files into their
respective directories.
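
In case anyone wants to reproduce this, the setup was roughly along the
lines of the sketch below. Treat it as illustrative rather than the
literal commands I ran: the file names, the /dev/urandom source and the
commit messages here are made up.

  # create a repository and open a working checkout
  fossil new test.fossil
  mkdir test-checkout && cd test-checkout
  fossil open ../test.fossil

  # commit 0: 800 small text files, 'test' repeated about 80 times in each
  for i in $(seq 1 800); do
    for j in $(seq 1 80); do echo test; done > small-$i.txt
  done
  fossil addremove
  fossil commit -m "800 small text files"

  # commit 1: 2000 copies of server.wiki (about 14K each)
  for i in $(seq 1 2000); do cp ../server.wiki wiki-copy-$i.wiki; done
  fossil addremove
  fossil commit -m "2000 copies of server.wiki"

  # commit 2: ten 100 MB files created with dd
  for i in $(seq 1 10); do
    dd if=/dev/urandom of=big-$i.bin bs=1M count=100
  done
  fossil addremove
  time fossil commit -m "ten 100 MB files"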

Committing the 100 MB files took about 300 seconds, but this ran on a
single-core machine with 512 MB to 1 GB of RAM.

dbstat reports:
repository-size:   948781056 bytes (948.8MB)
artifact-count:    23 (stored as 15 full text and 8 delta blobs)
artifact-sizes:    41118055 average, 104857600 max, 945715285 bytes (945.7MB) total
compression-ratio: 9:10
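
For reference, those numbers are just what Fossil's dbstat command
prints when run from inside an open checkout, e.g.:

  cd test-checkout
  fossil dbstat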

I know this is unscientific, but it seems Fossil handled the large
commits pretty well. If this were done on a machine with an SSD and
more RAM, it might not take quite as long.


best!
j

-- 
-------
inum: 883510009027723
sip: jungleboo...@sip2sip.info
xmpp: jungle-boo...@jit.si
