On 05.01.2012 01:35, Daniel Shahaf wrote:
> Greg Stein wrote on Wed, Jan 04, 2012 at 19:08:41 -0500:
>> (*) I'd be interested in what they are doing. Is this a use case we might
>> see elsewhere? Or is this something silly they are doing, that would not be
>> seen elsewhere?
> They use the Apache CMS[1] to manage their site[2,3].  Some changes (for
> example r1227057[4]) cause a site-wide rebuild; for example, the
> aforementioned r1227057 yielded r801653[5], which touches 19754 files
> (according to 'svn log -qv' on svn.eu).

That by itself should not stress SVN too much;
after all, I did merges of that size with 1.4.

The critical factors in the CMS use case are
the number of directories within the working copy
and the number of files touched (each of which
needs to be read to check for actual changes).
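
To make that cost model concrete, here is a purely
illustrative sketch (not SVN's actual working copy code) of
change detection: every directory must be traversed and, in
the worst case, every file read in full, so runtime grows
with both counts. The pristine_checksums map is a
hypothetical stand-in for the working copy's pristine store.

    import hashlib
    import os

    def file_checksum(path, bufsize=1 << 16):
        # Hash the file contents for comparison against the pristine copy.
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(bufsize), b""):
                h.update(chunk)
        return h.hexdigest()

    def find_changed_files(wc_root, pristine_checksums):
        # Walk every directory. A real client can skip reading files whose
        # recorded size/mtime still match, but it must still stat each entry,
        # so directory and file counts dominate the cost either way.
        changed = []
        for dirpath, dirnames, filenames in os.walk(wc_root):
            dirnames[:] = [d for d in dirnames if d != ".svn"]  # skip admin area
            for name in filenames:
                path = os.path.join(dirpath, name)
                rel = os.path.relpath(path, wc_root)
                if file_checksum(path) != pristine_checksums.get(rel):
                    changed.append(rel)
        return changed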

As a data point: KDE /trunk at r1,000,000 is 9.7GB
(>300k files in <100k directories). With a tuned svn://
implementation, I get a full export in 17 seconds(!)
- running 4 exports in parallel [*]. Adding more clients
will ultimately saturate my machine at 6GB/s of
sustained svn:// traffic over localhost.
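
To make the arithmetic behind those figures explicit
(the sizes and times are the ones quoted above):

    # One batch of 4 parallel exports moves 4 x 9.7GB in 17s:
    export_size_gb = 9.7      # KDE /trunk at r1,000,000
    wall_time_s = 17.0        # wall time for one batch
    parallel_clients = 4

    batch_throughput = parallel_clients * export_size_gb / wall_time_s
    print("aggregate: %.1f GB/s" % batch_throughput)  # ~2.3 GB/s

so a single 4-client batch runs at roughly 2.3GB/s, leaving
headroom up to the ~6GB/s saturation point.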

So we should really focus on optimizing the working
copy implementation to support large working copies.

-- Stefan^2.

[*] The client side is an "empty" export command
that simply accepts and discards any data coming
from the server.
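
The ad-hoc client used for these numbers is not shown here,
but newer Subversion releases ship an svnbench tool whose
null-export subcommand does the same thing: it drives a full
export and discards all data on the client side. A minimal
timing harness might look like this sketch (the URL and run
count are placeholders, not the exact setup measured above):

    import subprocess
    import time

    def time_null_exports(url="svn://localhost/kde/trunk", runs=4):
        # Start several null-exports at once; each pulls the full tree
        # over svn:// but writes nothing to disk.
        start = time.time()
        procs = [subprocess.Popen(["svnbench", "null-export", url],
                                  stdout=subprocess.DEVNULL)
                 for _ in range(runs)]
        for p in procs:
            if p.wait() != 0:
                raise RuntimeError("null-export failed")
        return time.time() - start

    if __name__ == "__main__":
        print("4 parallel exports took %.1fs" % time_null_exports())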
