On 2/10/2012 10:21 AM, Bruce Lysik wrote:
Hi,

I'm considering deploying 3 front-ends, all mounting the same SAN volume
for the repo. (The SAN handles flock() and fcntl() correctly.) These 3 FEs
would be load balanced by a Citrix Netscaler. (At least for http(s).)

The largest issue I've run into using shared storage is not flock() and fcntl() but atomic renames. To finish a commit, fsfs atomically renames three files, the last being the 'current' file. On some shared filesystems that last rename produces windows of time during which hosts not doing the commit see no 'current' file at all. When that happens, any svn read or write operation on the repository will temporarily fail.
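You can probe this behavior directly, outside of svn. Here's a minimal sketch (paths and iteration counts are arbitrary): one thread repeatedly renames a fresh temp file over 'current' while the main thread checks whether 'current' is visible. On a POSIX-correct filesystem the rename is atomic and the reader should never see the file missing; run the same kind of probe across two hosts on your SAN mount to see whether it holds there.

```python
import os
import tempfile
import threading

def rename_loop(dirpath, stop, n=2000):
    # Writer: repeatedly replace 'current' via an atomic rename,
    # mimicking the last step of an fsfs commit.
    cur = os.path.join(dirpath, "current")
    tmp = os.path.join(dirpath, "current.tmp")
    for i in range(n):
        with open(tmp, "w") as f:
            f.write(str(i))
        os.rename(tmp, cur)  # atomic on POSIX-correct filesystems
    stop.set()

def probe(dirpath, stop):
    # Reader: count windows where 'current' is not visible.
    cur = os.path.join(dirpath, "current")
    misses = 0
    while not stop.is_set():
        if not os.path.exists(cur):
            misses += 1
    return misses

with tempfile.TemporaryDirectory() as d:
    # Seed 'current' so the reader starts from a committed state.
    with open(os.path.join(d, "current"), "w") as f:
        f.write("0")
    stop = threading.Event()
    writer = threading.Thread(target=rename_loop, args=(d, stop))
    writer.start()
    misses = probe(d, stop)
    writer.join()
    print("windows where 'current' was missing:", misses)
```

On a local filesystem the count should be 0; a nonzero count on shared storage is exactly the failure mode described above.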

We had GPFS and it never failed to implement the POSIX requirements, but it was too slow for the number of commits we were pushing (6/sec), so we ended up going to a single-server solution with a standby. There was another filesystem, whose name I forget, that didn't implement atomic renames correctly and wasn't usable for svn.

Have you tested your SAN deployment?

What I would do is create an fsfs repository and, on one of your hosts, run a tight loop doing as many commits per second as you can, while another host, also in a tight loop, performs a read operation on the repository, such as fetching the log message of the HEAD revision.
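A sketch of the reader side of that test, driving the svn client via subprocess (the repository URL and working-copy path are placeholders; point them at your SAN-backed repository):

```python
import subprocess

# Placeholders: substitute the SAN-backed repository under test.
REPO_URL = "file:///var/tmp/test-repo"
WC_DIR = "/var/tmp/test-wc"

def commit_cmd(message):
    # Writer host: commit the working copy as fast as possible in a loop.
    return ["svn", "commit", "-m", message, WC_DIR]

def head_log_cmd():
    # Reader host: fetch the HEAD log message. A nonzero exit means the
    # reader hit a window where the 'current' file was not visible.
    return ["svn", "log", "-r", "HEAD", "--limit", "1", REPO_URL]

def reader_loop(iterations=1000):
    # Run the read in a tight loop and count failures.
    failures = 0
    for _ in range(iterations):
        rc = subprocess.run(head_log_cmd(), capture_output=True).returncode
        if rc != 0:
            failures += 1
    return failures
```

Any failures counted by reader_loop() while the other host is committing indicate the rename-visibility problem.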

If you want to test your flock() and fcntl() handling and see how it performs, do as many commits per second as you can into the same repo from two or more hosts. In this case, give the repository N directories and have each host modify a file in its own directory; that way you won't get any conflicts.
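The per-host layout for that contention test can be sketched like this (the working-copy path is hypothetical; each host gets its own checkout and its own directory inside the repository):

```python
import os
import subprocess

WC_DIR = "/var/tmp/test-wc"  # hypothetical; one checkout per host

def file_for_host(host_index):
    # Each host writes only inside its own directory, so concurrent
    # commits from N hosts never produce conflicts with each other.
    return os.path.join(WC_DIR, "dir-%d" % host_index, "data.txt")

def commit_once(host_index, iteration):
    # Modify this host's file and commit. Run this in a tight loop on
    # every host to exercise the shared storage's locking under load.
    with open(file_for_host(host_index), "w") as f:
        f.write("iteration %d\n" % iteration)
    subprocess.run(
        ["svn", "commit", "-m", "lock contention test", WC_DIR],
        check=True)
```

Since each host only ever touches its own dir-N/data.txt, the commits serialize on the repository's locks rather than on merge conflicts, which is what you want to measure.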

How many commits per second are you expecting in practice?

Blair

--
Blair Zajac, Ph.D.
CTO, OrcaWare Technologies
<bl...@orcaware.com>
Subversion training, consulting and support
http://www.orcaware.com/svn/
