Hi, I'm running a rather large Subversion installation (>1000 repos, >1 TB total storage, >1000 users). I'm looking for advice on how to improve our availability. It's currently quite good, but I'm a bit worried about e.g. hardware failure. Performance is not an issue for now (machine load <2, at about 20 SVN requests per second).
How are you out there doing it? I have a few ideas, but I'd like to hear the opinions of others and maybe get some pointers for further research. My ideas:

* Use svnsync replication. Drawback: failover needs manual intervention, and hook scripts have to be transferred manually.
* Use an active-passive cluster with e.g. Heartbeat. That would be possible; the data reside on a SAN anyway.
* Use an active-active cluster of two separate machines sharing the storage via a cluster filesystem (GFS? GPFS?), with an HA load balancer in front. Probably the sexiest solution ;-)
* I already discarded the idea of active-active over NFS, since I can remember the reports of strange failures...

Does anyone here know how other large/high-profile sites (e.g. the Apache Software Foundation) ensure availability? I couldn't find any hints on the website...

Cheers,

Ulli
-- 
Ullrich Jans, Specialist, IT-A
Phone: +49 9131 7701-6627, mailto:ullrich.j...@elektrobit.com
Fax: +49 9131 7701-6333, www.elektrobit.com
Elektrobit Automotive GmbH, Am Wolfsmantel 46, 91058 Erlangen, Germany
Managing Directors: Alexander Kocher, Gregor Zink
Register Court Fürth HRB 4886
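P.S., for the archives: the svnsync option above is usually set up roughly as follows. This is only a sketch; the repository paths, hostname, and the `syncuser` account are placeholders, and in practice you'd loop this over all >1000 repositories and trigger `svnsync sync` from a post-commit hook or cron job on the master.

```shell
# On the mirror host: create an empty repository to receive the mirror
svnadmin create /srv/svn-mirror/myrepo

# svnsync records bookkeeping data in revision properties, so the mirror
# must allow revprop changes; a minimal pre-revprop-change hook that
# permits them only for the dedicated sync user:
cat > /srv/svn-mirror/myrepo/hooks/pre-revprop-change <<'EOF'
#!/bin/sh
# Hook arguments: REPOS-PATH REVISION USER PROPNAME ACTION
[ "$3" = "syncuser" ] && exit 0
echo "Revision property changes are reserved for syncuser" >&2
exit 1
EOF
chmod +x /srv/svn-mirror/myrepo/hooks/pre-revprop-change

# Register the mirror against the master, then pull all revisions
svnsync initialize --username syncuser \
    file:///srv/svn-mirror/myrepo https://master.example.com/svn/myrepo
svnsync sync file:///srv/svn-mirror/myrepo
```

Note that this replicates repository data only; as mentioned, hook scripts and config files under conf/ still have to be copied separately (e.g. via rsync), which is exactly the drawback that makes me hesitate.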