On Oct 28, 2014 2:56 AM, "Justin Lloyd" <jclb...@gmail.com> wrote:
> Again, perhaps there's a better way to architect a resilient set of wikis
> that would simplify this design, and I'm open to all suggestions, so far
> what I have is the best I've come up with in the time I've managed these
> wikis.

You should take a look at how WMF handles this. The wiki farm stuff (aka
hetdeploy/mwmultiversion) is documented at
https://wikitech.wikimedia.org/wiki/Heterogeneous_deployment (probably up
to date, since it was edited recently).

Also:
* https://github.com/wikimedia/operations-mediawiki-config/blob/master/multiversion/MWMultiVersion.php
* https://github.com/wikimedia/operations-mediawiki-config/blob/master/wmf-config/InitialiseSettings.php
* https://github.com/wikimedia/operations-mediawiki-config/blob/master/wmf-config/CommonSettings.php

(and see the other files in that directory too)

WMF essentially runs its own S3 equivalent, called Swift (an OpenStack
project), for media storage. I don't see why you couldn't use S3 in a
similar way; running your own Swift cluster might be overkill for you.
Rackspace also operates a public Swift cluster you could use.

It doesn't look like there's currently an S3 file backend in MediaWiki,
but you're welcome to add one:

* https://github.com/wikimedia/mediawiki/tree/master/includes/filebackend
* https://github.com/wikimedia/operations-mediawiki-config/blob/master/wmf-config/filebackend.php
* https://wikitech.wikimedia.org/wiki/Media_storage
* $wgUploadPath in our InitialiseSettings.php (linked above)
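For a rough idea of what pointing MediaWiki at a Swift cluster looks like, here's a sketch using the SwiftFileBackend class that ships with MediaWiki core. All hostnames, credentials, and the wiki/backend names are placeholders, not WMF's actual values; see the real filebackend.php linked above.

```php
<?php
// Sketch only: placeholder auth URL, credentials, and names.
// Registers a Swift-based file backend with MediaWiki.
$wgFileBackends[] = array(
	'name'         => 'swift-backend',     // arbitrary local name
	'class'        => 'SwiftFileBackend',
	'wikiId'       => 'examplewiki',
	'lockManager'  => 'nullLockManager',
	'swiftAuthUrl' => 'http://swift.example.org/auth/v1.0', // placeholder
	'swiftUser'    => 'mw:media',          // placeholder account:user
	'swiftKey'     => 'secretkey',         // placeholder key
);

// Point the local file repo at that backend instead of the filesystem.
$wgLocalFileRepo = array(
	'class'   => 'LocalRepo',
	'name'    => 'local',
	'backend' => 'swift-backend',
	'url'     => 'http://uploads.example.org/images', // placeholder
);
```

An S3 backend would slot in the same way: a FileBackend subclass plus an entry like the one above with S3 endpoint and credentials instead of the Swift ones.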

In the short term you could load-balance reads with a more frequent rsync;
then you only have a single point of failure for writes and for rendering
file description pages, I think. ($wgUploadPath would point to a load
balancer in front of a cluster of machines that each keep a local copy of
all the images.)
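A minimal sketch of that interim setup, assuming a primary host (called web1.example.org here, a placeholder) that receives all uploads, and secondary hosts that pull copies on a schedule:

```shell
# Hypothetical crontab entry on each secondary web host; hostname and
# paths are placeholders. Pulls the image tree from the primary every
# five minutes so any machine behind the load balancer can serve reads.
# --delete propagates removals as well, so the mirrors don't drift.
*/5 * * * * rsync -a --delete web1.example.org:/srv/mediawiki/images/ /srv/mediawiki/images/
```

Uploads would still have to land on the primary, so that host remains the SPOF for writes until you move to a shared backend like Swift or S3.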

-Jeremy
_______________________________________________
MediaWiki-l mailing list
To unsubscribe, go to:
https://lists.wikimedia.org/mailman/listinfo/mediawiki-l
