I would think the scripts Isarra mentioned would work best for MediaWiki.
That said, there are a large number of 'archiving' tools that let you take
copies of websites for historical preservation and backup:
e.g.
https://github.com/pirate/ArchiveBox/wiki/Web-Archiving-Community#The-Master-Lists
There is also MWOffliner, which might work for you; it is maintained by
the Kiwix team, IIRC.
On Sun, 17 Mar 2019 at 00:04, Isarra Yos wrote:
Not exactly the same thing, but there is a set of grabber scripts
designed to get and import an entire wiki via the API; these are
currently mostly just MediaWiki maintenance scripts and a few Python scripts
that expect to be used with an actual second MediaWiki instance.
That being said, it
Unfortunately that doesn’t really work unless it’s a fairly small wiki. If
it’s bigger, the export request times out.
What I was hoping for was a tool that uses either Special:Export or the
API to build a dump file using multiple requests.
I can probably write something to do the same thing.
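Something along these lines, maybe (an untested sketch: the URL is a
placeholder, and it assumes the wiki's api.php is reachable with export
enabled):

#!/usr/bin/env python3
# Untested sketch: build a dump in batches via the action API
# (action=query & generator=allpages & export). The wiki URL is a
# placeholder; gaplimit controls how many pages go into each request.
import requests

API_URL = "https://wiki.example.org/w/api.php"  # placeholder

session = requests.Session()
params = {
    "action": "query",
    "format": "json",
    "generator": "allpages",
    "gaplimit": 50,   # small batches, so no single request can time out
    "export": 1,      # wrap each batch of pages in export XML
}

batch = 0
while True:
    data = session.get(API_URL, params=params, timeout=60).json()
    xml = data["query"]["export"]["*"]   # <mediawiki>...</mediawiki> for this batch
    with open("dump-%04d.xml" % batch, "w", encoding="utf-8") as fh:
        fh.write(xml)
    batch += 1
    if "continue" not in data:
        break
    params.update(data["continue"])      # gapcontinue moves us to the next batch

Each batch file is a self-contained export, so they could be fed to
importDump.php one at a time, or the <page> elements merged into a single
file afterwards.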
XML export, if enabled, should do the trick.
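For instance, something like this should fetch the current revisions of a
list of pages in one request (an untested sketch; the URL and titles are
placeholders, and the form fields are from memory of the Special:Export form):

import requests

# Untested sketch: a single POST to Special:Export with a newline-separated
# list of titles. URL and titles are placeholders; the form fields used
# here (pages, curonly) are from memory.
EXPORT_URL = "https://wiki.example.org/index.php?title=Special:Export"

titles = ["Main_Page", "Help:Contents"]   # in practice, the full page list
resp = requests.post(
    EXPORT_URL,
    data={
        "pages": "\n".join(titles),   # one title per line
        "curonly": 1,                 # current revisions only
    },
    timeout=300,
)
with open("export.xml", "w", encoding="utf-8") as fh:
    fh.write(resp.text)

Everything comes back in a single response, so this is simplest on smaller
wikis.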
On Fri, 15 Mar 2019 at 18:04, John wrote:
Are there any tools that allow you to create a dump of a MediaWiki install
without requiring direct database access? My primary focus is on
creating a backup of the wiki contents.
Thanks