mwDumper is also essential for anyone willing to replicate a wiki locally, for
any purpose. There are alternatives such as xml2sql and importDump.php, but
mwDumper is the best in terms of correctness and completeness, and sometimes
speed as well.
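For anyone who wants to try it, this is roughly how I invoke it; the jar name,
dump file, database name, and user are placeholders for your own setup:

  # Convert the XML dump to SQL on the fly and pipe it straight into MySQL.
  java -jar mwdumper.jar --format=sql:1.5 frwiki-latest-pages-articles.xml.bz2 \
    | mysql -u wikiuser -p wikidb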
bilal
--
Verily, with hardship comes ease.
On Fri,
I am still able to import the dumps using the old mwDumper (modified to fix
the contributor field), and xml2sql also works and is quite fast. importDump.php,
I think, continues after it breaks.
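For reference, a typical importDump.php run looks roughly like this (executed
from the wiki's root directory; the dump filename is a placeholder):

  # Decompress the dump on the fly and feed it to importDump.php on stdin.
  bzip2 -dc frwiki-latest-pages-articles.xml.bz2 | php maintenance/importDump.php
  # Rebuild derived tables afterwards.
  php maintenance/rebuildrecentchanges.php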
bilal
--
Verily, with hardship comes ease.
On Thu, Feb 4, 2010 at 9:24 PM, Chad innocentkil...@gmail.com
If you can download the whole file to your PC, then you can just
import a portion of it and stop the import after some time; mwDumper
reports progress in increments of 1,000 pages.
If you do not have enough bandwidth to download the whole thing, you
can use Special:Export instead.
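Special:Export can be fetched directly over HTTP, one page at a time; for
example (the page title is just a placeholder):

  # Grab the export XML for a single page and save it locally.
  curl 'http://en.wikipedia.org/wiki/Special:Export/Paris' > Paris.xml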
I think having access to them on the Commons repository is much easier to
handle, and a subset should be good enough.
Handling 11 TB of images requires serious research infrastructure to store and
work with all of them.
Maybe a special API, or advanced API functions, would allow people enough access.
I have been using the dumps for a few months, and I think this kind of dump is
much better than a torrent. Yes, bandwidth can be saved, but I do not think
the cost of bandwidth is higher than the cost of maintaining the torrents.
If people are not seeding the files, then the value of the torrents is limited.
Greetings,
This template is not being parsed on my local French wiki. Any hints on
that? I did several searches on Google but could not find the problem.
bilal
--
Verily, with hardship comes ease.
I have used xml2sql, mwDumper, importDump.php, and the Python script (WikiXRay)
to import the dumps. The two fastest are xml2sql and the Python script; the
best results come from importDump.php.
mwDumper is slow, but it gives good results.
I have not done any import with the new redirect tag.
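In case it helps, my xml2sql runs look roughly like the lines below; the -m
(mysqlimport output) flag is from memory, so double-check xml2sql --help on
your build before relying on it:

  # Stream the dump into xml2sql; -m should write tab-separated files
  # (page.txt, revision.txt, text.txt) suitable for mysqlimport.
  bzip2 -dc frwiki-latest-pages-articles.xml.bz2 | xml2sql -m
  mysqlimport --local -u wikiuser -p wikidb page.txt revision.txt text.txt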
bilal
On Fri, Oct 9,
I think Google applications use the data crawled into their own databases, and
of course Google has almost all of the latest updates of Wikipedia articles,
with all their information including the geo addresses.
bilal
On Fri, Oct 2, 2009 at 1:59 PM, Tei oscar.vi...@gmail.com wrote:
Hi Felipe,
Thanks for the great effort. This will save us hours of downloading and
importing older dumps.
bilal
On Tue, Jun 23, 2009 at 12:26 PM, Felipe Ortega glimmer_phoe...@yahoo.es wrote:
Hello.
Since just a few hours ago, a new public repository has been created to
host WikiXRay
On Fri, May 15, 2009 at 11:45 AM, Bilal Abdul Kader bila...@gmail.com
wrote:
Is it ethical?
How is it unethical? We take advantage of downtime to explain to our
readers that we rely on donations to keep the site running; there is
nothing dishonest about that.
On Tue, May 5, 2009 at 9:47 AM, Nikola Smolenski smole...@eunet.yu wrote:
Brion Vibber wrote:
It might be helpful for some language wikis to link in a free font this
way, since standard fonts supporting their script are often unavailable.
Right now on such sites there tends to be a little
Greetings,
I am trying to replicate enwiki locally, but I always get a CRC error when
extracting the page history file, enwiki-latest-pages-meta-history.xml.bz2
(http://download.wikimedia.org/enwiki/latest/enwiki-latest-pages-meta-history.xml.bz2).
Was anybody able to do so?
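One way to check whether the download itself is corrupt is to test the archive
before importing anything:

  # Verify the bzip2 archive's integrity (reads the whole file, so it takes a while).
  bzip2 -tv enwiki-latest-pages-meta-history.xml.bz2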
I am not sure if the
There is an issue with running a foreground JS thread that is super fast and
might send a lot of requests to the server. Heavy processing on the client
side would take load off the server (if possible), but it might
push a different load onto the server (in the presented example, of sending
This would be a great idea, as the library is actively updated and has a lot
of features for the front end.
On Wed, Apr 22, 2009 at 12:28 PM, Brian brian.min...@colorado.edu wrote:
Many extensions are now using the Yahoo User Interface library. It would be
nice if MediaWiki included it by default.
I have downloaded the history dump file (~150 GB) using Firefox on XP and
using wget on Ubuntu, and it works fine. I have also downloaded it using a
download manager on Vista, and it is fine as well.
A more probable reason is a file system limitation.
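For what it's worth, I download with resume support enabled; and the classic
filesystem limit to check for is FAT32, which caps files at 4 GB:

  # -c resumes a partial download instead of starting over.
  wget -c http://download.wikimedia.org/enwiki/latest/enwiki-latest-pages-meta-history.xml.bz2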
bilal
On Fri, Apr 10, 2009 at 3:49 PM, Finne
I have a decent server that is dedicated to a Wikipedia project that
depends on the fresh dumps. Can it be used in any way to speed up the process
of generating the dumps?
bilal
On Tue, Jan 27, 2009 at 2:24 PM, Christian Storm st...@iparadigms.com wrote:
On 1/4/09 6:20 AM, yegg at alum.mit.edu