How far back do you need to go?
On Sun, Jan 22, 2023 at 10:25 AM Adhittya Ramadhan
wrote:
>
> On 17 Jan 2023 at 11:23, "Eric Andrew Lewis" <
> eric.andrew.le...@gmail.com> wrote:
>
>> Hi,
>>
>> I am interested in performing analysis on recently created pages on
>> English Wikipedia.
>>
That eswiki page is in namespace "wgNamespaceNumber":104; the FR page is
"wgNamespaceNumber":116.
On Sun, Feb 13, 2022 at 2:17 PM Erik del Toro wrote:
> Hello.
>
> I am doing some conversions for the aarddict https://aarddict.org/ offline
> wikipedia and wiktionary app. I use mw2slob and the N0 files foun
Are you limiting your count to namespace 0?
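If it helps, here is a rough Python sketch (untested, the filename is just a placeholder) of counting only namespace-0, non-redirect pages while streaming a pages-articles dump:

import bz2
import xml.etree.ElementTree as ET

# Placeholder path; point this at the pages-articles dump you downloaded.
DUMP = "enwiki-latest-pages-articles.xml.bz2"

count = 0
with bz2.open(DUMP, "rb") as f:
    # iterparse streams the dump instead of loading it all into memory.
    for _, elem in ET.iterparse(f, events=("end",)):
        if elem.tag.rsplit("}", 1)[-1] == "page":
            ns_ok = False
            is_redirect = False
            for child in elem:
                tag = child.tag.rsplit("}", 1)[-1]
                if tag == "ns" and child.text == "0":
                    ns_ok = True
                elif tag == "redirect":
                    is_redirect = True
            # Count only content pages: namespace 0 and not a redirect.
            if ns_ok and not is_redirect:
                count += 1
            elem.clear()  # free memory as we go

print(count)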
On Thu, Aug 20, 2020 at 10:45 AM Yuki Kumagai
wrote:
> Hiya
>
> I have a question about the Wikipedia XML database dump. Apologies if this
> isn't an appropriate place for asking a question.
> On a wikipedia page, it's mentioned that the current number of
Normal editing won't cause issues, but a delete/move/restore or a history merge
can cause things to look out of order if you are using child/parent
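To make that concrete, here is a rough sketch (untested; the filename is a placeholder, and it assumes a stub-meta-history dump) that flags revisions whose parent revision ID is higher than their own ID, which is the kind of apparent disorder those operations can produce:

import gzip
import xml.etree.ElementTree as ET

# Placeholder: a stubs dump containing <revision> with <id> and <parentid>.
DUMP = "enwiki-latest-stub-meta-history.xml.gz"

with gzip.open(DUMP, "rb") as f:
    for _, elem in ET.iterparse(f, events=("end",)):
        if elem.tag.rsplit("}", 1)[-1] == "revision":
            rev_id = parent_id = None
            for child in elem:
                tag = child.tag.rsplit("}", 1)[-1]
                if tag == "id":
                    rev_id = int(child.text)
                elif tag == "parentid":
                    parent_id = int(child.text)
            # A parent with a higher ID than its child usually points at a
            # delete/restore, move, or history merge somewhere in the past.
            if rev_id is not None and parent_id is not None and parent_id > rev_id:
                print("out-of-order revision:", rev_id, "parent:", parent_id)
            elem.clear()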
On Fri, Jan 17, 2020 at 8:52 PM Christopher Wolfram
wrote:
> Thanks Ariel.
>
> So the revisions are in order of revision id which are assigned
> sequent
It looks like the page was deleted/restored, thus giving it a new page ID.
Originally, when pages were deleted the page_id was not kept, which caused
a new page_id to be issued when the page was restored. This behavior has since
been fixed and should no longer happen.
On Sat, Dec 3, 2016 at 8:47 AM, R
The dumps do not contain any images, just the description text that goes
along with them. Platonides got you a raw copy of the actual files
On Fri, Mar 18, 2016 at 6:31 AM, D. Hansen
wrote:
> Hi Platonides
>
> First let me thank you very much!
>
> 99 GByte, how could that be?
> On https://dumps.
Have you tried 7zip?
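If 7zip doesn't work out either, a rough fallback sketch in Python (it handles multi-stream bz2 and isn't bothered by the output growing past the limits some tools trip over; the filenames are placeholders):

import bz2
import shutil

SRC = "enwiki-latest-pages-articles.xml.bz2"   # placeholder input
DST = "enwiki-latest-pages-articles.xml"       # placeholder output

# Stream-decompress in chunks so memory use stays flat regardless of size.
with bz2.open(SRC, "rb") as fin, open(DST, "wb") as fout:
    shutil.copyfileobj(fin, fout, length=16 * 1024 * 1024)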
On Fri, Jan 15, 2016 at 8:30 PM, Richard Farmbrough <
rich...@farmbrough.co.uk> wrote:
> I have problems bunzip2ing pages-articles files. WinRAR fails at 37 GB, and
> bunzip2 fails at some point well past 14 GB, though it "helpfully" cleans up
> after itself.
>
> Bunzip2 v 1.0.6
>
> >b
The page in question is AfghanistanHistory; however, you are querying
mediawiki.org, not en.wikipedia.org. Also, it's far better to grab the page table
dump and use both of those.
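As a very rough sketch of the idea (naive parsing of the INSERT statements; a real script needs a proper parser for quoting and escaping, and the filenames are placeholders), combining the page table dump with the redirect table dump:

import gzip
import re

PAGE_SQL = "enwiki-latest-page.sql.gz"          # placeholder
REDIRECT_SQL = "enwiki-latest-redirect.sql.gz"  # placeholder

# Naive tuple matcher; does NOT handle every escaping corner case.
TUPLE_RE = re.compile(r"\((\d+),(\d+),'((?:[^'\\]|\\.)*)'")

id_to_title = {}
with gzip.open(PAGE_SQL, "rt", encoding="utf-8", errors="replace") as f:
    for line in f:
        if line.startswith("INSERT INTO"):
            # page rows start with (page_id, page_namespace, page_title, ...)
            for page_id, ns, title in TUPLE_RE.findall(line):
                if ns == "0":
                    id_to_title[int(page_id)] = title

edges = []
with gzip.open(REDIRECT_SQL, "rt", encoding="utf-8", errors="replace") as f:
    for line in f:
        if line.startswith("INSERT INTO"):
            # redirect rows start with (rd_from, rd_namespace, rd_title, ...)
            for rd_from, rd_ns, rd_title in TUPLE_RE.findall(line):
                src = id_to_title.get(int(rd_from))
                if src and rd_ns == "0":
                    edges.append((src, rd_title))

print(len(edges), "redirect edges in namespace 0")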
On Mon, Nov 16, 2015 at 7:05 AM, Alan Said wrote:
> Hi all,
> I want to recreate the redirect graph from the redirect sql f
OK, here is a report that is a few minutes old:
http://tools.wmflabs.org/betacommand-dev/reports/commonswiki_svg_list.txt.7z
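If you ever need to regenerate something like that yourself, a rough sketch using the API (untested, and it assumes list=allimages accepts a MIME filter):

import requests

API = "https://commons.wikimedia.org/w/api.php"
session = requests.Session()

params = {
    "action": "query",
    "list": "allimages",
    "aimime": "image/svg+xml",   # assumption: MIME filter on allimages
    "ailimit": "max",
    "format": "json",
}

# Follow the API's continuation until the whole list has been walked.
while True:
    data = session.get(API, params=params).json()
    for img in data["query"]["allimages"]:
        print(img["name"])
    if "continue" not in data:
        break
    params.update(data["continue"])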
On Sat, Jun 20, 2015 at 1:38 AM, Ariel T. Glenn
wrote:
> On Sat 20-06-2015, at 00:46 +0200, Federico Leva
> (Nemo) wrote:
> > D. Hansen, 19/06/2015 23:09:
> > > One sugg
I can run a database report on Monday, but keep in mind that the wiki isn't
static and what you want changes at a very rapid rate.
On Friday, June 19, 2015, D. Hansen wrote:
> Hi!
> I have tried to get a list of all .svg-files on commons.wikipedia.
>
> Of course I could just parse through commons,
All dumps are scheduled to run once a month; some just take longer than
others to run, and there are issues sometimes. I'm not sure why the order
really matters.
The WMF dumps all of its active databases (any database that is hosting a
site, regardless of the status of the site). If you want dumps/wik
You would probably need to pull that from Wikidata.
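A rough sketch of what that might look like (using wbgetentities to resolve a Simple English title to its enwiki counterpart via the Wikidata sitelinks; the title below is just an example):

import requests

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def enwiki_match(simple_title):
    """Return the English Wikipedia title paired with a Simple English one, if any."""
    params = {
        "action": "wbgetentities",
        "sites": "simplewiki",
        "titles": simple_title,
        "props": "sitelinks",
        "format": "json",
    }
    data = requests.get(WIKIDATA_API, params=params).json()
    for entity in data.get("entities", {}).values():
        sitelinks = entity.get("sitelinks", {})
        if "enwiki" in sitelinks:
            return sitelinks["enwiki"]["title"]
    return None

print(enwiki_match("Water"))  # example title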
On Fri, Oct 3, 2014 at 10:31 AM, Ditty Mathew wrote:
> I am trying to get paired articles from Simple English Wikipedia and
> English Wikipedia. For that I am looking for language links for Simple
> English Wikipedia. Is it available?
>
> with regard
http://en.wikipedia.org/w/api.php?action=query&meta=siteinfo&siprop=namespaces|namespacealiases&format=jsonfm
should be what you need.
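To cover every Wikimedia wiki, you would loop that query over the wikis you care about; a rough sketch (the wiki list here is just an example, the full list can come from action=sitematrix):

import requests

# Example subset; the complete list of wikis can be pulled from action=sitematrix.
WIKIS = ["en.wikipedia.org", "de.wikipedia.org", "commons.wikimedia.org"]

for host in WIKIS:
    params = {
        "action": "query",
        "meta": "siteinfo",
        "siprop": "namespaces|namespacealiases",
        "format": "json",
    }
    data = requests.get(f"https://{host}/w/api.php", params=params).json()
    namespaces = data["query"]["namespaces"]
    for ns_id, ns in sorted(namespaces.items(), key=lambda kv: int(kv[0])):
        print(host, ns_id, ns.get("*", ""))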
On Tue, Jul 2, 2013 at 3:11 PM, Byrial Jensen wrote:
> Hi, is there a file somewhere with a list of all namespace names and
> numbers for all the Wikimedia wikis
I am actually looking to rewrite that tool to avoid those bugs, which is
why I was asking :)
On Friday, November 9, 2012, Platonides wrote:
> On 09/11/12 20:21, John wrote:
> > I am looking to create a script for creating manual dumps for those
> > wikis that either dont or won
I am looking to create a script for creating manual dumps for those
wikis that either don't or won't publish their own dumps and that I don't
have server access to. To that end I am writing a Python dump creator;
however, I would like to ensure that my format is the same as the
existing one. I could revers
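One way to keep the format identical to the existing dumps is to let MediaWiki itself produce the export XML, e.g. via action=query with export; a rough sketch (untested, and the wiki URL and page list are placeholders):

import requests

API = "https://example-wiki.org/w/api.php"   # placeholder target wiki

def export_pages(titles):
    """Return MediaWiki export XML for the given page titles."""
    params = {
        "action": "query",
        "titles": "|".join(titles),
        "export": 1,
        "exportnowrap": 1,   # return raw <mediawiki> XML instead of a JSON wrapper
        "format": "json",
    }
    resp = requests.get(API, params=params)
    return resp.text

xml_chunk = export_pages(["Main Page", "Help:Contents"])
print(xml_chunk[:200])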
Take a look at http://en.wikipedia.org/w/api.php?action=parse; it is
exactly what you are looking for. Also, a 7 GB app is something you want
to CLEARLY state up front, since eating up that much device space / download
bandwidth is probably a problem for most users.
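For example, a rough sketch of fetching the rendered HTML of a single page with action=parse (untested; the page title is just an example):

import requests

API = "https://en.wikipedia.org/w/api.php"

params = {
    "action": "parse",
    "page": "Python (programming language)",   # example page
    "prop": "text",
    "format": "json",
    "formatversion": 2,
}
data = requests.get(API, params=params).json()
html = data["parse"]["text"]   # rendered HTML of the page body
print(html[:200])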
On Sun, Sep 9, 2012 at 3:07 PM, Roberto Flores w
I am CCing the toolserver user in question to see if they can re-run the query
On Sat, Sep 8, 2012 at 6:09 AM, Gregor Martynus wrote:
> Can anybody help or give me a hint whom I could ask?
>
> --
> Gregor Martynus
>
> On Friday, 7. September 2012 at 14:40, Gregor Martynus wrote:
>
> Yeah, that'd
I'm running a basic check (parsing every page in the wiki) using the
toolserver's 6-1 copy. I'll let you know if I see any issues.
John
On Thu, Jun 7, 2012 at 2:45 PM, Felipe Ortega wrote:
> > De: Platonides
> > Para: Felipe Ortega
> > CC: "xmldatadumps-l@lists.wikim
ear) to leave enough time to remove copyvios.
I'd also like an account there; read-only would be OK so I can watch
the progress!
--
John Vandenberg