<http://it.wikipedia.org/w/api.php?action=query&export&exportnowrap&prop=revisions&rvprop=timestamp%7Ccontent&titles=India>
Regards
Amit
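The export URL quoted above can be assembled programmatically. A minimal sketch, assuming only the endpoint and parameters visible in that URL (the helper name `build_export_url` is mine, not from the thread):

```python
from urllib.parse import urlencode

def build_export_url(title, api="http://it.wikipedia.org/w/api.php"):
    """Build a MediaWiki API URL that exports a page's content plus
    revision timestamps, mirroring the URL quoted in this thread."""
    params = {
        "action": "query",
        "export": "",
        "exportnowrap": "",
        "prop": "revisions",
        "rvprop": "timestamp|content",  # urlencode escapes '|' as %7C
        "titles": title,
    }
    return api + "?" + urlencode(params)

print(build_export_url("India"))
```

Fetching that URL (e.g. with `urllib.request.urlopen`) returns the page's wikitext wrapped in MediaWiki's XML export format.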
From: gaurav pant <golup...@gmail.com>
Date: Thursday, March 7, 2013 1:04 PM
To: Amit Kumar <amitk...@yahoo-inc.com>
Hi Gaurav,
I don't know your exact use case, but here's what we do. There is an IRC channel
where Wikipedia continuously lists pages as and when they change. We listen
to the IRC channel and every hour make a list of unique pages that changed.
Wikipedia's MediaWiki software gives you an API to
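The hourly dedup step described above can be sketched as follows. The line format and helper names here are my assumptions for illustration; the real feed on irc.wikimedia.org uses a richer, colour-coded message format, but page titles do appear in `[[...]]` brackets:

```python
def parse_rc_title(line):
    """Extract the page title from a simplified recent-changes line.
    Assumes titles appear as '[[Title]]', a simplification of the
    actual irc.wikimedia.org feed format."""
    start = line.find("[[")
    end = line.find("]]", start)
    if start == -1 or end == -1:
        return None
    return line[start + 2:end]

def unique_changed_pages(lines):
    """Collect the unique page titles from one hour of feed lines,
    preserving first-seen order."""
    seen = {}
    for line in lines:
        title = parse_rc_title(line)
        if title is not None and title not in seen:
            seen[title] = True
    return list(seen)

feed = [
    "[[India]] edited by UserA",
    "[[Manmohan Singh]] edited by UserB",
    "[[India]] edited by UserC",
]
print(unique_changed_pages(feed))  # ['India', 'Manmohan Singh']
```

The hourly list can then be fed to the API export URL discussed elsewhere in this thread, one title at a time.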
extractDates above extractNumber works. I'm attaching a patch with the said
changes. I have tested it on the above-mentioned Manmohan Singh page. It fixes
this date-parsing bug, but I don't know if this change will affect some other
use case. Can someone look and tell me if the ch
com>
Date: Wednesday, December 5, 2012 1:53 PM
To: Amit Kumar <amitk...@yahoo-inc.com>
Cc: "dbpedia-discuss...@lists.sf.net" <dbpedia-discuss...@lists.sf.net>
Subject: Re: [Dbpedia-discussion] Error in running the
Hi all,
I have been running the DBpedia extraction framework on Wikipedia dumps. It was
working fine, but recently it has started giving errors. I'm using the following
version of the code:
--
changeset: 1610:59dda670016e
branch: wiktionary
tag: tip
parent: 1609:a71b7
appreciated.
Cheers
Pablo
On Mar 19, 2012 10:38 AM, "Amit Kumar" wrote:
Hi,
We have been trying to set up an instance of DBpedia to continuously extract data
from Wikipedia dumps/updates. While going through the output, we observed that
the image extractor was only picking up the first image for any page.
I can see commented out code present in the ImageExtractor whic
Regards
Amit kumar
Tech Lead @ Yahoo!
Sent from Samsung Mobile
N_OPTS=-Xmx rather than JAVA_OPTS
>
> The dump also comes in 27 files rather than one big one. You can use
> those instead.
>
> --
> @tommychheng
> qwiki.com
>
>
> On Wed, Nov 30, 2011 at 11:01 PM, Amit Kumar wrote:
>>
>> Hi Pablo,
>> Thanks for your
.dbpedia.extraction.dump.Extract
-Xmx1024m
Cheers,
Pablo
On Thu, Dec 1, 2011 at 8:01 AM, Amit Kumar wrote:
Hi Pablo,
Thanks for your valuable input. I got the MediaWiki thing working and am able
to run the abstract extractor as
n mappings.dbpedia.org. That, plus getting a stable version of the
framework tested and run would probably explain the choice of periodicity.
Best,
Pablo
On Tue, Nov 22, 2011 at 12:03 PM, Amit Kumar wrote:
Hey everyone,
I’m trying to set up the DBpedia extraction framework, as I’m interested in
getting structured data from already-downloaded Wikipedia dumps. As per my
understanding, I need to work in the ‘dump’ directory of the codebase. I have
tried to reverse engineer (given Scala is new for me