Hi Morsey/All,

I came across this thread, where you commented that the problem was solved,
but how you fixed it is not mentioned.

http://sourceforge.net/mailarchive/forum.php?thread_name=4E36B592.2020208%40informatik.uni-leipzig.de&forum_name=dbp-spotlight-users

I am facing the same problem during the build. Could you please let me know
how you resolved it?

Thanks


On Sun, Feb 24, 2013 at 12:36 PM, Dimitris Kontokostas <jimk...@gmail.com> wrote:

> Hi Gaurav,
>
> I recently created dumps for Dutch [1] and Greek [2], so the extractors are
> working fine; this is probably a configuration issue as well.
> I am not aware of any Perl-based script; maybe you should ask on a Wikipedia
> or Perl-related mailing list for that.
>
> Best,
> Dimitris
>
> [1] http://nl.dbpedia.org/downloads/nlwiki/20130205/
> [2] http://wiki.el.dbpedia.org/downloads/20130208/
>
>
> On Sat, Feb 23, 2013 at 3:40 PM, gaurav pant <golup...@gmail.com> wrote:
>
>> Hi Morsey/Dimitris/All,
>>
>> Can you suggest a Perl-based API for extracting information from these
>> Wikimedia dumps, as that extractor is still not working properly for me?
>>
>> My requirement is to get some basic information about a search term and a
>> short abstract for the title. I am fine with writing a script to parse
>> English-language pages, but for other languages I am facing a lot of
>> problems using Perl.
>>
>>
>> Thanks.
>>
>> On Fri, Feb 22, 2013 at 1:38 AM, Mohamed Morsey <
>> mor...@informatik.uni-leipzig.de> wrote:
>>
>>> Hi Gaurav,
>>>
>>>
>>> On 02/21/2013 03:05 PM, gaurav pant wrote:
>>>
>>>> Hi Morsey/Dimitris,
>>>>
>>>> I have tried a fresh installation on another machine and it was finally
>>>> successful. I followed the same steps, but I don't know why it could not
>>>> be installed on the other machine.
>>>>
>>>> Anyway, thanks for being so patient with me...
>>>>
>>>> I need one more bit of help from you: I do not want to download using
>>>> this module, because I have a pre-existing dataset that has already been
>>>> downloaded. I want to extract information directly from the
>>>> "eswiki-20130208-pages-articles.xml.bz2" dump. Can I do this?
>>>>
>>>> If there is any start-up tutorial, please let me know. Also, let me know
>>>> whether it is possible to get the information in RDF format from these
>>>> pages; that would be of great help.
>>>>
>>>
>>> You can use the guide Dimitris suggested before [1] and go directly to
>>> the last step, since you already have the dumps.
>>> One more thing: the file you mentioned is the dump of the Spanish
>>> Wikipedia, so you can configure the framework to extract only from
>>> specific language(s) by adapting file [2].
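>>>
>>> As a rough illustration only (the property names below are my assumption;
>>> please check the actual keys in [2]), the relevant lines could look
>>> something like:
>>>
>>>     # directory containing the already-downloaded dump
>>>     base-dir=/data/wikipedia-dumps
>>>     # extract only from the Spanish Wikipedia dump
>>>     languages=es
>>>     # the dump file to read within that directory
>>>     source=pages-articles.xml.bz2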
>>>
>>>
>>>>
>>>> Thanks again... :)
>>>>
>>>
>>> [1] https://github.com/dbpedia/extraction-framework/wiki/Extraction-Instructions
>>> [2] http://dbpedia.hg.sourceforge.net/hgweb/dbpedia/extraction_framework/file/d244cce11e6a/dump/extraction.default.properties
>>>
>>>
>>> --
>>> Kind Regards
>>> Mohamed Morsey
>>> Department of Computer Science
>>> University of Leipzig
>>>
>>>
>>
>>
>> --
>> Regards
>> Gaurav Pant
>> +91-7709196607,+91-9405757794
>>
>
>
>
> --
> Kontokostas Dimitris
>



-- 
Regards
Gaurav Pant
+91-7709196607,+91-9405757794