IMHO, the days when you had to invest time into crawling Wikipedia
have long passed. I'd recommend using DBpedia, which has already
extracted a lot of structured data from Wikipedia. They also have a
tool for adjusting and tuning their parsers: http://mappings.dbpedia.org .
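
For example, here is a minimal sketch (my own illustration, not part of
any official DBpedia docs) of fetching one article abstract from
DBpedia's public SPARQL endpoint using only the Python standard
library; the endpoint URL and example resource are assumptions on my
part, so check dbpedia.org for the current address and usage policy:

import json
import urllib.parse
import urllib.request

# DBpedia's public SPARQL endpoint (assumed current address).
ENDPOINT = "https://dbpedia.org/sparql"

# Ask for the English abstract of the article on Berlin.
query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?abstract WHERE {
  <http://dbpedia.org/resource/Berlin> dbo:abstract ?abstract .
  FILTER (lang(?abstract) = "en")
}
"""

params = urllib.parse.urlencode({
    "query": query,
    "format": "application/sparql-results+json",
})
with urllib.request.urlopen(ENDPOINT + "?" + params) as resp:
    data = json.load(resp)

for row in data["results"]["bindings"]:
    print(row["abstract"]["value"][:200])

That way you get structured data back without hitting Wikipedia at all.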
-----
Yury Katkov, WikiVote



On Wed, Sep 11, 2013 at 7:12 AM, Wenqin Ye <wenqin...@gmail.com> wrote:
> If we are creating an AI app that needs to get information, would we be
> allowed to crawl Wikipedia for it? The app would probably be a search
> query of some kind that gives information back to the user, and one of
> the sites used would be Wikipedia. The app would use parts of Wikipedia's
> articles, send that info back to the user, and give them a link to click
> if they want to visit the full article. Each user can only query/search
> once per second; however, the collective user base might query Wikipedia
> more often than that, so this web crawler may collectively exceed one
> request per second across all users. Would this be allowed?

_______________________________________________
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l
