Hi,

I think WWW::Mechanize will be perfect for you.
It supports performing a sequence of page fetches, including following links
and submitting forms. Each fetched page is parsed and its links and forms
are extracted.

See http://search.cpan.org/~petdance/WWW-Mechanize-1.30/lib/WWW/Mechanize.pm

and some useful examples at

http://search.cpan.org/~petdance/WWW-Mechanize-1.30/lib/WWW/Mechanize/Examples.pod
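
To tie it to your use case below, here is a minimal sketch of a crawler that
records matching file URLs in MySQL and honours a skip list. The start URL,
skip-list hosts, database name, credentials, table name, and column are all
placeholders of my own invention; replace them with whatever fits your setup.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use WWW::Mechanize;
    use URI;
    use DBI;

    # Placeholder settings -- adjust to your own environment.
    my @queue      = ('http://www.example.com/');
    my @skip_sites = ('www.badsite.example');
    my $file_re    = qr/\.mp3$/i;

    # Assumes a table like: CREATE TABLE found_urls (url VARCHAR(255) UNIQUE)
    my $dbh = DBI->connect('dbi:mysql:database=spider;host=localhost',
                           'user', 'password', { RaiseError => 1 });
    my $ins = $dbh->prepare('INSERT IGNORE INTO found_urls (url) VALUES (?)');

    my $mech = WWW::Mechanize->new( autocheck => 0 );
    my %seen;

    while ( my $url = shift @queue ) {
        next if $seen{$url}++;                 # don't revisit pages

        my $uri = URI->new($url);
        next unless $uri->scheme and $uri->scheme =~ /^https?$/;
        next if grep { $uri->host eq $_ } @skip_sites;   # the skip list

        $mech->get($url);
        next unless $mech->success and $mech->is_html;

        for my $link ( $mech->links ) {
            my $abs = $link->url_abs->as_string;
            if ( $abs =~ $file_re ) {
                $ins->execute($abs);    # store the matching URL
            }
            else {
                push @queue, $abs;      # keep crawling
            }
        }
    }

For anything beyond a quick experiment you would also want to respect
robots.txt (see WWW::RobotRules or LWP::RobotUA) and put a cap on crawl
depth, since the queue above grows without bound.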
 


Yaron Kahanovitch
----- Original Message -----
From: "perl pra" <[EMAIL PROTECTED]>
To: "Beginners List" <beginners@perl.org>
Sent: Monday, August 27, 2007 10:13:25 AM (GMT+0200) Auto-Detected
Subject: perl script to crawl the web

Hi Gurus,

I need to write a Perl script for a web spider that will crawl the web
looking for specific file types that I specify (such as .mp3) and store the
URLs in a MySQL database. This needs to run on a Linux server. I will also
need a skip list that skips certain sites, which I will put into some kind
of array.

Can anybody give me some ideas on how to do this?

Thanks,
PP

