Excerpts from Peter Rundle's message of Tue Jun 03 14:20:08 +1000 2008:
> I'm looking for some recommendations for a *simple* Linux-based tool to
> spider a web site and pull the content back into plain html files, images,
> js, css etc.
> 
> I have a site written in PHP which needs to be hosted temporarily on a
> server which is incapable (read: only does static content). This is not a
> problem from a temp presentation point of view, as the default values for
> each page will suffice. So I'm just looking for a tool which will quickly
> pull the real site (on my home PHP-capable server) into a directory that I
> can zip and send to the internet-addressable server.
> 
> I know there's a lot of code out there; I'm asking for recommendations.

wget can do that. Use the recursive option (-r).
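
Something along these lines usually does the job; the URL is just a
placeholder for your home server, and the options beyond --recursive are
optional extras that make the static copy self-contained:

  # --recursive follows links, --page-requisites grabs images/css/js,
  # --convert-links rewrites links so the copy works offline, and
  # --html-extension saves the rendered PHP pages with .html names.
  wget --recursive --level=inf --page-requisites --convert-links \
       --html-extension --no-parent http://your-home-server.example/

Then zip up the resulting directory and unpack it on the static host.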

rgh

> TIA's
> 
> Pete
> 

-- 
+61 (0) 410 646 369
[EMAIL PROTECTED]

You're worried criminals will continue to penetrate into cyberspace, and
I'm worried complexity, poor design and mismanagement will be there to meet
them - Marcus Ranum
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
