I'm looking for recommendations for a *simple* Linux-based tool to spider a web site and pull the content back as
plain HTML files, images, JS, CSS, etc.
I have a site written in PHP which needs to be hosted temporarily on a server that can only serve static content.
That's not a problem from a temporary presentation point of view, as the default output of each page will suffice.
So I'm just looking for a tool that will quickly pull the real site (running on my home PHP-capable server) into a
directory that I can zip up and send to the internet-addressable server.
I know there's a lot of code out there; I'm asking for recommendations.
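For what it's worth, I'm imagining something along the lines of wget's mirror mode (flags from memory and untested, so
treat it as a rough sketch; the host name is just a placeholder for my home box):

    # Mirror the site, rewrite links for local browsing, grab page
    # requisites (images/CSS/JS), and rename PHP pages to .html so the
    # static-only server will serve them. On older wget versions
    # --adjust-extension may be called --html-extension.
    wget --mirror --convert-links --page-requisites --adjust-extension \
         --no-parent -P ./site-snapshot http://my-home-server.example/

If there's a cleaner or more purpose-built tool than that, I'd be glad to hear about it.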
TIA,
Pete