On Sun 04 Mar 2012 08:13:32 NZDT +1300, Phill Coxon wrote:

> I want to download a copy of a small site so I have the layout /
> design to use as a source of ideas for a similar project I'm going
> to create in the near future.

> I'm trying to use  wget to mirror the site but all of the site
> images are cached on Amazon S3 and wget will not download them as
> local copies.

> Any suggestions on how to use wget or curl to download the images
> stored on Amazon S3 with links updated for local storage / viewing?

Completely forget about curl for anything except downloading a single
file (and even there I find it's popular, but not very good).

Try wget -D, giving it the host names to consider for download.
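
Something along these lines should work (example.com and s3.amazonaws.com
are only placeholders; substitute the real site and whichever S3 host name
the images are actually served from). Note that -D only takes effect
together with -H, which lets the recursive download span hosts:

  wget --mirror --convert-links --page-requisites --no-parent \
       -H -D example.com,s3.amazonaws.com \
       http://example.com/

--page-requisites makes wget fetch the images and stylesheets each page
needs, and --convert-links rewrites the links in the downloaded pages for
local viewing.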

The perennial problem with wget is that it doesn't convert a link to a
local one unless it actually downloaded the target, so you don't get a
fully local copy with only local links (some of them broken), but a
partially local copy with some external links still pointing at unstable
targets.

There was another downloader I tried years back which handled all of this
perfectly, but it had obvious bugs and was no longer maintained. It had a
GNOME GUI; I think the name was pavuk, but I can't find it now.

Googling turns up httrack.
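
I haven't tried it myself, but something like the following should mirror
the site and pull in the S3-hosted images as well (again, the host names
are placeholders):

  httrack "http://example.com/" -O ./mirror \
          "+*.example.com/*" "+*.s3.amazonaws.com/*"

The + patterns are httrack's URL filters; they tell it which hosts it is
allowed to download from.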

Volker

-- 
Volker Kuhlmann
http://volker.dnsalias.net/     Please do not CC list postings to me.