As described here:
http://www.gnu.org/software/wget/wget.html
the desired behaviour is to mirror a site,
starting from a given URL.

Rebol does not have anything like wget built in.
However, you can write a recursive function of your
own to do the job. I have wanted to do something like
this for a while, but only got as far as
extracting the links from a single URL. See:
http://www.lexicon.net/anton/rebol/web/extract-html-links.r
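A minimal link extractor along those lines might look like this. This is only a sketch, not the script at the URL above; the name extract-links is mine:

```rebol
; Collect the href targets found in a page (hypothetical sketch).
extract-links: func [url [url!] /local page links link][
    page: read url
    links: copy []
    parse/all page [
        any [thru {href="} copy link to {"} (append links link)]
    ]
    links
]

; Usage: probe extract-links http://www.rebol.com
```

Note this naive parse rule only handles double-quoted href attributes; a real extractor would cope with single quotes and unquoted values too.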

After extracting the links, you need to filter out the
external links and keep the internal ones.
I've got some functions lying around somewhere for that...
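A simple version of such a filter could be the following sketch; internal? and the base argument are names I've made up for illustration:

```rebol
; A link is "internal" if it is relative, or absolute but
; starting with the same base URL (hypothetical sketch).
internal?: func [base [url!] link [string!]][
    any [
        not find link "://"                    ; relative -> internal
        found? find/match link to-string base  ; same site prefix
    ]
]

; Usage, pruning a block of links in place (remove-each
; is available in recent Rebol releases):
; remove-each link links [not internal? http://www.example.com/ link]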

Then save and recurse on the internal links.
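Putting those steps together, the recursion could be sketched like this, assuming hypothetical helpers extract-links and internal? that do the extraction and filtering just described:

```rebol
; Recursive mirror sketch. extract-links and internal? are
; assumed helpers, not built-ins; error handling is omitted.
visited: copy []

mirror: func [url [url!] /local page file][
    if find visited url [exit]          ; don't fetch a page twice
    append visited url
    page: read url
    if not exists? %mirror/ [make-dir %mirror/]
    file: second split-path url         ; crude local file name
    write join %mirror/ file page
    foreach link extract-links url [
        if internal? url link [mirror to-url link]
    ]
]
```

The visited block is what keeps the recursion from looping forever on pages that link back to each other.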

Anton.

> am still a bloody newbie and
> want to do the following (on a Redhat box, downloaded latest core today):
> 
> I have a file containing a list of ftp URLs in the format:
> 
> ftp://<userid>:<password>@hostname
> 
> I want to iterate through this list and call wget for each entry 
> like this:
> 
> wget <some parameters> -P <userid>
> 
> -P specifies the directory to save the retrieved tree under.
> 
> Would a true Reboler even use wget or do everything in Rebol and 
> if so, how?
> 
> TIA,
> Kai
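For Kai's immediate problem, the wget invocations themselves can be driven from Rebol with call (available in recent Core releases). A sketch, where %urls.txt and the -m option stand in for the real file name and "<some parameters>":

```rebol
; Read the list of ftp URLs and shell out to wget for each,
; extracting the userid to use as the target directory.
; %urls.txt and the wget options are placeholders.
foreach entry read/lines %urls.txt [
    if parse/all entry ["ftp://" copy userid to ":" to end][
        call/wait rejoin ["wget -m -P " userid " " entry]
    ]
]
```

call/wait blocks until each wget finishes, so the downloads run one after another.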

-- 
To unsubscribe from this list, just send an email to
[EMAIL PROTECTED] with unsubscribe as the subject.