On Thu, 10 May 2007 16:04:41 -0500 (CDT)
Steven M. Schweda wrote:

> From: R Kimber
> 
> > Yes there's a web page.  I usually know what I want.
> 
>    There's a difference between knowing what you want and being able
> to describe what you want so that it makes sense to someone who does
> not know what you want.

Well, I was wondering whether wget had a way of letting me specify it.

> > But won't a recursive get get more than just those files? Indeed,
> > won't it get everything at that level? The accept/reject options
> > seem to assume you know what's there and can list them to exclude
> > them.  I only know what I want. [...]
> 
>    Are you trying to say that you have a list of URLs, and would like
> to use one wget command for all instead of one wget command per URL? 
> Around here:
> 
> ALP $ wget -h
> GNU Wget 1.10.2c, a non-interactive network retriever.
> Usage: alp$dka0:[utility]wget.exe;13 [OPTION]... [URL]...
> [...]
> 
> That "[URL]..." was supposed to suggest that you can supply more than
> one URL on the command line.  Subject to possible command-line length
> limitations, this should allow any number of URLs to be specified at
> once.
> 
>    There's also "-i" ("--input-file=FILE").  No bets, but it looks as
> if you can specify "-" for FILE, and it'll read the URLs from stdin,
> so you could pipe them in from anything.
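
(If the stdin trick does work, I suppose something like

  generate-urls | wget -i -

would handle the case where I already had some way of producing a
list - "generate-urls" being a made-up stand-in - but that isn't
quite my situation.)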

Thanks, but my point is that I don't know the full URLs, just the pattern.

What I'm trying to download is what I might express as:

http://www.stirling.gov.uk/*.pdf

but I guess that's not possible as such.  I just wondered whether
wget could filter out everything except *.pdf - i.e. look at a site,
or at a directory on a site, and accept only those files that match
a pattern.
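
Something like the following is roughly what I have in mind, if the
accept/reject options can in fact be applied during a recursive
retrieval (untried on my part, so treat it as a sketch):

  wget -r -np -A pdf http://www.stirling.gov.uk/

i.e. recurse over the site (or, with -np, just the starting directory
and below) and keep only the files whose names end in .pdf.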

- Richard
-- 
Richard Kimber
http://www.psr.keele.ac.uk/
