You may be getting caught by robots.txt, or the server may be refusing
wget's default user-agent string. Try setting the user-agent header,
e.g. -U agent-string (short for --user-agent=agent-string).
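
Something along these lines might work (an untested sketch: the UA
string is just an example, and I'm reusing the URL from your earlier
message; -e robots=off additionally tells wget to ignore robots.txt
outright):

  # Identify as a regular browser and skip the robots.txt check;
  # quote the accept pattern so the shell doesn't expand it.
  wget -r -e robots=off \
       -U 'Mozilla/5.0 (X11; Linux x86_64)' \
       --accept '*.*' http://ph-public-data.com/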

On Fri, Nov 17, 2023 at 8:31 AM Rich Shepard <rshep...@appl-ecosys.com>
wrote:

> On Fri, 17 Nov 2023, Rich Shepard wrote:
>
> > I need to download ~15G of data from a web site. Using a PLUG mail list
> > thread from 2008 I tried this syntax:
> > wget -r --accept '*.*' http://ph-public-data.com/
>
> To clarify, I don't think I want to use the wget -m (mirror) command,
> because I don't want to duplicate the whole web site, only the data
> files in each page identified by the second column (study).
>
> Rich
>
