Hello all:

I'm having a problem with wget.  I need to have the program (while
running recursively) output to stdout so that I can pipe the output to a
separate filter process.  Unfortunately, wget will only download the
first file from any site I point it at when stdout is specified as the
file to write to.

I am invoking it like this:
wget -r -O - http://www.google.com/ (for instance)

After downloading the first file served by google (or wherever), it
always dies with:
www.google.com/index.html: No such file or directory

Does that mean it's trying to write to the non-existent www.google.com
directory on my drive, or does it mean that there's no index.html file
on any server I want to suck from?

Any help that you all could offer would be greatly appreciated!  Thanks
in advance from

Greg Robinson

BTW, I'm running linux 2.2.13, glibc 2.1.3, i686.
