Worked... Thanks.
> grep -v "^#" sites.html | wget --spider -S -o log.txt -i -
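The -S flag in the command above makes wget write the server's response headers into the log, and the file date then appears there as a Last-Modified header. A minimal sketch of pulling it back out afterwards (the log contents below are a made-up sample for illustration, not real output for any particular site):

```shell
# Fabricated sample of what wget --spider -S writes to its log.
cat > log.txt <<'EOF'
Spider mode enabled. Check if remote file exists.
--2005-09-18 14:25:01--  http://example.com/report.pdf
  HTTP/1.1 200 OK
  Content-Length: 40960
  Last-Modified: Mon, 12 Sep 2005 17:31:00 GMT
Remote file exists.
EOF

# Pull the file date back out of the saved headers.
grep 'Last-Modified:' log.txt
```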
-Original Message-
From: Hrvoje Niksic [mailto:[EMAIL PROTECTED]
Sent: Sun 9/18/2005 2:25 PM
To: Arthur DiSegna
Cc: wget@sunsite.dk
Subject: Re: Retrieving File dates without downloading
Hello,
Is it possible to download the file date without downloading the file?
Right now I am using:
grep -v "^#" sites.html | wget --spider -o log.txt -i -
Basically, I am asking WGET to look through sites.html to see whether certain files
exist. This gives me an OK and a file size. I would like to go one step further and
retrieve the file date as well.
Thank you...
This worked for me: grep -v "^#" test.html | wget --spider -o log.txt -i -
-Original Message-
From: Hrvoje Niksic [mailto:[EMAIL PROTECTED]
Sent: Monday, September 12, 2005 5:31 PM
To: Arthur DiSegna
Cc: wget@sunsite.dk
Subject: Re: Comment HTML File?
I am using -i to read URLs from an HTML file. How can
I add comments to this file without the log showing "test.html: Invalid
URL #"?
Thanks
Thanks
wget --spider -o log.txt -i test.html worked for me...
-Original Message-
From: Frank McCown [mailto:[EMAIL PROTECTED]
Sent: Friday, August 19, 2005 8:31 AM
To: wget@sunsite.dk
Cc: Arthur DiSegna
Subject: Re: Test a websites availability
You can use the --spider option to not download any files.
Hello,
I have been looking around and haven't found an answer yet...
Is it possible to use WGET only to test whether a website is up? I don't want to
download any files, just test the site for availability. Any ideas?
Pseudo code: wget www.google.com. Is connected? YES/NO. If YES, write answer.
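That pseudocode maps almost directly onto wget's exit status: a --spider request exits 0 when the site answers. A minimal sketch (the URL is just an example, and the timeout/retry values are arbitrary):

```shell
#!/bin/sh
# Availability check: wget --spider exits 0 when the request succeeds.
check_site() {
    if wget -q --spider --tries=1 --timeout=5 "$1"; then
        echo "YES"
    else
        echo "NO"
    fi
}

check_site "http://www.google.com/"
```

The YES/NO could just as easily be a log line or a non-zero exit code for use in a cron job.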