>>>>> "Www" == Www <[EMAIL PROTECTED]> writes:
Www> Actually,
Www> I think my code should be used because it gives much more information
Www> about the current status of the web server. A web server isn't just UP
Www> or DOWN. There are many things that can be going on that would cause
Www> an error. The server could be "Too Busy", you could get a bad VirtHost
Www> or a "Permission Denied" directory listing, and if you're calling a
Www> specific target there could be a 404 error.
OK, then, at least use the published interface. Don't "peer inside
the hash". That's never been a supported interface.
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request::Common qw(GET);
my $url = "http://www.stonehenge.com/";
my $response = LWP::UserAgent->new(env_proxy => 1)->simple_request(GET $url);
if ($response->is_success) {
    print "$url successfully fetched\n";
} else {
    print "$url failed, status is ", $response->status_line, "\n";
}
Www> Also, my code only did a HEAD request. The user indicated that he did
Www> not want to perform a GET. With a GET request you have to receive the
Www> HTML data associated with GET "/", which could be 2 KB or as much as
Www> 8 KB depending on the site, or even more. If you were writing a script
Www> that was to check thousands of sites, would you want to be transferring
Www> all that data if you didn't have to?
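For anyone following along, a minimal sketch of a HEAD-based check like
the one described above, assuming the standard LWP interface (the exact
code under discussion isn't quoted here):
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request::Common qw(HEAD);
my $url = "http://www.stonehenge.com/";
# HEAD asks for the headers only, so no response body is transferred.
my $response = LWP::UserAgent->new(env_proxy => 1)->simple_request(HEAD $url);
print "$url: ", $response->status_line, "\n";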
And there are some servers that will return a "not supported" with
HEAD even though GET gets the data. Go figure. I do a GET. If
limiting the output is your desire, you can construct a callback for
the useragent that aborts after the first payload.
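A sketch of that callback trick: pass a content callback to request()
and die inside it to abort the transfer after the first chunk. The
status line arrives with the headers, before any body, so the response
object is still usable. (The chunk size and die message here are just
illustrative.)
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request::Common qw(GET);
my $url = "http://www.stonehenge.com/";
my $ua  = LWP::UserAgent->new(env_proxy => 1);
# The three-argument form of request() hands each content chunk to the
# callback. Dying in the callback makes LWP abort the rest of the
# transfer, so at most one chunk (up to 4096 bytes here) is downloaded.
my $response = $ua->request(GET($url), sub { die "got enough\n" }, 4096);
if ($response->is_success) {
    print "$url is up: ", $response->status_line, "\n";
} else {
    print "$url failed: ", $response->status_line, "\n";
}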
Hey, I've written about a dozen link checkers, gotten published,
gotten feedback. I'm not doing this idly. Please stop arguing with
me. It merely wastes your time and mine. :)
--
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
<[EMAIL PROTECTED]> <URL:http://www.stonehenge.com/merlyn/>
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!