From: "Dan Sugalski" <[EMAIL PROTECTED]>
> While I don't know if Larry will mandate it, I would like this code:
>
>    open PAGE, "http://www.perl.org";
>    while (<PAGE>) {
>          print $_;
>    }
>
> to dump the HTML for the main page of www.perl.org to stdout.
>

Now I would like to get some of the metadata for that page, such as the
expiration date, length, and content type. How?
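For comparison, in Perl 5 today LWP::Simple's head() already returns exactly this kind of metadata in list context; a filehandle-based interface would presumably need something analogous:

```perl
# Fetching header metadata with LWP::Simple (Perl 5) -- a sketch of
# what is available today, not a proposal for the new syntax.
use LWP::Simple;

# head() returns (content type, document length, modified time,
# expires time, server) in list context, or undef on failure.
my ($type, $length, $mod, $expires, $server)
    = head("http://www.perl.org");

print "Content-Type: $type\n"   if defined $type;
print "Length: $length\n"       if defined $length;
print "Expires: $expires\n"     if defined $expires;
```

An open()-based interface might expose the same information through something like a per-handle method or attribute, but that is exactly the open design question.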

Moreover, the HTTP server would return a "404" if the remote document is
not found (probably not for the example above, but still). I would like
the ability to trap that kind of remote exception.
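Again for comparison, with LWP::UserAgent in Perl 5 the status is carried on the response object, so a 404 can be checked explicitly; a filehandle interface would need some equivalent way to surface it (perhaps via $! or a die):

```perl
# Trapping an HTTP error with LWP::UserAgent (Perl 5).
# The /no-such-page path is a made-up URL used only to illustrate a 404.
use LWP::UserAgent;
use HTTP::Request;

my $ua  = LWP::UserAgent->new;
my $res = $ua->request(HTTP::Request->new(GET => "http://www.perl.org/no-such-page"));

if ($res->code == 404) {
    warn "Remote document not found: ", $res->status_line, "\n";
} elsif (!$res->is_success) {
    warn "Other HTTP error: ", $res->status_line, "\n";
} else {
    print $res->content;
}
```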

jc
