> > While I don't know if Larry will mandate it, I would like this code:
> >    open PAGE, "http://www.perl.org";
> >    while (<PAGE>) {
> >          print $_;
> >    }
> > to dump the HTML for the main page of www.perl.org to stdout.
> 
> Well, this seems innocent enough, but how far do you want to stretch it?
> 
> Should this work?
>  use lib "http://my.site.org/lib";
> If not, why not? (security issues aside)
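
(For the record, that first snippet's behaviour is available today via
the LWP bundle from CPAN, no new core magic required. A rough sketch:

    use LWP::Simple;                        # from the LWP bundle on CPAN
    my $html = get("http://www.perl.org");  # returns undef on failure
    print $html if defined $html;

The real question is whether open() itself ought to do this.)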

Hrmm... It'd have interesting repercussions for CPAN... :-)

How about doing something like:

use lib "CPAN::HTML::Module";

which goes and grabs the module in question from CPAN for
use, picking the closest mirror, of course.

Would be interesting, but is probably bloatware... 
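
For what it's worth, something close is possible today with an @INC
hook: push a code ref onto @INC and perl calls it for any module it
can't find on disk. A rough sketch (the URL and module name are made
up, and it leans on perl 5.8's in-memory filehandles):

    use LWP::Simple ();

    BEGIN {
        my $base = "http://my.site.org/lib";     # hypothetical module server
        push @INC, sub {
            my (undef, $file) = @_;              # e.g. "Foo/Bar.pm"
            my $src = LWP::Simple::get("$base/$file");
            return unless defined $src;          # let the next @INC entry try
            open my $fh, '<', \$src or return;   # in-memory filehandle
            return $fh;                          # perl compiles from this handle
        };
    }

    use Foo::Bar;   # hypothetical; fetched over HTTP if not found locally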

> what about 
>  if (-r "http://www.perl.com/") {...}
> or
>  if (-M "http://www.perl.com/" < -M "http://www.python.org/") {...}

Could be useful to determine if a site is available... Quite a
nifty thing for testing uptime of servers, etc.
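
Today the closest thing is a HEAD request through LWP::UserAgent; a
sketch of how -r and -M might map onto it:

    use LWP::UserAgent;

    my $ua = LWP::UserAgent->new;

    # -r: did the HEAD request succeed?
    print "perl.com is up\n"
        if $ua->head("http://www.perl.com/")->is_success;

    # -M: compare Last-Modified headers (epoch seconds, so newer is
    # larger, the opposite sense of -M's age-in-days)
    my $perl   = $ua->head("http://www.perl.com/")->last_modified;
    my $python = $ua->head("http://www.python.org/")->last_modified;
    print "perl.com changed more recently\n"
        if $perl && $python && $perl > $python;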

> should 
>  opendir (FTP, "ftp://sunsite.uio.no/");
> work as expected?

See below. ;)
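
(In the meantime Net::FTP can fake the listing half of it; a sketch:

    use Net::FTP;

    my $ftp = Net::FTP->new("sunsite.uio.no") or die "connect failed: $@";
    $ftp->login("anonymous", 'me@example.com') or die "login failed";
    print "$_\n" for $ftp->ls("/");   # roughly what readdir() would return
    $ftp->quit;

The email address is just a placeholder anonymous-FTP password.)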

> Should URLs behave as much like regular files/directories as possible,
> or should they be limited to a small set of operations (like open()).

Hrmmm... You definitely raise an interesting point... Potentially,
you could operate on a website/FTP link as if it were a mounted
filesystem. It would be much easier with FTP than with HTTP, I would
imagine, since HTTP only hands you header info rather than real
directory listings...
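
FTP at least has per-file primitives the file tests could sit on top
of; a sketch with Net::FTP (the path is made up):

    use Net::FTP;

    my $ftp = Net::FTP->new("sunsite.uio.no") or die "connect failed: $@";
    $ftp->login("anonymous", 'me@example.com') or die "login failed";
    my $size  = $ftp->size("/README");   # what -s could return
    my $mtime = $ftp->mdtm("/README");   # epoch time; raw material for -M
    $ftp->quit;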

The biggest problem is that it'd have to work through proxies as well
as over a direct connection... Though perhaps:

use Proxy 'cache.company.com:8080'; 

could be a useful directive... It seems the most Perlish way of
doing things I can think of... Definitely don't want environment
variables or other non-Perlish methods getting in the way...
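
With LWP that maps onto something like (hostname made up, as above):

    use LWP::UserAgent;

    my $ua = LWP::UserAgent->new;
    # route http and ftp traffic through the company cache
    $ua->proxy(['http', 'ftp'] => 'http://cache.company.com:8080/');
    my $res = $ua->get("http://www.perl.org/");

(LWP's $ua->env_proxy is exactly the environment-variable route I'd
rather avoid.)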

Greg
