On 26.06.97, branden # purdue.edu wrote ...

Hi!

b> For every .html request that comes in (or perhaps for any request in
b> general), look for a file fitting the traditional spec.
b> If that fails, look for a .gz version of that file in the same directory.
b> If that fails, return the usual 404 error.
b> Does anything already implement this?  If not, why not?

This should be very easy to implement. But many users (notebook users,  
for example) don't want to run a WWW server on their slow machines.
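
The lookup order quoted above (plain file, then .gz variant, then 404) can be sketched like this; `resolve` is a hypothetical helper name, not code from any actual server:

```python
import os

def resolve(docroot, path):
    """Map a request path to a file on disk, falling back to a
    precompressed .gz variant. Returns (filename, content_encoding);
    (None, None) means the caller should send a 404."""
    full = os.path.join(docroot, path.lstrip("/"))
    if os.path.exists(full):
        return full, None            # serve the file as-is
    if os.path.exists(full + ".gz"):
        return full + ".gz", "gzip"  # serve with Content-Encoding: gzip
    return None, None                # nothing found -> 404
```

A server using this would add a `Content-Encoding: gzip` header in the second case so browsers that understand it can decompress transparently.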

In my opinion it would be much better to convert the links inside the  
documents to point to .html.gz, and let the Unix/Linux WWW browsers  
uncompress the documents themselves. Netscape and lynx, for example,  
handle this without any problem.

Another question: is there a way for a script to find out which WWW  
browser the user would like to use, and whether a WWW server is installed?
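
A rough sketch of how a script might guess both: the `BROWSER` environment variable is one common convention for the preferred browser, and a crude test for a running server is trying to connect to localhost on port 80. Both the variable name and the port are assumptions, not a standard interface:

```python
import os
import socket

def preferred_browser():
    # The BROWSER environment variable is one common convention;
    # fall back to a plain-text browser if it is unset (assumption).
    return os.environ.get("BROWSER", "lynx")

def local_server_running(port=80, timeout=1.0):
    # Crude check: can we open a TCP connection to localhost on the
    # given port?  This only detects a *running* server, and assumes
    # it listens on that port.
    try:
        with socket.create_connection(("127.0.0.1", port), timeout=timeout):
            return True
    except OSError:
        return False
```

This tells you whether a server is answering, not merely installed; checking for installed packages would be distribution-specific.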

cu, Marco

--
Uni: [EMAIL PROTECTED]      Fido: 2:240/5202.15
Mailbox: [EMAIL PROTECTED]    http://www.tu-harburg.de/~semb2204/
