>I am wondering about a way to grep, or to view with an editor, the /usr/doc/*/* files.
>Of course, these files are gzipped, according to Debian policy.
>Is there any way to choose to install these docs ungzipped by default?
>I can ungzip these, but I also want to leave them under the control of the
>package manager.
I am uploading here a small, hackish Perl script that, along with some Apache configuration changes, will allow you to view the compressed files in http://your-machine/doc as if they were not compressed. This issue was a real annoyance to me, which is why I wrote this. Note that you HAVE to use the webserver for my hack to work - you cannot cd to the directory and run lynx on the individual file.

There are two issues that this resolves:

1) When you attempt to fetch a compressed HTML or text file, this will uncompress the file, drunkenly guess the content-type, and send it to your browser.

2) When you click on a link, the browser requests a .html file which doesn't exist. So you configure Apache to send such 404 errors to this script, which attempts to correct for them.

I am able to browse several compressed files, and click on links to move between them, on my local machine, with the browser unaware that the files are compressed. This is all a small hack; don't take it too seriously or expect it to work all the time. Use at your own risk.

Steps:

1) As root, edit /etc/apache/httpd.conf and uncomment the following line:

   LoadModule action_module /usr/lib/apache/1.3/mod_actions.so

2) As root, edit /etc/apache/access.conf and make your <Directory /usr/doc> block look like this:

   <Directory /usr/doc>
       Options Indexes FollowSymLinks
       AllowOverride None
       order allow,deny
       allow from all
       Action doc /cgi-bin/doc
       AddHandler doc .gz
       AddHandler doc .Z
       ErrorDocument 404 /cgi-bin/doc
   </Directory>

3) Now, put the following script into /usr/lib/cgi-bin/doc and chmod it to 755.

NOTE: While I hope that this script is reasonably secure, I can't make any promises. Use it at your own risk, or better yet have someone knowledgeable read over it and give you their opinion.

--begin /usr/lib/cgi-bin/doc
#!/usr/bin/perl
# A small hack by Carl Mummert <[EMAIL PROTECTED]> to
# auto-gunzip compressed html files in /usr/doc
#
# I release this to the public domain;
# do whatever you want with it.
# damn buffering
select STDOUT; $| = 1;

# The filename comes in different vars depending
# on whether this is a 404 or not.
if ( ! defined $ENV{'PATH_TRANSLATED'} ) {
    if ( defined $ENV{'REDIRECT_URL'} ) {
        $path = $ENV{'REDIRECT_URL'};
    } else {
        print "content-type: text/plain\n\n";
        print "internal error, sorry.\n";
        exit 0;
    }
} else {
    $path = $ENV{'PATH_TRANSLATED'};
}

# silly attempt to remove '..' from the path
$path =~ s/\.\.//g;

# kill fragment names from filenames
($path, $rest) = split /#/, $path, 2;

# ensure that we aren't trying to go somewhere else...
if ( $path !~ m!^/usr/doc! )    # uncompress requests look like this
{
    if ( $path =~ m!^/doc! )    # 404s look like this
    {
        $path = "/usr$path";
    } else {
        print "content-type: text/plain\n\n";
        print "Error: invalid location $path\n";
        exit 0;
    }
}

# Is this a compressed file being fetched as if it were uncompressed?
if ( (! -r $path) && ( -r "$path.gz") ) {
    $path = "$path.gz";
}

if ( ! -r $path ) {
    print "content-type: text/plain\n\n";
    print "Error: cannot read $path\n";
    exit 0;
}

# drunkenly guess the content-type
if ( $path =~ /html?\.((gz)|z)$/i ) {
    print "content-type: text/html\n\n";
} else {
    print "content-type: text/plain\n\n";
}

print "<!-- Uncompressed from $path -->\n";

# hopefully gzip is secure...
exec "/bin/gzip", "-dc", $path;
--end /usr/lib/cgi-bin/doc
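To make the configuration in step 2 more concrete, here is a small Python sketch of the dispatch decision Apache makes for each request under /doc. This is my own illustration of the rules' combined effect, not Apache code; the names are made up.

```python
# Illustration of the Action/AddHandler/ErrorDocument rules from step 2.
# The constants come straight from the configuration above.
HANDLER_EXTENSIONS = (".gz", ".Z")   # AddHandler doc .gz / AddHandler doc .Z
ACTION_SCRIPT = "/cgi-bin/doc"       # Action doc /cgi-bin/doc

def dispatch(path, exists=True):
    """Return the CGI script Apache runs for a request, or None if the
    file is simply served directly."""
    if not exists:
        return ACTION_SCRIPT         # ErrorDocument 404 hands misses to the script
    if path.endswith(HANDLER_EXTENSIONS):
        return ACTION_SCRIPT         # compressed files go through the handler
    return None                      # everything else is served as-is
```

So a request for README.gz is routed to the script via the handler, while a click on a link to a nonexistent index.html reaches the same script via the 404 route - which is why the script has to cope with both cases.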
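For readers who want the gist without wading through the Perl, the script's two core decisions - resolving the requested path (including the 404 case) and guessing a content type - can be sketched in Python. The function names here are my own, purely for illustration; the authoritative version is the Perl above.

```python
import re

def resolve_path(env):
    # Mirror the Perl: prefer PATH_TRANSLATED (normal handler requests),
    # fall back to REDIRECT_URL (requests redirected by ErrorDocument 404).
    path = env.get("PATH_TRANSLATED") or env.get("REDIRECT_URL")
    if path is None:
        return None                      # internal error case
    path = path.replace("..", "")        # crude traversal guard
    path = path.split("#", 1)[0]         # drop any #fragment
    if not path.startswith("/usr/doc"):
        if path.startswith("/doc"):      # 404s arrive as URL paths
            path = "/usr" + path
        else:
            return None                  # invalid location
    return path

def guess_content_type(path):
    # "Drunkenly guess": .htm/.html compressed files are HTML, all else plain.
    if re.search(r"html?\.(gz|z)$", path, re.IGNORECASE):
        return "text/html"
    return "text/plain"
```

For example, a 404 for /doc/pkg/index.html resolves to /usr/doc/pkg/index.html, which the script then retries with a .gz suffix before uncompressing.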