On Monday, February 3, 2003, at 10:32 AM, [EMAIL PROTECTED] wrote:
Something along the lines of:

#!/bin/sh

for dir in `ls -1 /webroot/`; do
  cat /var/log/httpd/access_log | grep "$dir" > /var/log/httpd/access_log_$dir
done
Tip - whenever 'cat' is the first command in a pipeline, it should raise a red flag. In this case, you've got what's called a UUoC: Useless Use of Cat. Just do:

for dir in `ls -1 /webroot/`; do
  grep "$dir" /var/log/httpd/access_log > /var/log/httpd/access_log_$dir
done


I am no shell hacker and the above is untested, but you get the idea. In general, Perl would not be a good choice for something so simple that already has a command-line solution available.
True, unless you might want to extend that program to do more sophisticated things later. Shell scripts usually scale *really* badly - there's a reason you don't often hear of "shell applications", "shell libraries", or "shell frameworks".


If you were going to do it in Perl, rather than searching the whole log file once for each vhost, you would be better off unpacking or splitting each log line, storing the line in an array associated with that line's vhost, and then printing each vhost's array to a file. Alternatively, you could open a filehandle for each vhost at the beginning of the script and print each line to whichever filehandle is associated with that vhost. Stepping through every line of the log file once per vhost would probably be a really bad way to handle things in Perl.
Well said. You could also just start stepping through the log file and create the separate log files on demand as you encounter vhosts, avoiding the need to either pre-open filehandles or store all the data in memory.
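
A minimal Perl sketch of that on-demand approach, just to make the idea concrete. It assumes the vhost name is the first whitespace-separated field of each log line (your log format may well differ) and reuses the paths from the shell examples above:

#!/usr/bin/perl
use strict;
use warnings;

my %fh;  # per-vhost filehandles, opened on demand

open my $log, '<', '/var/log/httpd/access_log'
    or die "Can't open access_log: $!";

while (my $line = <$log>) {
    # Assumes the vhost is the first field; adjust the split/unpack
    # to match your actual log format.
    my ($vhost) = split ' ', $line;
    next unless defined $vhost;

    # Open this vhost's log file the first time we see it.
    unless ($fh{$vhost}) {
        open $fh{$vhost}, '>', "/var/log/httpd/access_log_$vhost"
            or die "Can't open access_log_$vhost: $!";
    }
    print { $fh{$vhost} } $line;
}

close $log;
close $_ for values %fh;

Only one pass over the log, and nothing is held in memory beyond one filehandle per vhost.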

-Ken

