hadi motamedi wrote:
> On my Debian server, I need to separate the individual logs coming
> from various modules concurrently. Please find below a sample of the
> logs:
> ' IPTR >..

That beginning space I think is going to cause problems when using cut,
since cut is very literal about its definition of fields.

> SNMP >..

No leading space here.  I think that means you want to use awk instead.
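As a quick illustration (a made-up line shaped like your sample, not
your real data), cut sees the leading space as an empty first field,
while awk's default splitting ignores leading whitespace:

  $ printf ' IPTR >..\n' | cut -d' ' -f1

  $ printf ' IPTR >..\n' | awk '{ print $1 }'
  IPTR

cut prints an empty first field for that line (the blank line above),
while awk skips the leading whitespace and gives you IPTR.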
> #cat Edit3 | tr -d "\r" | while read LINE; do

Please see http://www.google.com/search?q=useless+use+of+cat for why
you shouldn't be using cat that way.

You always want to use read -r to avoid backslash interpretation.
Otherwise backslashes in the input will be lost.

To avoid BASH FAQ E4 becoming an issue I wouldn't pipe into while.
Instead I would redirect into it.  Since you are already using bash I
would use a bash-ism for it.

  while read -r LINE; do
    ...
  done < <(tr -d "\r" < test.txt)
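For what it's worth, here is a small illustration of the FAQ E4 problem
(made-up input, not your log data).  When the loop sits on the
right-hand side of a pipe, bash runs it in a subshell, so anything the
loop sets is gone afterwards, whereas the redirection form keeps it in
the current shell:

  $ count=0
  $ printf 'a\nb\n' | while read -r line; do count=$((count+1)); done
  $ echo "$count"
  0
  $ count=0
  $ while read -r line; do count=$((count+1)); done < <(printf 'a\nb\n')
  $ echo "$count"
  2

Your script doesn't read any of those variables after the loop today,
but the redirection form avoids the surprise entirely.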
> >echo "$LINE" | grep -q '>\.\.'
> >if [ $? -eq 0 ]; then

Instead of using $? there you can just use the exit code of grep
directly.  Much cleaner and clearer.

  if echo "$LINE" | grep -q '>\.\.'; then

> >LOGFILE=`echo $LINE | cut -d' ' -f1`.log

Instead of using backticks, since you are already using bash you might
as well use the newer and easier $(...) instead.  The quoting rules are
much simpler.  Plus using cut that way doesn't always yield the results
people expect.  Better to use awk splitting instead.  Plus you should
quote the $LINE for the echo just in case it contains unfortunate
characters.

  LOGFILE=$(echo "$LINE" | awk '{ print $1 }').log

> >else
> >echo "$LINE" >> $LOGFILE

You have LOGFILE set to a data dependent value.  It may be one word or
it may be two words.  You should quote it.  It is the only way to be
sure.

  echo "$LINE" >> "$LOGFILE"

> >fi
> >done
>
> But when I run it, I am receiving the following error:
> -bash: -f1.log : command not found
> -bash: $LOGFILE : ambiguous redirect
> Can you please help me to correct my code?

Regarding:

> >echo "$LINE" >> $LOGFILE

That error coupled with that redirect line means that $LOGFILE must be
two or more words.  The shell is redirecting to something that has
multiple words and is therefore generating an ambiguous redirect error.
You can test this yourself with this example:

  $ file="one two"
  $ echo hello >$file
  bash: $file: ambiguous redirect

To debug your own code, display the result of LOGFILE.  Something like
this:

  while read -r LINE; do
    if echo "$LINE" | grep -q '>\.\.'; then
      LOGFILE=$(echo "$LINE" | awk '{ print $1 }').log
      echo "DEBUG: LOGFILE='$LOGFILE'"
    else
      echo "DEBUG: $LINE >> '$LOGFILE'"
    fi
  done < <(tr -d "\r" < test.txt)

I am sure that at that point you will find that LOGFILE is not always
the value that you expect.  It is being set to multiple words.  The
test case you provided does not exhibit this behavior, but I am sure
your actual data does.

Now to solve some other nagging problems.  Looking for the pattern
">.." doesn't guarantee that there is a field before it, as in
"FOO >..", so I think your grep should look for the combination of
those things.

  if echo "$LINE" | grep -q '^[^[:space:]][^[:space:]]* >\.\.'; then
    LOGFILE=$(echo "$LINE" | awk '{ print $1 }').log

Doing it that way will guarantee that there is a pattern to be used for
the logfile name on the next line.  The data would then be more
strictly behaved and the 'cut' would be more reliable there, but I
would still use awk in that case.

But going further with that, I think it is better to avoid spreading
the logic across two lines.  Just try to read the line as a
logfile-changing line and, if so, use it.  Using awk for this is a
good fit.

  echo "$LINE" | awk '$2 == ">.." { print $1 }'

If $2 (the second field) is exactly ">.." then the pattern matches and
the associated action of printing $1 (the first field) is executed.
If the first field is printed then we have a new logfile name.  If not
then nothing is printed.  This does the matching and the extracting
all in one step.

  NEWLOGNAME=$(echo "$LINE" | awk '$2 == ">.." { print $1 }')

Plus, I don't like using upper case variable names.  They might
collide with the shell's internal variables.  So better to use lower
case variables for your own script.

Here is a complete example with all of my suggestions.  After
debugging the problem of LOGFILE having multiple words in your
original, you might want to try something like this where it is all
put together.

  #!/bin/bash
  logfile="start.log"
  while read -r line; do
    newlogname=$(echo "$line" | awk '$2 == ">.." { print $1 }')
    if [ -n "$newlogname" ]; then
      logfile=$newlogname.log
    else
      echo "$line" >> "$logfile"
    fi
  done < <(tr -d "\r" < test.txt)

HTH,
Bob
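P.S. If you want a quick sanity check of the awk selection on its own
before wiring it into the loop (again with made-up lines shaped like
your sample, not your real data):

  $ printf ' IPTR >..\n' | awk '$2 == ">.." { print $1 }'
  IPTR
  $ printf 'some ordinary log data\n' | awk '$2 == ">.." { print $1 }'
  $

The leading space doesn't matter because awk strips leading whitespace
before splitting, and an ordinary data line prints nothing, so the -n
test in the script stays false for it.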