Hi,

I have a file that contains the redirected output of a big "find" 
command.  I want to learn how to quickly scan this file for unique 
file names, and while I could write a lengthy Perl one-liner, I was 
wondering if it could be done more simply with awk.  But since I 
have never used awk, I was hoping someone could show me The Way.

What I want to do is pretty simple -- awk the file so that the 
only output is the text after the last slash on each line (which 
is the file name).  One way I wondered if it could be done:

$ awk '/\/([^/]+)$/{print $1}' findoutput.txt

Of course, awk doesn't treat $1 as the captured text from a regex, 
as Perl does, so this doesn't really work -- it prints out the 
whole first field of any records that match.  The net effect of 
this is the same as if I had simply done `cat findoutput.txt`.

Can text be captured from an awk regex?
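From skimming the man page, it looks like match() plus substr() might be a portable way to fake a capture -- match() sets RSTART to where the regex begins, so substr() can peel off the tail.  Just a sketch (the sample paths are made up, and a line ending in a slash would print nothing):

```shell
printf '%s\n' /usr/bin/find /etc/hosts lonefile | awk '{
    if (match($0, /\/[^/]*$/))        # RSTART = position of the last slash
        print substr($0, RSTART + 1)  # everything after that slash
    else
        print $0                      # no slash: the whole line is the name
}'
```

But that feels clunky for something this simple.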

If not, the alternative I was thinking of was the awk equivalent 
of

1. set the field separator to a slash
2. awk the file for the last field.

I've figured out how to set the field separator (from the man 
page), but it seems I need a numeric variable to name the field I 
want to print.  I don't know of a way to get the last field for 
any given record/line, since on one line it could be $5 and on 
another it might be $7, for example.
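I did notice the NF variable (number of fields) in the man page; if $NF really does mean "the last field", maybe something like this would do it (sample paths invented for illustration):

```shell
# Split each line on "/" and print the last field, i.e. the file name.
printf '%s\n' /usr/bin/find /etc/hosts lonefile | awk -F/ '{ print $NF }'
```

That would sidestep the capture question entirely, if it works on a stock Solaris awk.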


I'm using awk, not gawk, on a Solaris box.  Thanks for any insight.


Erik
_______________________________________________
gnhlug-discuss mailing list
[EMAIL PROTECTED]
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss