[EMAIL PROTECTED] (Kevin D. Clark) writes:

> Zhao Peng writes:
>
>> I'm back, with another "extract string" question. //grin
>
> find FOLDERNAME -name \*sas7bdat -print | sed 's/.*\///' | \
>   cut -d _ -f 2 | sort -u > somefile.txt
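For anyone following along, here is a minimal sketch of what that quoted
pipeline does.  The directory name and the file names below are made-up
examples, not from the original thread:

```shell
# Hypothetical layout -- FOLDERNAME and these file names are assumptions.
mkdir -p FOLDERNAME
touch FOLDERNAME/proj_alpha_2007.sas7bdat \
      FOLDERNAME/proj_beta_2007.sas7bdat \
      FOLDERNAME/proj_alpha_2008.sas7bdat

# Strip the leading directory with sed, keep the second
# underscore-delimited field with cut, then de-duplicate:
find FOLDERNAME -name \*sas7bdat -print | sed 's/.*\///' | cut -d _ -f 2 | sort -u
# alpha
# beta
```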
Or, to simplify this:

  find ./ -name \*sas7bdat | awk -F_ '{print $2}' | sort -u

  ls *sas7bdat | perl -F_ -ane 'print "$F[1]\n";' | sort -u

  perl -e 'opendir(DIR,"."); map { if (/sas7bdat$/) { $k = (split(/_/,$_))[1]; $f{$k} = 1; } } readdir(DIR); map { print "$_\n"; } sort keys %f;'

That last one might be a little better formatted like:

  perl -e 'opendir(DIR, ".");
           map {
             if (/sas7bdat$/) {
               $k = (split(/_/, $_))[1];
               $f{$k} = 1;
             }
           } readdir(DIR);
           map { print "$_\n"; } sort keys %f;'

It should be rather obvious that your best bet for quick one-liners of
this type is probably to stick with standard UNIX tools like sort, cut,
sed, awk, etc.  Perl is great for text manipulation, but as you can see,
none of the perl one-liners is nearly as concise as the shell variants.
If speed or process overhead matters, then maybe perl is the better
choice.  Of course, for a data set as small as the one you've given, the
perl versions are just harder and longer to type.

hth.

--
Seeya,
Paul

_______________________________________________
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss