Re: awk assistance
In a message dated: Thu, 14 Nov 2002 16:40:47 EST [EMAIL PROTECTED] said:

>On Thu, 14 Nov 2002, at 1:04pm, [EMAIL PROTECTED] wrote:
>> If this type of thing were being done from a web app running from a CGI,
>> it might turn out to be faster with perl if they use mod_perl than
>> spawning a new awk instance every time.
>
> Gawd.  If you're invoking shell tools for a web page that gets called
>often enough for that to matter, you deserve to be shot. ;-)

Which is why I stated that perl might be the better solution here. Though
you'd be surprised at the number of people who do use shell scripts as
CGIs. Well, maybe you wouldn't :)

-- 
Seeya,
Paul

It may look like I'm just sitting here doing nothing, but I'm really
actively waiting for all my problems to go away.

If you're not having fun, you're not doing it right!

___
gnhlug-discuss mailing list
[EMAIL PROTECTED]
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss
Re: awk assistance
On Thu, 14 Nov 2002, at 1:04pm, [EMAIL PROTECTED] wrote:
> If this type of thing were being done from a web app running from a CGI,
> it might turn out to be faster with perl if they use mod_perl than
> spawning a new awk instance every time.

  Gawd.  If you're invoking shell tools for a web page that gets called
often enough for that to matter, you deserve to be shot.  ;-)

-- 
Ben Scott <[EMAIL PROTECTED]>
| The opinions expressed in this message are those of the author and do not |
| necessarily represent the views or policy of any other person, entity or  |
| organization.  All information is provided without warranty of any kind.  |
Re: awk assistance
In a message dated: Thu, 14 Nov 2002 08:32:11 EST Dan Coutu said:

>Michael O'Donnell wrote:
>
>> find / -type f | while read f; do basename $f; done
>
>This does not meet the requirement of providing UNIQUE
>instances of filenames though. Easily fixed by piping it
>through sort and then uniq a la:

Why the need for both sort and uniq? If all you care about is
uniqueness, then use uniq with its various switches. If you need it
sorted and unique, then use sort -u. That cuts down on a process.

>If using awk is the 'old' way then this approach
>must be the 'ancient' way since it actually uses the original
>UNIX design thinking of pipes and filters that often gets lost
>nowadays.

I don't agree that this is an 'ancient' way. The OP asked about an awk
solution, which, compared to perl, I consider 'old', since perl has, in
almost all instances, replaced the complete functionality of awk.

Pipelining is a design feature of the UNIX philosophy, and therefore
tantamount to "THE way", which never actually ages :)

It's like the difference in music between an "oldie" and a "classic".
Pipelining is "classic" whereas awk is an "oldie". Of course, the
determination of what qualifies as a "classic" vs. an "oldie" depends
completely upon who is making that determination :)

-- 
Seeya,
Paul
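[For the record, the two pipelines really are interchangeable here. A
quick sketch (the sample filenames are invented) shows `sort | uniq`
and `sort -u` producing identical output, the latter in one process:]

```shell
# Made-up filename list standing in for find output.
printf 'c.txt\na.txt\nb.txt\na.txt\nc.txt\n' > /tmp/names.txt

# Two processes:
sort /tmp/names.txt | uniq

# One process, same result:
sort -u /tmp/names.txt

rm -f /tmp/names.txt
```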
Re: awk assistance
In a message dated: Wed, 13 Nov 2002 20:28:49 EST [EMAIL PROTECTED] said:

> Generally true, I expect, but there are still reasons to use awk.

Agreed!

>> Awk has a lot of limitations which don't exist in perl, like line length
>> (1024 characters?), etc.
>
> FWIW, I believe gawk has removed many of those.  Of course, if you've got
>the GNU tools, there is little reason not to have Perl, too.  :)

True, but didn't the OP say they were on Solaris without access to the
GNU tools? (Maybe I'm confusing discussions.)

> Also FWIW, this awk code
>
>	awk -F/ '{ print $NF }'
>
>appears to run about 4 times faster than this Perl code
>
>	perl -F'/' -ane 'print "$F[$#F]";'

You know, I was trying to make that determination yesterday, but kept
getting interrupted, and therefore never got around to it :)  Thanks!

>, at least in my quick tests.  I mention this out of curiosity more than
>anything else; I highly doubt the performance difference could ever matter
>in practice.

Probably not, but it also depends upon the application. If this type of
thing were being done from a web app running from a CGI, it might turn
out to be faster with perl, if they use mod_perl, than spawning a new
awk instance every time. (Purely speculation on my part.)

-- 
Seeya,
Paul
RE: awk assistance
> -----Original Message-----
> From: Dan Coutu [mailto:coutu@;snowy-owl.com]
> Sent: Thursday, November 14, 2002 8:32 AM
> To: [EMAIL PROTECTED]
> Subject: Re: awk assistance
>
> The nice thing about this approach is that it uses NO code
> of any sort. If using awk is the 'old' way then this approach
> must be the 'ancient' way since it actually uses the original
> UNIX design thinking of pipes and filters that often gets lost
> nowadays.

Not to contradict you, but I think that it is code. It's the language of
the bash shell, and it does use a loop construct, so I would call it
code. (Of course, I majored in communication in college, so actually we
could go further and define what "code" really is, in which case you
would be unable to access a computer without employing some level of
"code" :)

Erik
Re: awk assistance
Michael O'Donnell wrote:

> find / -type f | while read f; do basename $f; done

This does not meet the requirement of providing UNIQUE instances of
filenames though. Easily fixed by piping it through sort and then uniq,
a la:

	find / -type f | while read f; do basename $f; done \
	    | sort | uniq

The nice thing about this approach is that it uses NO code of any sort.
If using awk is the 'old' way then this approach must be the 'ancient'
way since it actually uses the original UNIX design thinking of pipes
and filters that often gets lost nowadays.

-- 
Dan Coutu
Managing Director
Snowy Owl Internet Consulting, LLC
http://www.snowy-owl.com/
Mobile: 603-759-3885
Fax: 603-673-6676
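[A quick way to see the whole pipeline at work is a throwaway directory
tree; the paths below are invented for the demo. The duplicate basename
collapses to a single line of output:]

```shell
# Scratch tree with the same filename appearing in two directories.
mkdir -p /tmp/pipedemo/d1 /tmp/pipedemo/d2
touch /tmp/pipedemo/d1/dup.txt /tmp/pipedemo/d2/dup.txt /tmp/pipedemo/d1/only.txt

# The pipeline from above: one basename per file, then de-duplicated.
find /tmp/pipedemo -type f | while read f; do basename "$f"; done \
    | sort | uniq
# dup.txt
# only.txt

rm -rf /tmp/pipedemo
```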
RE: awk assistance
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:bscott@;ntisys.com]
> Sent: Wednesday, November 13, 2002 8:29 PM
> To: Greater NH Linux User Group
> Subject: Re: awk assistance
>
> Another is if one is writing a shell script that must work even if
> Perl is not installed (seems like a strange idea, yes, but believe it
> or not, Perl is not part of the kernel (neither is Emacs)).

I have found this to be the case on this Solaris box that I sometimes
work with here at work. I guess it's good in a sense, because I probably
should have learned how to use vi before, but I'm really an emacs fan.

(Actually, vi and bash aren't part of the kernel either, are they? IIUC,
they're both external utilities, one being an editor and the other a
shell... but I shouldn't complain too much, since at least I don't have
to use "/bin/sh" and "ed"!)

Erik
Re: awk assistance
On Wed, 13 Nov 2002, at 1:03pm, [EMAIL PROTECTED] wrote:
> I'd argue that 'awk' is "The Old Way" and has been replaced by Perl as
> "The Way" :)

  Generally true, I expect, but there are still reasons to use awk.  One
reason is if one has to deal with lots of crufty old Unix systems that
don't have Perl (not an ideal situation, of course, but life is often
less than ideal).  Another is if one is writing a shell script that must
work even if Perl is not installed (seems like a strange idea, yes, but
believe it or not, Perl is not part of the kernel (neither is Emacs)).
Perl is also unavailable if /usr is not mounted, which can be
significant if you are an initscript.  But, again, what you say is
generally true.  I'm just picky.  :-)

> Awk has a lot of limitations which don't exist in perl, like line length
> (1024 characters?), etc.

  FWIW, I believe gawk has removed many of those.  Of course, if you've
got the GNU tools, there is little reason not to have Perl, too.  :)

  Also FWIW, this awk code

	awk -F/ '{ print $NF }'

appears to run about 4 times faster than this Perl code

	perl -F'/' -ane 'print "$F[$#F]";'

, at least in my quick tests.  I mention this out of curiosity more than
anything else; I highly doubt the performance difference could ever
matter in practice.

-- 
Ben Scott <[EMAIL PROTECTED]>
Re: awk assistance
In a message dated: Wed, 13 Nov 2002 15:28:38 EST Michael O'Donnell said:

>find / -type f | while read f; do basename $f; done

Yeah, but that's neither perl nor awk, the only 2 languages mentioned
in the original post :)

-- 
Seeya,
Paul
Re: awk assistance
On Wed, Nov 13, 2002 at 03:28:38PM -0500, you wrote:
> find / -type f | while read f; do basename $f; done

find / -type f -exec basename {} \;

:-)

-- 
Roger H. Goun
Brentwood Country Animal Hospital, P.C.
Chief Kennel Officer
Exeter, New Hampshire, USA
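[An aside not in the original exchange: `-exec ... \;` forks one
basename process per file, so on a large tree the pipe-based variants
are cheaper; the output is the same either way. A scratch tree (paths
invented) shows the equivalence:]

```shell
# Throwaway tree; both commands should list the same three names.
mkdir -p /tmp/execdemo/a/b
touch /tmp/execdemo/one.txt /tmp/execdemo/a/two.txt /tmp/execdemo/a/b/three.txt

# One basename process per matched file:
find /tmp/execdemo -type f -exec basename {} \; | sort

# One loop, same result:
find /tmp/execdemo -type f | while read f; do basename "$f"; done | sort

rm -rf /tmp/execdemo
```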
Re: awk assistance
"Steven W. Orr" <[EMAIL PROTECTED]> writes:

> Everyone starts with the Camel book but everyone soon realizes that
> it's pretty crappy to try to learn from. It's best left as a reference
> book.

I dunno, it worked for me.  I disagree with your assessment.

> The BEST book for learning perl that I've found is Object Oriented
> Perl by Damian Conway.

This is also a good book.

--kevin
-- 
Kevin D. Clark / Cetacean Networks / Portsmouth, N.H. (USA)
cetaceannetworks.com!kclark (GnuPG ID: B280F24E)
alumni.unh.edu!kdc
Re: awk assistance
find / -type f | while read f; do basename $f; done
RE: awk assistance
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:pll@;lanminds.com]
> Sent: Wednesday, November 13, 2002 3:15 PM
> To: Price, Erik
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: awk assistance
>
> Obviously the 'find ./ -type f' portion could be replaced with
> 'cat findoutput.txt', but that's an extra shell process, better done
> as:
>
>	perl -e 'some perl code here' findoutput.txt

If you can believe it, I didn't even know that you could follow a
"perl -e" one-liner with a file name and have it act on that file.

> Since I used the -F to set the split character to '/' and -a to turn
> on autosplit mode, and -n to create a loop around my code, what I end
> up with is this:
>
>	while (<>) {
>		@F = split('/');
>		print $F[$#F];
>	}

Wow... that autosplitter is a timesaver alright. I will definitely
remember to use that (with the -n flag too).

I learned more about Perl in this post than I have in the past couple of
weeks. (Probably because I haven't been learning much about Perl in the
past couple of weeks, but what I'm trying to say is thanks for the
tips.)

Erik
Re: awk assistance
In a message dated: Wed, 13 Nov 2002 13:30:28 EST "Price, Erik" said:

>> Why must it be lengthy?
>>
>>	find ./ -type f | perl -F'/' -ane 'print "$F[$#F]";'
>>
>> seems to do the trick just fine. And if you want to weed out
>> duplicates pipe the output through 'uniq' with your choice of
>> switches.
>
>My own one-liner would have been lengthy. It would have gone
>something like (untested):
>
>cat findoutput.txt | perl -e \
>  'while(<>){m!([^/]+)$!;print $1}' | uniq

Well, the above code does exactly what you're attempting to do here.
Obviously the 'find ./ -type f' portion could be replaced with
'cat findoutput.txt', but that's an extra shell process, better done as:

	perl -e 'some perl code here' findoutput.txt

The rest of this code:

>>	find ./ -type f | perl -F'/' -ane 'print "$F[$#F]";'

says:

	perl	- run the perl interpreter
	-F'/'	- same as awk, we're going to split on the '/' character
	-a	- turn on 'autosplit' mode and place the results of the
		  split() into @F
	-n	- place a while loop around your code

IOW, using the -an construct, assume the following loop around any other
code:

	while (<>) {
		@F = split(' ');
		# your code goes here
	}

	-e	- execute the following code snippet

	'print "$F[$#F]";'	- print the last element of the array @F

Since I used -F to set the split character to '/', -a to turn on
autosplit mode, and -n to create a loop around my code, what I end up
with is this:

	while (<>) {
		@F = split('/');
		print $F[$#F];
	}

Which says:

	while there's still something to read
		split on the '/' and place the results into the @F array
		then print the last element of the array

The '$#' construct is used in perl to determine the index of the last
element in an array. So, with any given array, $arrayname[$#arrayname]
indexes the last element.

>I still haven't finished the Camel Book. I'm not very good with
>Perl's command line switches and, without resorting to a man page,
>I still have no idea what the above code does! ;)

On Wed, 13 Nov 2002, Steven W. Orr wrote:

Steven> Everyone starts with the Camel book but everyone soon
Steven> realizes that it's pretty crappy to try to learn from. It's
Steven> best left as a reference book. The BEST book for learning
Steven> perl that I've found is Object Oriented Perl by Damian
Steven> Conway.

I agree that Damian's book is fantastic and that the Camel is a
reference, not a tutorial. However, I don't know that I agree that
Damian's book is the best for learning Perl. There are a lot of uses for
perl which do not require OO and will, in fact, suffer from using it.
It's been a while since I picked up Damian's book, but I don't recall
thinking it a very good book for beginners.

IMO, the best way to learn any language is to just use it. I picked up
perl with nothing more than the on-line perl documentation. I already
knew the basics of programming, so I guess all I needed was a reference
to determine syntax, etc. I don't think I ever picked up a copy of the
Camel until well after I had been hacking perl for about 3 or 4 years.
But I didn't need to, since the Camel is nothing more than the perldocs
on dead trees :)

If you really want to learn perl, or any other language, just use it.
When you get stumped, read the docs, and ask questions when you don't
understand the docs. comp.lang.perl.* and the various perl-mongers
mailing lists are great forums for this!

-- 
Seeya,
Paul
RE: awk assistance
On Wed, 13 Nov 2002, Price, Erik wrote:

>> From: [EMAIL PROTECTED] [mailto:pll@;lanminds.com]
>> Sent: Wednesday, November 13, 2002 1:04 PM
>>
>> In a message dated: Wed, 13 Nov 2002 10:53:45 EST
>> "Price, Erik" said:
>>
>> >I have a file that contains the redirected output of a big "find"
>> >command. I want to learn how to quickly scan this file for unique
>> >file names, and while I could write a lengthy Perl one-liner,
>>
>> Why must it be lengthy?
>>
>>	find ./ -type f | perl -F'/' -ane 'print "$F[$#F]";'
>>
>> seems to do the trick just fine. And if you want to weed out
>> duplicates pipe the output through 'uniq' with your choice of
>> switches.
>
>I still haven't finished the Camel Book. I'm not very good with
>Perl's command line switches and, without resorting to a man page,
>I still have no idea what the above code does! ;)
>
>My own one-liner would have been lengthy. It would have gone
>something like (untested):
>
>cat findoutput.txt | perl -e \
>  'while(<>){m!([^/]+)$!;print $1}' | uniq
>
>... still learning...

Everyone starts with the Camel book, but everyone soon realizes that
it's pretty crappy to try to learn from. It's best left as a reference
book. The BEST book for learning perl that I've found is Object Oriented
Perl by Damian Conway.

-- 
-Time flies like the wind. Fruit flies like a banana. Stranger things have-
-happened but none stranger than this. Does your driver's license say Organ-
-Donor? Black holes are where God divided by zero. Listen to me! We are all-
-individuals! What if this weren't a hypothetical question?
[EMAIL PROTECTED]
Re: awk assistance
"Scott Prive" said:

>Be aware this catches all last fields... *including* directories. Use
>`find -type f` if you want to pre-filter anything that isn't a file.
>
>Most UNIX literature gives awk very little coverage (a pity). The awk
>User Guide is helpful:
>http://www.gnu.org/manual/gawk-3.1.1/gawk.html

I'd also recommend The AWK Programming Language, by the authors of awk:
Aho, Weinberger, and Kernighan (AWK). Classic Kernighan style.

>> -----Original Message-----
>> From: Price, Erik [mailto:eprice@;ptc.com]
>> Sent: Wednesday, November 13, 2002 11:19 AM
>> To: Mark Polhamus
>> Cc: [EMAIL PROTECTED]
>> Subject: RE: awk assistance
>>
>> > awk -F/ '{print $NF}'
>>
>> Works perfect.
>>
>> Erik

-- 
Tom Buskey
RE: awk assistance
Be aware this catches all last fields... *including* directories. Use
`find -type f` if you want to pre-filter anything that isn't a file.

Most UNIX literature gives awk very little coverage (a pity). The awk
User Guide is helpful:
http://www.gnu.org/manual/gawk-3.1.1/gawk.html

> -----Original Message-----
> From: Price, Erik [mailto:eprice@;ptc.com]
> Sent: Wednesday, November 13, 2002 11:19 AM
> To: Mark Polhamus
> Cc: [EMAIL PROTECTED]
> Subject: RE: awk assistance
>
> > awk -F/ '{print $NF}'
>
> Works perfect.
>
> Erik
RE: awk assistance
> -----Original Message-----
> From: Mark Polhamus [mailto:meplists@;earthlink.net]
> Sent: Wednesday, November 13, 2002 11:12 AM
> To: Price, Erik
> Cc: [EMAIL PROTECTED]
> Subject: Re: awk assistance
>
> Price, Erik wrote:
> > ...
> > If not, the other alternative I was thinking of was the awk
> > equivalent of
> >
> > 1. set the field separator to a slash
> > 2. awk the file for the last field.
> >
> > I've figured out how to set the field separator (from the man page)
> > but it seems I need to use a numeric variable to represent the
> > field I want to print. I don't know of a way to get the last field
> > for any given record/line since on one line it could be $5 and on
> > another it might be $7, for example.
>
> awk -F/ '{print $NF}'

Works perfect.

Erik
Re: awk assistance
Price, Erik wrote:
> ...
> If not, the other alternative I was thinking of was the awk
> equivalent of
>
> 1. set the field separator to a slash
> 2. awk the file for the last field.
>
> I've figured out how to set the field separator (from the man page)
> but it seems I need to use a numeric variable to represent the
> field I want to print. I don't know of a way to get the last field
> for any given record/line since on one line it could be $5 and on
> another it might be $7, for example.

awk -F/ '{print $NF}'

-- 
Mark
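[The trick here is that NF holds the number of fields on the *current*
record, so $NF always names the last field, whatever the depth of the
path. Sample paths below are invented for the demo:]

```shell
# Paths of different depths; $NF picks the last field on each line,
# regardless of how many '/'-separated fields that line has.
printf '/etc/passwd\n/usr/local/bin/perl\n' | awk -F/ '{ print $NF }'
# passwd
# perl
```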
Re: awk assistance
find / | sed -e 's;^.*/;;'
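[For completeness: the sed expression deletes everything up to and
including the last '/' on each line ('.*' is greedy, so it matches
through the final slash), and using ';' as the substitution delimiter
simply avoids having to escape slashes. A quick check on invented
paths:]

```shell
# Greedy '^.*/' consumes through the final slash, leaving the basename.
printf '/usr/local/bin/perl\n/etc/passwd\n' | sed -e 's;^.*/;;'
# perl
# passwd
```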