In a message dated: Thu, 14 Nov 2002 16:40:47 EST
[EMAIL PROTECTED] said:
>On Thu, 14 Nov 2002, at 1:04pm, [EMAIL PROTECTED] wrote:
>> it might turn out to be faster with perl if the use mod_perl than spawning
>> a new aw
On Thu, 14 Nov 2002, at 1:04pm, [EMAIL PROTECTED] wrote:
> If this type of thing were being done from a web app running from a CGI,
> it might turn out to be faster with perl if they use mod_perl than spawning
> a new awk instance every time.
Gawd. If you're invoking shell tools for a web page t
In a message dated: Thu, 14 Nov 2002 08:32:11 EST
Dan Coutu said:
>Michael O'Donnell wrote:
>
>> find / -type f | while read f; do basename $f; done
>>
>
>
>This does not meet the requirement of providing UNIQUE
>
>instances of filenames though. Easily fixed by piping it
>through sort and then uniq.
In a message dated: Wed, 13 Nov 2002 20:28:49 EST
[EMAIL PROTECTED] said:
> Generally true, I expect, but there are still reasons to use awk.
Agreed!
>> Awk has a lot of limitations which don't exist in perl, like line length
>> (1024 characters?), etc.
>
> FWIW, I believe gawk has removed that limit.
> -Original Message-
> From: Dan Coutu [mailto:coutu@;snowy-owl.com]
> Sent: Thursday, November 14, 2002 8:32 AM
> To: [EMAIL PROTECTED]
> Subject: Re: awk assistance
>
>
> The nice thing about this approach is that it uses NO code
> of any sort. If usin
Michael O'Donnell wrote:
find / -type f | while read f; do basename "$f"; done
This does not meet the requirement of providing UNIQUE
instances of filenames though. Easily fixed by piping it
through sort and then uniq a la:
find / -type f | while read f; do basename "$f"; done \
| sort | uniq
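As an aside, the sort | uniq pair can be collapsed with sort's -u flag, and the per-file basename calls can be skipped entirely if GNU find is available (an assumption; the %f directive is a GNU -printf extension, not POSIX):

```shell
# Print the basename of every regular file, once each.
# GNU find's -printf '%f\n' emits just the filename component,
# so no basename(1) subprocess is spawned per file.
find / -type f -printf '%f\n' | sort -u
```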
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:bscott@;ntisys.com]
> Sent: Wednesday, November 13, 2002 8:29 PM
> To: Greater NH Linux User Group
> Subject: Re: awk assistance
>
> Another is if one is writing a shell script that
> must work even if
On Wed, 13 Nov 2002, at 1:03pm, [EMAIL PROTECTED] wrote:
> I'd argue that 'awk' is "The Old Way" and has been replaced by Perl as
> "The Way" :)
Generally true, I expect, but there are still reasons to use awk. One
reason is if one has to deal with lots of crufty old Unix systems that don't
have perl installed.
In a message dated: Wed, 13 Nov 2002 15:28:38 EST
Michael O'Donnell said:
>find / -type f | while read f; do basename $f; done
Yeah, but that's neither perl nor awk, the only 2 languages mentioned
in the original post :)
--
Seeya,
Paul
--
It may look like I'm just sitting here doing n
On Wed, Nov 13, 2002 at 03:28:38PM -0500, you wrote:
> find / -type f | while read f; do basename $f; done
find / -type f -exec basename {} \;
:-)
--
Roger H. Goun Brentwood Country Animal Hospital, P.C.
Chief Kennel Officer Exeter, New Hampshire, USA
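If the cost of one basename process per file is the worry, modern GNU coreutils offers basename -a (a GNU extension, not POSIX, and newer than this thread), which prints one basename per argument, so -exec ... + can batch many paths into each invocation:

```shell
# One basename process per batch instead of one per file:
# -exec ... + packs as many paths as fit into each invocation,
# and basename -a prints the final component of every argument.
find / -type f -exec basename -a {} +
```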
"Steven W. Orr" <[EMAIL PROTECTED]> writes:
> Everyone starts with the Camel book but everyone soon realizes that it's
> pretty crappy to try to learn from. It's best left as a reference
> book.
I dunno, it worked for me. I disagree with your assessment.
> The BEST book for learning perl that
find / -type f | while read f; do basename $f; done
___
gnhlug-discuss mailing list
[EMAIL PROTECTED]
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:pll@;lanminds.com]
> Sent: Wednesday, November 13, 2002 3:15 PM
> To: Price, Erik
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: awk assistance
>
> Obviously the 'find ./ -type f' por
In a message dated: Wed, 13 Nov 2002 13:30:28 EST
"Price, Erik" said:
>> Why must it be lengthy?
>>
>> find ./ -type f | perl -F'/' -ane 'print "$F[$#F]";'
>>
>> seems to do the trick just fine. And if you want to weed out
>> duplicates pipe the output through 'uniq' with your choice of
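Folding the dedup step into that one-liner gives a pipeline like the following sketch ($F[-1] is the same last-field index as $F[$#F]; -l chomps input lines so the trailing newline doesn't end up inside the last field):

```shell
# Split each path on '/' (-F'/'), print the last field, dedup.
find . -type f | perl -F'/' -lane 'print $F[-1]' | sort -u
```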
On Wed, 13 Nov 2002, Price, Erik wrote:
=>> From: [EMAIL PROTECTED] [mailto:pll@;lanminds.com]
=>> Sent: Wednesday, November 13, 2002 1:04 PM
=>>
=>> In a message dated: Wed, 13 Nov 2002 10:53:45 EST
=>> "Price, Erik" said:
=>>
=>> >I have a file that contains the redirected output of a big "find
> -Original Message-
> From: Price, Erik [mailto:eprice@;ptc.com]
> Sent: Wednesday, November 13, 2002 11:19 AM
> To: Mark Polhamus
> Cc: [EMAIL PROTECTED]
> Subject: RE: awk assistance
>
>
>
>
> > -Original Message-
> > From: Mark Polhamus [mail
> -Original Message-
> From: Mark Polhamus [mailto:meplists@;earthlink.net]
> Sent: Wednesday, November 13, 2002 11:12 AM
> To: Price, Erik
> Cc: [EMAIL PROTECTED]
> Subject: Re: awk assistance
>
>
> Price, Erik wrote:
> > ...
> > If not, the
Price, Erik wrote:
> ...
If not, the other alternative I was thinking of was the awk
equivalent of
1. set the field separator to a slash
2. awk the file for the last field.
I've figured out how to set the field separator (from the man page)
but it seems I need to use a numeric variable to represent the last field.
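For the record, awk names the field count NF, so $NF is always the last field; with '/' as the separator that is the path's final component, which is exactly the approach sketched above:

```shell
# -F/ sets the field separator to a slash;
# $NF is the last field of each line, i.e. the basename.
# Append "| sort -u" to keep unique names only.
find / -type f | awk -F/ '{print $NF}'
```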
find / | sed -e 's;^.*/;;'
Hi,
I have a file that contains the redirected output of a big "find"
command. I want to learn how to quickly scan this file for unique
file names, and while I could write a lengthy Perl one-liner, I was
wondering if it could be done more simply with awk. But since I
have never used awk, I w