Tom Buskey <[EMAIL PROTECTED]> writes:
> Unix Shell Programming by Kochan and Wood is a classic on shell programming
>
>
> Portable Shell Programming by Blinn
The AWK Programming Language by Aho, Weinberger and Kernighan
I'm also a big fan of Kernighan and Pike's "The UNIX Programming
Environment".
[EMAIL PROTECTED] (Kevin D. Clark) writes:
> Zhao Peng writes:
>
>> I'm back, with another "extract string" question. //grin
>
>
> find FOLDERNAME -name \*sas7bdat -print | sed 's/.*\///' | cut -d _ -f 2 |
> sort -u > somefile.txt
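For reference, that pipeline can be exercised on the sample names from the original question; the printf below stands in for find's output, and the directory prefix is made up for illustration:

```shell
# Feed sample file names through the same sed | cut | sort stages;
# printf stands in for find, and "somedir/" is a hypothetical path.
printf '%s\n' \
    somedir/abc_st_nh_num.sas7bdat \
    somedir/abc_st_vt_num.sas7bdat \
    somedir/abcd_region_South_num.sas7bdat |
  sed 's/.*\///' |   # strip any leading path, keeping the bare file name
  cut -d _ -f 2 |    # keep the second _-separated field (string2)
  sort -u            # sort and drop duplicates
```

The result is the two unique string2 values, `region` and `st`, one per line; appending `> somefile.txt` sends them to a file as in the original command.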
Or, to simplify this:
find ./ -name \*sas7bdat | awk -F_ '{print $2}' | sort -u
Zhao Peng wrote:
string1_string2_string3_string4.sas7bdat
abc_st_nh_num.sas7bdat
abc_st_vt_num.sas7bdat
abc_st_ma_num.sas7bdat
abcd_region_NewEngland_num.sas7bdat
abcd_region_South_num.sas7bdat
My goal is to:
1, extract string2 from each file name
2, then sort them and keep only unique ones
3, then output them to a .txt file. (one unique string2 per line)
"cat -n" will number output lines
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss
On 1/13/06, Ben Scott <[EMAIL PROTECTED]> wrote:
> On 1/13/06, Zhao Peng <[EMAIL PROTECTED]> wrote:
> > Is it possible to number the extracted string2?
>
> find -name \*sas7bdat -printf '%f\n' | cut -d _ -f 2 | sort | uniq | cat -n
I forgot to mention: If the *only* files in that directory are t
On 1/13/06, Zhao Peng <[EMAIL PROTECTED]> wrote:
> Is it possible to number the extracted string2?
find -name \*sas7bdat -printf '%f\n' | cut -d _ -f 2 | sort | uniq | cat -n
Run that pipeline in the directory you are interested in.
The find(1) command finds files, based on their name or oth
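The tail of that pipeline can also be tried on its own, with printf standing in for `find -printf '%f\n'` (sample names from the question):

```shell
# Number the unique string2 values; printf stands in for find's output
printf '%s\n' \
    abc_st_nh_num.sas7bdat \
    abc_st_vt_num.sas7bdat \
    abcd_region_South_num.sas7bdat |
  cut -d _ -f 2 |   # keep string2
  sort | uniq |     # equivalent to sort -u
  cat -n            # prefix each line with a line number
```

This prints two numbered lines, one for `region` and one for `st`.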
On Fri, Jan 13, 2006 at 11:40:26AM -0500, Zhao Peng wrote:
> Kevin,
>
> Thank you very much! I really appreciate it.
>
> I like your "find" approach, it's simple and easy to understand.
>
> I'll also try to understand your perl approach, when I get time to start
> learning it. (Hopefully it won't be un-fulfilled forever)
Kevin,
Thank you very much! I really appreciate it.
I like your "find" approach, it's simple and easy to understand.
I'll also try to understand your perl approach, when I get time to start
learning it. (Hopefully it won't be un-fulfilled forever)
I have one more question:
Is it possible to
Zhao Peng wrote:
My goal is to:
1, extract string2 from each file name
2, then sort them and keep only unique ones
3, then output them to a .txt file. (one unique string2 per line)
It is really interesting how many ways there are to do things in *nix. My
first reaction, if this is a one time
On Jan 12, 2006, at 19:40, Zhao Peng wrote:
I also downloaded an e-book called "Learning Perl" (O'Reilly,
4th Edition), and had a quick look thru its Table of Contents, but did
not find any chapter which looks likely to address any issue related
to my question.
Good start. Read these section
On 1/12/06, Ben Scott <[EMAIL PROTECTED]> wrote:
On 1/12/06, Zhao Peng <[EMAIL PROTECTED]> wrote:
> I'm back, with another "extract string" question. //grin
It sounds like you could use a tutorial on Unix text processing and
command line tools, specifically, one which addresses pipes and redirection
On Jan 12, 2006, at 8:25 PM, Ben Scott wrote:
It sounds like you could use a tutorial on Unix text processing and
command line tools, specifically, one which addresses pipes and
redirection, as well as the standard text tools (grep, cut, sed, awk,
etc.). While Paul's recommendation about the
Zhao Peng writes:
> I'm back, with another "extract string" question. //grin
find FOLDERNAME -name \*sas7bdat -print | sed 's/.*\///' | cut -d _ -f 2 | sort
-u > somefile.txt
or
perl -MFile::Find -e 'find(sub{$string2 = (split /_/)[2]; $seen{$string2}++; },
@ARGV); map { print "$_\n"; } keys %seen;' FOLDERNAME
On Thu, 2006-01-12 at 19:40 -0500, Zhao Peng wrote:
> For example:
> abc_st_nh_num.sas7bdat
> abc_st_vt_num.sas7bdat
> abc_st_ma_num.sas7bdat
> abcd_region_NewEngland_num.sas7bdat
> abcd_region_South_num.sas7bdat
You're not the only one learning here.
I put these names into a file called str2-t
On 1/12/06, Zhao Peng <[EMAIL PROTECTED]> wrote:
> I'm back, with another "extract string" question. //grin
It sounds like you could use a tutorial on Unix text processing and
command line tools, specifically, one which addresses pipes and
redirection, as well as the standard text tools (grep, c
On 1/11/06, Ben Scott <[EMAIL PROTECTED]> wrote:
> I felt really embarrassed with my stupid mistake. //blush
You think you were embarrassed? There was a certain instance of
someone accidentally hitting "Reply to All" to a list message which is
still remembered to this day. I won't mention any names
Zhao Peng <[EMAIL PROTECTED]> writes:
> You said that "there is an extra column in the 3rd line". I disagree
> with you from my perspective. As you can see, there are 3 commas in
> between "jesse" and "Dartmouth college". For these 3 commas, again, if
> we think of the 2nd one as merely an indicatio
perl
split on the char pair ,"
Take last element of returned array, either remove the " at the end or replace the one you ate with the split.
Keep a running variable containing largest length encountered so far.
Add 10 to be safe. ;-)
Any regexp I have to think about for more than 30 seconds is
On 1/11/06, Bill McGonigle <[EMAIL PROTECTED]> wrote:
On Jan 11, 2006, at 08:42, [EMAIL PROTECTED] wrote:
> This poses an interesting problem. The "," is being used for two
> purposes: a delimiter *AND* as a place holder.
Now, for the Lazy, Perl regular expressions are a state machine of
sorts. I sus
On 1/11/06, Zhao Peng <[EMAIL PROTECTED]> wrote:
> Secondly I'm sorry for the big stir-up as to "homework problems" which
> flooded the list, since I'm the origin of it.
*Trust me*, that wasn't a "big" stir-up. Search the list archives
for "taxes" if you want to see big ones. The homework thread w
On Jan 11, 2006, at 08:42, [EMAIL PROTECTED] wrote:
This poses an interesting problem. The "," is being used for two
purposes: a delimiter *AND* as a place holder.
I tried to prove to myself last night that this method would produce
unresolvable ambiguities, but if you think like a state mach
On 1/11/06, Zhao Peng <[EMAIL PROTECTED]> wrote:
Hi All,
First I really cannot be more grateful for the answers to my question
from all of you, I appreciate your help and time. I'm especially touched
by the outpouring of response on this list, which I have never
experienced before anywhere else.
William D Ricker <[EMAIL PROTECTED]> writes:
>> On 1/10/06, Paul Lussier <[EMAIL PROTECTED]> wrote:
>> > > perl -ne 'split ","; $_ = $_[2]; s/(^")|("$)//g; print if m/univ/;' <
>> > > abc.txt > def.txt
>> > Egads!
[outstanding explanation I didn't have time to write myself removed ]
> None of th
Zhao Peng writes:
> ... your "grep univ abc.txt | cut -f3 -d, | sed s/\"//g >> dev.txt"
> works.
It "works" but is it correct?
What happens if you pass it the following line of input?:
"Aunivz","28","Cambridge Community College"
By your original problem description, you don't want to see "C
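The point is easy to demonstrate: grep matches "univ" anywhere on the line, so a name that happens to contain those letters drags an unwanted school through the pipeline:

```shell
# "univ" appears inside the name "Aunivz", so grep passes the whole line
# along, and cut hands back a community college, not a university.
printf '%s\n' '"Aunivz","28","Cambridge Community College"' |
  grep univ | cut -f3 -d, | sed 's/"//g'
```

The output is `Cambridge Community College`, which the original problem statement says should be excluded.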
Zhao Peng <[EMAIL PROTECTED]> writes:
> First I really cannot be more grateful for the answers to my question
> from all of you, I appreciate your help and time. I'm especially
> touched by the outpouring of response on this list, which I have
> never experienced before anywhere else.
Zhao, thi
-- Original message --
From: Zhao Peng <[EMAIL PROTECTED]>
> Hi All,
>
> Kenny, your "grep univ abc.txt | cut -f3 -d, | sed s/\"//g >> dev.txt"
> works. I mis-read /\ as a similar sign on the top of "6" key on the
> keyboard(so when I typed that sign, I felt st
Zhao,
I am really busy right now, so I have not read all of the responses to your
problem completely, but I did notice this:
[EMAIL PROTECTED] said:
> You said that "there is an extra column in the 3rd line". I disagree with
> you from my perspective. As you can see, there are 3 commas in betw
Hi All,
First I really cannot be more grateful for the answers to my question
from all of you, I appreciate your help and time. I'm especially touched
by the outpouring of response on this list, which I have never
experienced before anywhere else.
Secondly I'm sorry for the big stir-up as
> On 1/10/06, Paul Lussier <[EMAIL PROTECTED]> wrote:
> > > perl -ne 'split ","; $_ = $_[2]; s/(^")|("$)//g; print if m/univ/;' <
> > > abc.txt > def.txt
> > Egads!
That's a literal start at a Perl "bring the grep, sed, and cut-or-awk
into one process", but it's not maximally Perl-ish. It is also
i
Ben Scott <[EMAIL PROTECTED]> writes:
> On 1/10/06, Jon maddog Hall <[EMAIL PROTECTED]> wrote:
>> I was the senior systems administrator for Bell Labs in North Andover, MA. I
>> got the job without ever having seen a UNIX system.
>
> Well, really. How many people *had* seen a UNIX system, back
Ben Scott writes:
> Is there a tool that quickly and easily extracts one or more columns
> of text (separated by whitespace) from an output stream? I'm familiar
> with the
>
> awk '{ print $3 }'
>
> mechanism, but I've always felt that was clumsy. I've tried to get
> cut(1) to do it
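The difference shows up as soon as the columns are separated by runs of whitespace: awk collapses them into one separator, while cut's -d ' ' treats every single space as a field boundary:

```shell
# awk splits on runs of whitespace, so $3 is the third word
echo 'alpha   beta gamma' | awk '{ print $3 }'
# cut counts each space separately, so field 3 here is one of the empties
echo 'alpha   beta gamma' | cut -d ' ' -f 3
```

The awk command prints `gamma`; the cut command prints an empty line, because the run of spaces after `alpha` creates empty fields 2 and 3.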
-- Original message --
From: Zhao Peng <[EMAIL PROTECTED]>
> Kenny,
>
> Thank you for your suggestion.
>
> The following line works:
> grep univ abc.txt | cut -f3 -d, >> dev.txt.
>
>
> While the following line intended to remove quotes does NOT work:
> grep uni
On 1/10/06, Jon maddog Hall <[EMAIL PROTECTED]> wrote:
> I was the senior systems administrator for Bell Labs in North Andover, MA. I
> got the job without ever having seen a UNIX system.
Well, really. How many people *had* seen a UNIX system, back then? ;-)
(Sorry, couldn't resist.)
> It
[EMAIL PROTECTED] said:
>> While the following line intended to remove quotes does NOT work:
>> grep univ abc.txt | cut -f3 -d, | sed s/\"//g >> dev.txt
>> It resulted in a line starting with a ">" prompt, and did not output dev.txt
> I can't see any reason why that should be happening. As a matte
On 1/10/06, Drew Van Zandt <[EMAIL PROTECTED]> wrote:
> While it does seem like a few man page pointers would be better (more
> instructive in the long run), I have to admit I wasn't familiar with cut, so
> I've learned something from this one.
Since we're on the subject...
Is there a tool th
-- Original message --
From: Paul Lussier <[EMAIL PROTECTED]>
> [EMAIL PROTECTED] writes:
>
> > Actually, if you are looking for only lines that contain the string "univ",
> then you would want to grep for it:
> >
> > grep univ abc.txt | cut -f3 -d, >> dev.txt.
>
On 1/10/06, Paul Lussier <[EMAIL PROTECTED]> wrote:
> > perl -ne 'split ","; $_ = $_[2]; s/(^")|("$)//g; print if m/univ/;' <
> > abc.txt > def.txt
>
> Egads!
Egads?
-- Ben "As I was saying about explanation..." Scott
While it does seem like a few man page pointers would be better (more
instructive in the long run), I have to admit I wasn't familiar with
cut, so I've learned something from this one.
--Drew
Zhao Peng <[EMAIL PROTECTED]> writes:
> While the following line intended to remove quotes does NOT work:
> grep univ abc.txt | cut -f3 -d, | sed s/\"//g >> dev.txt
> It resulted in a line starting with a ">" prompt, and did not output dev.txt
I can't see any reason why that should be happening. A
> Ooo, look! - a new business model for Lugs!
I happen to like these threads and far from regarding
them as a burden I think they're a pleasant diversion
and extremely useful as learning opportunities.
But I've been asking for a long time when our IPO will
be happening; we've got more talent an
[EMAIL PROTECTED] writes:
> Actually, if you are looking for only lines that contain the string "univ",
> then you would want to grep for it:
>
> grep univ abc.txt | cut -f3 -d, >> dev.txt.
Why are you appending to dev.txt? (or def.txt even). Are you assuming
the file already exists and don't w
Ben Scott <[EMAIL PROTECTED]> writes:
> Here's one way, as a Perl one-liner:
>
> perl -ne 'split ","; $_ = $_[2]; s/(^")|("$)//g; print if m/univ/;' <
> abc.txt > def.txt
Egads!
--
Seeya,
Paul
Zhao Peng <[EMAIL PROTECTED]> writes:
> Hi
>
> Suppose that I have a file called abc.txt, which contains the
> following 5 lines (columns are delimited by ",")
>
> "name","age","school"
> "jerry" ,"21","univ of Vermont"
> "jesse","28","Dartmouth college"
> "jack","18","univ of Penn"
> "john","20",
Ooo, look! - a new business model for Lugs!
Achieve Lug financial independence today!
Now your Lug can achieve its financial funding goals simply by charging
25 cents for each shell scripting homework problem answered and 50 cents
for extended explanations such as rendered below. :-)
All we need
On 1/10/06, Zhao Peng <[EMAIL PROTECTED]> wrote:
> While the following line intended to remove quotes does NOT work:
> grep univ abc.txt | cut -f3 -d, | sed s/\"//g >> dev.txt
> It resulted in a line starting with a ">" prompt, and did not output dev.txt
The ">" prompt indicates the shell thinks you are
On 1/10/06, Whelan, Paul <[EMAIL PROTECTED]> wrote:
> Like so: cat abc.txt | cut -d, -f3
1. Randal Schwartz likes to call that UUOC (Useless Use Of cat). :-)
You can just do this instead:
cut -d, -f3 < abc.txt
If you like the input file at the start of the command line, that's legal, too
On 1/10/06, Zhao Peng <[EMAIL PROTECTED]> wrote:
> how could I extract the string which
> contains "univ" and create an output file called def.txt, which only has
> 3 following lines:
Here's one way, as a Perl one-liner:
perl -ne 'split ","; $_ = $_[2]; s/(^")|("$)//g; print if m/univ/;' <
abc.txt > def.txt
Kenny,
Thank you for your suggestion.
The following line works:
grep univ abc.txt | cut -f3 -d, >> dev.txt.
While the following line intended to remove quotes does NOT work:
grep univ abc.txt | cut -f3 -d, | sed s/\"//g >> dev.txt
It resulted in a line starting with a ">" prompt, and did not output dev.txt
Actually, if you are looking for only lines that contain the string "univ",
then you would want to grep for it:
grep univ abc.txt | cut -f3 -d, >> dev.txt.
Paul's example would give you the third field of each line, even if they don't
have "univ" in them. Now, if you wanted to remove the quotes
Like so: cat abc.txt | cut -d, -f3
Thanks.
-Original Message-
From: Zhao Peng [mailto:[EMAIL PROTECTED]
Sent: Tuesday, January 10, 2006 11:51 AM
To: gnhlug-discuss@mail.gnhlug.org
Subject: extract string
Hi
Suppose that I have a file called abc.txt, which contains the following
5 lines (columns are delimited by ",")