> hehe nope! I'll give the data sort feature a gander when I'm in the office
> tomorrow. Also, 'uniq -c' provided what she really needed - the number
'uniq -c' is one I haven't tried yet, hmm. This type of problem
(counting occurrences of patterns in a file) has usually been easily
solvable using
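(For reference, a minimal sketch of that kind of counting pipeline - the
filename 'urlfile' is just a stand-in:

sort urlfile | uniq -c | sort -rn

sort groups identical lines together, uniq -c collapses each group to a
single line prefixed with its count, and the final sort -rn lists the most
frequent lines first.)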
On Monday 03 Feb 2003 6:10 am, civileme wrote:
> On Sunday 02 February 2003 12:46 am, magnet wrote:
> > I have a large text file containing thousands of url's, one per line, and
> > am trying to find a suitable utility that will strip out identical lines
> > and leave a condensed file. Can anyone suggest a good solution?
> > Thanks :)
---
On Sunday 02 February 2003 06:03 pm, David E. Fox wrote:
> > of lines long), so she consulted with our MS guru who didn't have a
> > solution. Didn't take long for me and my SCO box to provide her with
>
> Her M/S "guru" didn't know about Excel's Data Sort feature? She could
> have had that report
> I have a large text file containing thousands of url's, one per line, and am
> trying to find a suitable utility that will strip out identical lines and
> leave a condensed file. Can anyone suggest a good solution?
sort < urlfile | uniq > newfile
HTH
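(Worth noting: any POSIX sort can fold the uniq step into the sort itself
with the -u flag, so the same result comes from a single command - 'urlfile'
and 'newfile' here are the names from the pipeline above:

sort -u urlfile > newfile

The pipe version is still handy when you want uniq's own options, e.g. -c to
count how many times each line appeared.)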
Perl will do it, but you will need to write the script yourself, or there
might be one on the net - have a look.
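(For anyone who wants the ready-made version: a common Perl one-liner for
exactly this job, shown as a sketch with 'urlfile' as a placeholder name, is

perl -ne 'print unless $seen{$_}++' urlfile > newfile

Unlike sort | uniq it keeps the first occurrence of each line in its
original order, since it tracks already-seen lines in a hash instead of
sorting.)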
On Sunday 02 Feb 2003 9:46 am, magnet wrote:
> I have a large text file containing thousands of url's, one per line, and
> am trying to find a suitable utility that will strip out identical lines
> and leave a condensed file. Can anyone suggest a good solution?
I have a large text file containing thousands of url's, one per line, and am
trying to find a suitable utility that will strip out identical lines and
leave a condensed file. Can anyone suggest a good solution?
Thanks :)
--
magnet
Registered Linux User: 281659
Registered machines: 163839,163840