Piping the output to "sort | uniq" might be useful.

Given the file "sort.file" with the following contents:

1
5
2
7
3
8
5
1
7
4
9
7
0
4
5
6
7
8
9
3
3
4

then running "sort sort.file | uniq" outputs:

0
1
2
3
4
5
6
7
8
9

You could then use this file as a key to find and aggregate data from the
input file.
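
For example (just a sketch, assuming logfile.txt holds one domain per line,
as produced by your pipeline below), you can get the counts per domain in
one go:

# count hits per domain, most-visited first
sort logfile.txt | uniq -c | sort -rn

or use the de-duplicated list as a key and look each entry up in the raw
extract:

# exact, fixed-string, whole-line match for each unique domain
sort logfile.txt | uniq | while read domain; do
    echo "$domain: $(grep -cFx "$domain" logfile.txt)"
done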

Martin Visser
Technology Consultant - Compaq Global Services

Compaq Computer Australia
410 Concord Road
Rhodes, Sydney NSW 2138
Australia

Phone: +61-2-9022-5630
Mobile: +61-411-254-513
Fax: +61-2-9022-7001
Email: [EMAIL PROTECTED]


-----Original Message-----
From: MacFarlane, Jarrod [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 6 September 2000 9:43 AM
To: '[EMAIL PROTECTED]'
Subject: [SLUG] Removing duplicate entries from a file


Hey sluggers,

I need to look at a particular machine's web hits. I am currently using:

cat /usr/local/squid/logs/access.log | grep 1.2.3.4 | cut -f4 -d"/" > logfile.txt

This outputs something like:
www.reallynaughtysite.com
www.smackmeimbad.com
and so on....

The problem is that it has many double-ups... is there a long, confusing
string of commands that will go through my logfile and remove all but one
instance of every domain listed?

Thanks,
Jarrod.


--
SLUG - Sydney Linux User Group Mailing List - http://slug.org.au/
More Info: http://slug.org.au/lists/listinfo/slug

