RE: [SLUG] Removing duplicate entries from a file

2000-09-05 Thread Dave Kempe
I think you are best off with calamaris, the log analyser for Squid. It turns out really nice log reports with some good detail. It's a Perl script, so you could modify it if it wasn't up to scratch, but I'm sure it is. Search for it on Google. dave :)

RE: [SLUG] Removing duplicate entries from a file

2000-09-05 Thread Michael Still
Dave Kempe [EMAIL PROTECTED] said: I think you are best off with calamaris, the log analyser for Squid. It turns out really nice log reports with some good detail. It's a Perl script, so you could modify it if it wasn't up to scratch, but I'm sure it is. Search for it on Google. Or analog --

RE: [SLUG] Removing duplicate entries from a file

2000-09-05 Thread Visser, Martin (SNO)
Piping to "| sort | uniq" might be useful. Given the file "sort.file" containing the following values (shown here on one line; one value per line in the file): 1 5 2 7 3 8 5 1 7 4 9 7 0 4 5 6 7 8 9 3 3 4, running "sort sort.file | uniq" outputs each distinct value once: 0 1 2 3 4 5 6 7 8 9. You could then use this output as a key to find and aggregate data from the input file. Martin
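
A minimal sketch of that pipeline, assuming the values sit one per line in "sort.file" (the "uniq -c" variant is one way to get the per-value counts useful for the aggregation step Martin mentions):

    # build the sample file, one value per line
    printf '%s\n' 1 5 2 7 3 8 5 1 7 4 9 7 0 4 5 6 7 8 9 3 3 4 > sort.file

    # sorted, with adjacent duplicates collapsed (equivalently: sort -u sort.file)
    sort sort.file | uniq

    # count how many times each value appears
    sort sort.file | uniq -c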

RE: [SLUG] Removing duplicate entries from a file

2000-09-05 Thread MacFarlane, Jarrod
Thank you all for your quick responses! " | sort | uniq " did the trick for me this time, but I will definitely look into some of the packages recommended by others. Thanks again. Jarrod. -- SLUG - Sydney Linux User Group Mailing List - http://slug.org.au/