On 25-May-00 at 03:28:09 Hal Burgiss wrote:
> On Wed, May 24, 2000 at 11:07:33PM -0400, erik wrote:
>> i was wondering if anyone knew of a command or a script that would
>> find files with the same name on my computer. I know I could issue a
>> 'find * | grep <whatever>', but I need to know what <whatever> is.
>> What I want is something that only returns a value if there are two
>> or more copies of a file.
> 
> Would something like this work: sort the output of find to a file,
> using 'basename' to extract just the filename itself. Then use 'uniq'
> to create a second file of unique names. Then do a diff on those two?
> That would get a list of the duplicate filenames. The first part:
> 
>  $ find / -name "*" | xargs -n 1 basename | sort > sorted_list
> 
> Maybe there is a slicker way.
> 
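(For what it's worth, the rest of that approach might look something like
this - 'unique_list' is just an example filename:

  $ uniq sorted_list > unique_list
  $ diff sorted_list unique_list

The lines diff reports as only in sorted_list are the extra copies of any
duplicated names.)
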
The xargs/basename bit seems to be very slow - it'll work, but slowly. I
would suggest something like:

  locate -l 0 -r . | awk -F/ '{ print $NF }' | sort > sorted_list

You need to have the locate database, though. My work PC runs 24/7, so I
have configured the locate database update to run regularly (I can't
remember if RH does that by default). Either way, I have found locate to
be generally a lot quicker than running 'find /'. See 'man locate' for
details. If you don't have locate set up, just use the awk bit rather
than the xargs/basename bit.
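
In that case the find-based version would be something along the lines of
(untested here, but the idea is the same):

  find / | awk -F/ '{ print $NF }' | sort > sorted_list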

Obviously you can pipe this further through 'uniq -d' to get just the
duplicated names, or whatever else you need.
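
For example, to see each duplicated name once:

  locate -l 0 -r . | awk -F/ '{ print $NF }' | sort | uniq -d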

Regards,

John.

--------------------------------------------------------------------------
John Horne, University of Plymouth, UK             Tel: +44 (0)1752 233914
E-mail: [EMAIL PROTECTED]
PGP key available from public key servers


