Lothar Behrens wrote:
I expected a smaller number of records: for 4 files, each present 2 times (8 in total), I
have 8 records in ECADFiles but should get only 4 in the above result.
So with an average of 2 copies per file I expected half the records from
ECADFiles, because one is
On 18 Nov., 07:40, [EMAIL PROTECTED] (Craig Ringer) wrote:
-- Once paths is populated, extract duplicates:
SELECT get_filename(path) AS fn, count(path) AS n
INTO TEMPORARY TABLE dup_files
FROM paths
GROUP BY fn
HAVING count(path) > 1;
-- Creates UNIQUE index on PATH as well
ALTER TABLE dup_files ADD
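The temp-table step quoted above can be sketched outside PostgreSQL as well. Below is a minimal Python stand-in using sqlite3, where a substr/instr expression plays the role of the get_filename() helper; the table name, column name, and sample paths are assumptions made up for the demo:

```python
import sqlite3

# In-memory stand-in for the paths table described above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE paths (path TEXT)")
conn.executemany("INSERT INTO paths VALUES (?)", [
    ("a/report.txt",), ("b/report.txt",), ("a/unique.txt",),
])

# Extract file names that occur more than once into a temp table,
# mirroring the HAVING count(...) > 1 step from the quoted SQL.
conn.execute("""
    CREATE TEMPORARY TABLE dup_files AS
    SELECT fn, COUNT(*) AS n
    FROM (SELECT substr(path, instr(path, '/') + 1) AS fn FROM paths)
    GROUP BY fn
    HAVING COUNT(*) > 1
""")
print(conn.execute("SELECT fn, n FROM dup_files").fetchall())
# [('report.txt', 2)]
```

Only report.txt appears twice, so it is the only row landing in dup_files.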
On Mon, Nov 17, 2008 at 11:22:47AM -0800, Lothar Behrens wrote:
I need to find, as fast as possible, files that are doubles or,
in other words, identical.
I also need to identify the files that are not identical.
I'd probably just take a simple Unix command line approach, something
like:
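One common shape for such a command-line approach is to hash every file and group identical hashes. That idea can be sketched in Python; the file names and contents below are invented purely for the demo:

```python
import hashlib
import os
import tempfile
from collections import defaultdict

def find_identical(root):
    """Group files under root by content hash; groups of size > 1 are duplicates."""
    groups = defaultdict(list)
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            with open(full, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            groups[digest].append(full)
    return [paths for paths in groups.values() if len(paths) > 1]

# Tiny demo tree: two identical files, one different.
root = tempfile.mkdtemp()
for name, data in [("a.txt", b"same"), ("b.txt", b"same"), ("c.txt", b"other")]:
    with open(os.path.join(root, name), "wb") as f:
        f.write(data)

dups = find_identical(root)
print(sorted(os.path.basename(p) for p in dups[0]))  # ['a.txt', 'b.txt']
```

Hashing the content rather than comparing names also answers the second half of the question: files whose hashes differ are provably not identical.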
On Tue, Nov 18, 2008 at 12:36:42PM +, Sam Mason wrote:
On Mon, Nov 17, 2008 at 11:22:47AM -0800, Lothar Behrens wrote:
I need to find, as fast as possible, files that are doubles or,
in other words, identical.
I also need to identify the files that are not identical.
I'd probably
Hi,
I need to find, as fast as possible, files that are doubles or,
in other words, identical.
I also need to identify the files that are not identical.
My approach was to use dir /s and an awk script to convert the listing into an SQL
script to be imported into a table.
That done, I could start issuing
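The listing-to-SQL conversion described here can also be sketched directly in Python, skipping the awk step. The table name ECADFiles comes from the earlier message in the thread; the single path column is an assumption for illustration:

```python
import os
import tempfile

def listing_to_sql(root):
    """Walk a directory tree and emit one INSERT statement per file found."""
    stmts = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            # Double single quotes so the path is safe inside an SQL literal.
            path = os.path.join(dirpath, name).replace("'", "''")
            stmts.append(f"INSERT INTO ECADFiles (path) VALUES ('{path}');")
    return "\n".join(stmts)

# Demo: one file in a temporary directory.
root = tempfile.mkdtemp()
open(os.path.join(root, "board.sch"), "w").close()
print(listing_to_sql(root))
```

The resulting script can then be fed to psql and queried like any other table.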
Hi Ho!
--- On Tue, 11/18/08, Lothar Behrens [EMAIL PROTECTED] wrote:
Hi,
I need to find, as fast as possible, files that are
doubles or,
in other words, identical.
I also need to identify the files that are not identical.
My approach was to use dir /s and an awk script to convert
it
Lothar Behrens wrote:
But how do I query for files to display a 'left / right view' for each
file that exists in multiple places?
One approach is to use a query to extract the names of all files with
duplicates, and store the results in a TEMPORARY table with a UNIQUE
index (or PRIMARY KEY) on the
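Once the duplicate names are known, the requested left/right view falls out of a self-join on the file name. A runnable sqlite3 sketch (sqlite3 stands in for PostgreSQL; table, column names, and sample paths are assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE paths (path TEXT, fn TEXT)")
conn.executemany("INSERT INTO paths VALUES (?, ?)", [
    ("left/report.txt", "report.txt"),
    ("right/report.txt", "report.txt"),
    ("left/unique.txt", "unique.txt"),
])

# Self-join on the file name: each result row pairs two distinct
# locations of the same name, giving one left/right row per pair.
# The a.path < b.path condition drops self-pairs and mirror pairs.
rows = conn.execute("""
    SELECT a.path AS left_path, b.path AS right_path
    FROM paths a
    JOIN paths b ON a.fn = b.fn AND a.path < b.path
""").fetchall()
print(rows)  # [('left/report.txt', 'right/report.txt')]
```

Files that occur only once (unique.txt here) produce no pair and so never appear in the view.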