Hi Kirk, I'll take a deeper look but at first glance... well, I didn't think I
was looking for any forensic tools. Remember, it's all about revealing unique
data on a source drive, such as something labeled "temp" or "backup" or an
obvious drive that was in use before the contents were cloned
Robert,
On Tue, Mar 14, 2017 at 2:56 PM, Robert ListMail via 4D_Tech <
4d_tech@lists.4d.com> wrote:
> Actually, I’ve looked for years and have never found anything that will do
> this. It’s not impossible to do what needs to be done manually, it’s just a
> royal pain and prone to user errors that
I’m not looking to identify all duplicates. I want to know what unique data
might be on a disk labeled “backup”… To determine this, the software (rsync?)
must query an index of ALL specified drives, where the original pathname may be
very different for each and every file. Can rsync perform the
You can try this link:
http://download.cnet.com/s/duplicate-file-finder/mac/
> I guess Git does a pretty decent job at tracking changes and
> movements in a designated directory,
> but if you feel 4D gives you the strength and flexibility to do
> exactly what you want,
> I have nothing against
I came across a tool:
zsDuplicateHunter
I haven't used it for a long time - so no clue as to how (or if) it has been
updated
> Hi Kirk!
>
>> On Mar 14, 2017, at 11:23 AM, Kirk Brooks via 4D_Tech
>> <4d_tech@lists.4d.com> wrote:
>> Robert,
>> It sounds like you are doing some really
I guess Git does a pretty decent job at tracking changes and movements in a
designated directory, but if you feel 4D gives you the strength and
flexibility to do exactly what you want, I have nothing against it.
> On 2017/03/15 at 8:46, Robert ListMail via 4D_Tech <4d_tech@lists.4D.com> wrote:
> Chip,
Chip, thanks for your input, but no traditional rsync, clone, or backup
software is up to the task… since the pathnames are guaranteed to be
different…
R
> On Mar 14, 2017, at 5:33 PM, Keisuke Miyako via 4D_Tech
> <4d_tech@lists.4d.com> wrote:
>
> for simply synchronising two
for simply synchronising two directories, possibly on a separate volume,
rsync has been around for quite some time.
https://en.wikipedia.org/wiki/Rsync
but I may be getting the "this" in "anything that will do this" wrong.
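if it helps, rsync can also be driven from 4D; a rough sketch (the paths and
the rsync location are assumptions, and -n makes it a dry run that only
reports differences):

  // dry-run rsync between two volumes; nothing is copied with -n
C_TEXT($t_cmd;$t_in;$t_out;$t_err)
$t_cmd:="/usr/bin/rsync -avn /Volumes/Source/ /Volumes/Backup/"
LAUNCH EXTERNAL PROCESS($t_cmd;$t_in;$t_out;$t_err)
  // $t_out now lists the files rsync considers missing or different
  // (note that rsync pairs files by pathname, not by content)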
> On 2017/03/15 at 6:56, Robert ListMail via 4D_Tech <4d_tech@lists.4D.com>
Thanks Tim, I’ll have a look.
R
> On Mar 14, 2017, at 2:56 PM, Timothy Penner via 4D_Tech
> <4d_tech@lists.4d.com> wrote:
>
> It's been a few years since I looked at it but I think the "HASH Examples in
> 4D" tech note includes a sample database that had as a proof of concept the
> ability
Hi Kirk!
> On Mar 14, 2017, at 11:23 AM, Kirk Brooks via 4D_Tech <4d_tech@lists.4d.com>
> wrote:
> Robert,
> It sounds like you are doing some really interesting stuff.
Why yes, is there any other way? :)
> kinds of files where surreptitious data are easily hidden. Then we get to
> the
It's been a few years since I looked at it, but I think the "HASH Examples in
4D" tech note includes a sample database that had, as a proof of concept, the
ability to "find duplicate files on a hard drive":
http://kb.4d.com/assetid=76130
Tech Note: Hash Examples in 4D
PRODUCT: 4D | VERSION: 12
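As I remember it, the gist is grouping by hash; something like this minimal
sketch, assuming a [Files] table already filled with Path and Hash fields:

ARRAY TEXT($at_hash;0)
C_LONGINT($i)
ALL RECORDS([Files])
DISTINCT VALUES([Files]Hash;$at_hash)  // one entry per distinct content
For ($i;1;Size of array($at_hash))
  QUERY([Files];[Files]Hash=$at_hash{$i})
  If (Records in selection([Files])>1)
    // every record in this selection holds identical content - duplicates
  End if
End for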
Robert,
It sounds like you are doing some really interesting stuff. It also sounds
like you might want to be looking for forensic tools that have already been
built for this sort of work.
Alex is right about hashing the file blob to develop a unique identifier
for exact matches regardless of name, but that
You're welcome.
I need this because we store all files attached to e-mails in our DMS, and
there we find truckloads of duplicates (logos in mail signatures, etc.).
To avoid blowing up storage, we calculate a hash of every file and only
store unique ones, then link them to the appropriate source
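Roughly, the idea in 4D (the [Blobs] and [Links] tables and their fields here
are just for illustration):

  // store each distinct file content only once, keyed by its hash
C_TEXT($t_DocPath;$t_hash)
C_BLOB($x_Content)
DOCUMENT TO BLOB($t_DocPath;$x_Content)
$t_hash:=Generate digest($x_Content;SHA1 digest)
QUERY([Blobs];[Blobs]Hash=$t_hash)
If (Records in selection([Blobs])=0)  // first time we see this content
  CREATE RECORD([Blobs])
  [Blobs]Hash:=$t_hash
  [Blobs]Content:=$x_Content
  SAVE RECORD([Blobs])
End if
CREATE RECORD([Links])  // link the source document to the stored content
[Links]Hash:=$t_hash
[Links]Source:=$t_DocPath
SAVE RECORD([Links])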
Alex, thanks for the input. I thought it might be a good task for 4D. So how
or why do you need such a tool?
Btw: I wouldn't really need the hash comparison if I had other file attributes.
I'll look at this again tomorrow.
Thanks,
Robert
Sent from my iPhone
> On Mar 14, 2017, at 3:25 AM,
Hi,
I use a similar algorithm for optimizing document storage.
Pretty simple actually:
just trawl through all directories recursively and store each file in a record.
You just need the path and the file hash, which you can create with:
C_TEXT($t_DocPath;$t_FileHash)
C_BLOB($x_Content)
DOCUMENT TO BLOB($t_DocPath;$x_Content)
$t_FileHash:=Generate digest($x_Content;SHA1 digest)
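A rough sketch of the recursive part (the method name ScanFolder and the
[Files] table with its Path and Hash fields are just for illustration; the
paths here use the Mac colon syntax the classic document commands expect):

  // ScanFolder - hypothetical recursive method; $1 = folder path ending in ":"
C_TEXT($1;$t_folder)
C_BLOB($x_Content)
C_LONGINT($i)
ARRAY TEXT($at_docs;0)
ARRAY TEXT($at_folders;0)
$t_folder:=$1

DOCUMENT LIST($t_folder;$at_docs)  // files in this folder
For ($i;1;Size of array($at_docs))
  DOCUMENT TO BLOB($t_folder+$at_docs{$i};$x_Content)
  CREATE RECORD([Files])
  [Files]Path:=$t_folder+$at_docs{$i}
  [Files]Hash:=Generate digest($x_Content;SHA1 digest)
  SAVE RECORD([Files])
End for

FOLDER LIST($t_folder;$at_folders)  // subfolders: recurse into each one
For ($i;1;Size of array($at_folders))
  ScanFolder($t_folder+$at_folders{$i}+":")
End for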
I need a utility that can scan a backup drive (or index) and identify what’s
unique to the backup volume without expecting identical pathnames on the other
drives... So the routine would have to query all specified drives (effectively
a Finder search for each file), looking for each file and
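In 4D terms, once every drive had been scanned into one table, that question
would become a query; a minimal sketch, assuming a [Files] table with Volume,
Path, and Hash fields:

  // find content that exists only on the backup volume
ARRAY TEXT($at_hash;0)
ARRAY TEXT($at_path;0)
C_LONGINT($i)
QUERY([Files];[Files]Volume="Backup")
SELECTION TO ARRAY([Files]Hash;$at_hash;[Files]Path;$at_path)
For ($i;1;Size of array($at_hash))
  QUERY([Files];[Files]Hash=$at_hash{$i};*)  // same content...
  QUERY([Files]; & ;[Files]Volume#"Backup")  // ...on any other drive?
  If (Records in selection([Files])=0)
    // $at_path{$i} exists only on the backup volume
  End if
End for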