Re: [Full-disclosure] Gutmann's research paper today

2006-02-08 Thread Bipin Gautam
but guys, the old Gutmann research paper doesn't properly clarify a
PRACTICAL way to sanitize RAM. Yes, of course, without physically
destroying it.

(anyone with a good idea on this topic?)

say I have to sanitize my 512 MB of RAM. Which would be better to
implement in an immediate emergency?

a). Within the next 5 minutes, completely overwrite my RAM (say) 5 x 60
= 300 times from bootable media after a reboot.
 or
b). From bootable media after a reboot, do some complete random wipes
first, hold the memory like that for the next 10 minutes, and repeat
the same process several times.

Feel free to point me to any kernel modules (if they exist) that can be
used to transparently encrypt RAM I/O once the system boots up and the
basic drivers are loaded.

I need some recommendations on 'proper' RAM sanitization!
-bipin
___
Full-Disclosure - We believe in it.
Charter: http://lists.grok.org.uk/full-disclosure-charter.html
Hosted and sponsored by Secunia - http://secunia.com/

Re: [Full-disclosure] Gutmann's research paper today

2006-02-08 Thread Valdis . Kletnieks
On Wed, 08 Feb 2006 10:11:45 +0100, [EMAIL PROTECTED] said:
> But isn't recovering from lower "layers" much easier, if you can predict
> overwrite patterns?

It's a matter of signal strength.  The un-overwritten residual field is on the
order of 1/1000th of the main signal.  After 2 passes, the signal you want to
find is down close to a millionth of the over-write signal.  At that point, it
becomes hard to tell whether that tiny bump in the signal is a 1 that survived
2 overwrites of zero, or if it's a random fluctuation in the signal strength
of the written signal.

> After I read "a few passes", another question arose for me:

> In his paper he wrote, that securely deleting data from disk is very
> difficult, because of the fact that write head doesn't set polarity of all
> "magnetic domains":

Right.  The residual signal is basically the signal from domains that didn't
quite get their polarity reset.

> So doesn't increasing the number of rounds of random writing raise the
> probability that the write head sets the polarity of _all_ magnetic
> domains sooner or later, bringing secure deletion closer?

Well.. OK. Sure. You can spend a week over-writing that drive with 2,000 passes,
and have a *really* high confidence that you got every single domain.

On the other hand, a lot of us live in the real world, and stop after 3 passes
because at that point, it's highly unlikely that any *useful* data will be
recoverable.  If you're *so* paranoid that it really *does* matter that the
attacker could spend a month working on the drive and recover 'Ano' from 3.6
gigabytes into the drive, and 'the' from 9.8 gig in, it's a lot more time and
cost effective to just nail the drive with thermite or a degausser rated for
the job.

And exactly that level of paranoia is why overwrites are only good up to Secret,
and Top Secret and above require physical destruction (a recent NIST document,
for example, recommends that for grinding, the remaining pieces be no larger
than 0.25 mm in size).



Re: [Full-disclosure] Gutmann's research paper today

2006-02-08 Thread Thomas

> One place where "random scrubbing" falls down is the requirement to
> *verify* that the blocks were written.

Use the PRNG seed you used for overwriting for verification, too.
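One way to sketch that idea (all names here are made up, and a scratch file stands in for the real device): with a fixed passphrase, OpenSSL's AES-CTR keystream over /dev/zero behaves as a reproducible pseudo-random stream, so the verify pass can regenerate it from the seed rather than storing it anywhere.

```shell
# Sketch of the seeded-PRNG wipe-and-verify idea.  SEED, DEV and SIZE are
# demo values; a scratch file stands in for the disk device.
SEED="example-wipe-seed"
DEV=./scratch-disk.img      # would be the real device on an actual run
SIZE=1048576                # 1 MiB for the demo

stream() {  # reproducible pseudo-random stream: AES-CTR keystream of zeros
  openssl enc -aes-256-ctr -nosalt -md sha256 -pass pass:"$SEED" \
      -in /dev/zero 2>/dev/null | head -c "$SIZE"
}

stream > "$DEV"                       # overwrite pass
stream | cmp -s - "$DEV" && echo OK   # verify by regenerating the stream
```

The point of the seed is exactly what the post says: verification needs no stored copy of the random data, only the ability to regenerate the identical stream.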

-- 
Tom <[EMAIL PROTECTED]>
fingerprint = F055 43E5 1F3C 4F4F 9182  CD59 DBC6 111A 8516 8DBF


Re: [Full-disclosure] Gutmann's research paper today

2006-02-08 Thread gimeshell
On Tue, 07 Feb 2006 10:07:38 -0500
[EMAIL PROTECTED] wrote:

> DoD 5220.22M only requires 3 passes and verify of each pass - all zeros, all
> ones, and all "the same character" (for instance, 'AAA..' or similar).
> That's good for sanitizing disks up to Secret.  For anything higher, physical
> destruction is mandated. A "few passes of random scrubbing" is probably
> equivalent to 5220.22M for any realistic usage.

But isn't recovering from lower "layers" much easier, if you can predict
overwrite patterns?


After I read "a few passes", another question arose for me:

In his paper he wrote that securely deleting data from disk is very difficult
because the write head doesn't set the polarity of all "magnetic domains":

"Faced with techniques such as MFM, truly deleting data from magnetic media is 
very difficult. The problem lies in the fact that when data is written to the 
medium, the write head sets the polarity of most, but not all, of the magnetic 
domains. This is partially due to the inability of the writing device to write 
in exactly the same location each time, and partially due to the variations in 
media sensitivity and field strength over time and among devices."

This statement probably holds for modern (E)PRML drives, too.

So doesn't increasing the number of rounds of random writing raise the
probability that the write head sets the polarity of _all_ magnetic domains
sooner or later, bringing secure deletion closer?

regards


Re: [Full-disclosure] Gutmann's research paper today

2006-02-07 Thread Frank Knobbe
On Tue, 2006-02-07 at 08:24 -0800, Mike Owen wrote:
> Funny, that's how my backups always end up working as well. 'cat
> /dev/urandom > /dev/tape'

:)

No, actually the backup is more like  tar ...|openssl ...|dd ...|
tee /dev/nsa0 |md5 

But yeah, for the disk, you're right:
  dd if=/dev/urandom | tee /dev/ad2 | md5
Then:
  dd if=/dev/ad2 | md5  
and compare. 

Cheers,
Frank

-- 
It is said that the Internet is a public utility. As such, it is best
compared to a sewer. A big, fat pipe with a bunch of crap sloshing
against your ports.




Re: [Full-disclosure] Gutmann's research paper today

2006-02-07 Thread Mike Owen
On 2/7/06, Frank Knobbe <[EMAIL PROTECTED]> wrote:
> I'm performing backups where the stream is tee'ed to the drive and into
> md5 for hash creation. Works great with tapes, should work for drives
> too.
>
> Cheers,
> Frank
>

Funny, that's how my backups always end up working as well. 'cat
/dev/urandom > /dev/tape'

Mike


Re: [Full-disclosure] Gutmann's research paper today

2006-02-07 Thread Frank Knobbe
On Tue, 2006-02-07 at 10:07 -0500, [EMAIL PROTECTED] wrote:
> One place where "random scrubbing" falls down is the requirement to *verify*
> that the blocks were written.  If you wrote a disk full of zeros, it's a
> trivial matter to read it back and verify that all the bytes are zeros.  If
> you wrote a whole disk of pseudo-random, then you have to regenerate the
> entire pseudo-random data stream in order to compare it.

Shouldn't you be able to do that cluster by cluster? Grab 1024 bytes of
random data, write them, then read them back and verify?

Alternatively, in a more stream-based fashion, you could dd /dev/random, tee
that to the drive, and pipe the other copy into md5. Take note of the hash.
At the end, dd the drive into md5 and compare the resulting hashes.
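A rough demo of that tee-based write/verify (assuming a GNU userland, so md5sum rather than BSD md5; a scratch file stands in for the drive, and the sizes are made-up demo values):

```shell
# Write random data, hashing the stream on its way to "disk", then read the
# "disk" back and compare hashes.  A scratch file stands in for the device.
DEV=./scratch-tee.img

# one pass of random data, tee'd to the device while md5sum hashes the stream
WRITE_HASH=$(dd if=/dev/urandom bs=64k count=16 2>/dev/null \
             | tee "$DEV" | md5sum | cut -d' ' -f1)

# read it back and hash again
READ_HASH=$(dd if="$DEV" bs=64k 2>/dev/null | md5sum | cut -d' ' -f1)

[ "$WRITE_HASH" = "$READ_HASH" ] && echo "hashes match"
```

If the two hashes differ, the device silently dropped or corrupted writes, which is precisely the failure mode the verify step is meant to catch.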

I'm performing backups where the stream is tee'ed to the drive and into
md5 for hash creation. Works great with tapes, should work for drives
too.

Cheers,
Frank

-- 
It is said that the Internet is a public utility. As such, it is best
compared to a sewer. A big, fat pipe with a bunch of crap sloshing
against your ports.




Re: [Full-disclosure] Gutmann's research paper today

2006-02-07 Thread Valdis . Kletnieks
On Tue, 07 Feb 2006 15:44:37 +0100, [EMAIL PROTECTED] said:

> Am I misunderstanding something, or can you really say: if you're writing
> to a modern disk, forget all special scrubbing techniques, don't use
> Gutmann, don't use DoD 5220.22M or other pattern-writing techniques; a
> few passes of random scrubbing will do the job?

DoD 5220.22M only requires 3 passes and verify of each pass - all zeros, all
ones, and all "the same character" (for instance, 'AAA..' or similar).
That's good for sanitizing disks up to Secret.  For anything higher, physical
destruction is mandated. A "few passes of random scrubbing" is probably
equivalent to 5220.22M for any realistic usage.
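A toy illustration of that kind of fixed-pattern pass-plus-verify loop (not the actual 5220.22M procedure; DEV and SIZE are made-up demo values, and a scratch file stands in for the disk):

```shell
# Toy 3-pass fixed-pattern wipe with per-pass verify, in the spirit of the
# zeros / ones / "same character" scheme.  A scratch file stands in for
# the real disk.
DEV=./scratch-dod.img
SIZE=65536

pattern() {  # emit $SIZE copies of the byte given as an octal escape
  head -c "$SIZE" /dev/zero | tr '\000' "\\$1"
}

for OCT in 000 377 101; do            # 0x00, 0xFF, 'A'
  pattern "$OCT" > "$DEV"             # write the pass
  pattern "$OCT" | cmp -s - "$DEV" \
    || { echo "verify failed on pattern $OCT"; exit 1; }
done
echo "3 passes written and verified"
```

Verifying a constant pattern is cheap because the expected stream can be regenerated on the fly, which is the advantage fixed patterns have over unrecorded random data.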

One place where "random scrubbing" falls down is the requirement to *verify*
that the blocks were written.  If you wrote a disk full of zeros, it's a
trivial matter to read it back and verify that all the bytes are zeros.  If you
wrote a whole disk of pseudo-random, then you have to regenerate the entire
pseudo-random data stream in order to compare it.

And yes, the verify step is important - I've had more than one disk drive that
was still perfectly readable, but had suffered damage to the write hardware.
Writing 3 passes of anything and failing to verify on such a disk would result
in a disclosure of the entire disk's contents.  Yes Virginia, there *are* disk
drive failures that will report a successful write but not actually work... ;)



[Full-disclosure] Gutmann's research paper today

2006-02-07 Thread gimeshell
Hi,

I'm trying to read Gutmann's paper "Secure Deletion of Data from
Magnetic and Solid-State Memory" from today's point of view, wondering
which of the information in that paper still applies to modern efforts
to securely erase sensitive data.

This is what he said, which may be of special interest:

"In the time since this paper was published, some people have treated
the 35-pass overwrite technique described in it more as a kind of
voodoo incantation to banish evil spirits than the result of a
technical analysis of drive encoding techniques. As a result, they
advocate applying the voodoo to PRML and EPRML drives even though it
will have no more effect than a simple scrubbing with random data. In
fact performing the full 35-pass overwrite is pointless for any drive
since it targets a blend of scenarios involving all types of
(normally-used) encoding technology, which covers everything back to
30+-year-old MFM methods (if you don't understand that statement, re-read
the paper). If you're using a drive which uses encoding technology X,
you only need to perform the passes specific to X, and you never need
to perform all 35 passes. For any modern PRML/EPRML drive, a few passes
of random scrubbing is the best you can do. As the paper says, "A good
scrubbing with random data will do about as well as can be expected".
This was true in 1996, and is still true now."

So there aren't any patterns capable of achieving better results when
written to modern PRML/EPRML drives for sensitive data destruction?
He explicitly stated to use "a few passes of random scrubbing".

Am I misunderstanding something, or can you really say: if you're writing
to a modern disk, forget all special scrubbing techniques, don't use
Gutmann, don't use DoD 5220.22M or other pattern-writing techniques; a
few passes of random scrubbing will do the job?

regards