On Fri, 15 Sep 2000 [EMAIL PROTECTED] wrote:

> Actually a utility that can point to a device that can duplicate/image 
> copy disks.  Basically what one would really want to do is cobble 
> together a device that is capable of slapping an empty drive in slot 1 
> and the suspect drive in slot 0, run the dup, and then do the rest of 
> the forensics process.

That could be scripted pretty quickly with dd.
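A minimal sketch of that script, assuming an IDE rig where the suspect drive sits in slot 0 and the blank target in slot 1 (the device names and sector size are assumptions; since they're just paths, plain files work for a dry run):

```shell
# image_disk SRC DST -- a minimal sketch of "script it with dd".
# SRC would be the suspect drive in slot 0 (e.g. /dev/hda) and DST the
# blank target in slot 1 (e.g. /dev/hdb); both names are assumptions.
image_disk() {
    src=$1
    dst=$2
    # conv=noerror,sync keeps reading past bad sectors and pads the
    # unreadable blocks with NULs, so sector offsets on the copy still
    # line up with the original.
    dd if="$src" of="$dst" bs=512 conv=noerror,sync 2>/dev/null
}
```

After the copy you'd checksum both sides before touching the duplicate, so the dup's integrity is documented for the rest of the forensics process.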

> >"Deleted yet remaining files" is a concept from DOS filesystems, and
> >doesn't translate well outside of that- remaining disk blocks from old
> >files is a more accurate model for modern filesystems.
> 
> 
> One could be involved in any type of forensic from DOS to Linux to whatever.

Right, the point is that other than for FAT-based filesystems- where
deletion just writes 0xE5 over the first byte of the directory entry-
remaining disk blocks are the best way to approach the problem.  Since
under FAT that covers everything but the first byte of the filename
anyway, it's the more inclusive approach.

> > > Another utility that would  recover all (or as much as possible) of
> > > discovered deleted files.
> >
> >This is filesystem specific.
> 
> Not really.

Sure it is- recovering a deleted file's inodes under IBM's JFS is a
completely different process from following the chain under NTFS.  Other
methods risk data-based attacks or versioning problems.

> > > A data viewer that would reveal (to the extent possible) the contents of
> > > hidden files as well as temporary or swap files used by both the
> > > application programs and the operating system.
> >
> >I typically use grep's regexp-based pattern matching combined with dd's
> >bit-wise view of the filesystem in question for string-based stuff.  Going
> >beyond that depends on the filesystem type and data necessary.
> 
> What one would want to have is something that is capable of parsing for 
> keywords that someone would type in or reassemble the bits and bytes..

grep parses for keywords pretty well ;)  The 'grep/dd' approach works for
things like recovering deleted syslog output on Solaris- a new
implementation of the same process would be a good way to approach it if
you're dead set against using existing toolsets and just automating them.
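A sketch of that grep/dd pairing (GNU grep's -a and -o flags are assumed; the "device" argument is any readable path, so a dd image file works as well as a raw device like Solaris's /dev/dsk/c0t0d0s3):

```shell
# search_raw DEV PATTERN -- hunt for a keyword in the raw byte stream,
# the way you'd look for deleted syslog output on a Solaris slice.
search_raw() {
    dev=$1
    pattern=$2
    # dd provides the bit-wise view of the filesystem; grep -a forces
    # the binary stream to be treated as text, and -o prints only the
    # matching strings rather than whole "lines" of binary garbage.
    dd if="$dev" bs=4096 2>/dev/null | grep -a -o "$pattern"
}
```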

> > > An analysis utility that would analyze all possibly relevant data found in
> > > special (and typically inaccessible) areas of a disk. This includes but is
> > > not limited to what is called 'unallocated' space on a disk (currently
> > > unused, but possibly the repository of previous data that is relevant
> > > evidence), as well as 'slack' space in a file (the remnant area at the end
> > > of a file, in the last assigned disk cluster, that is unused by current
> > > file data, but once again may be a possible site for previously created 
> > > and
> > > relevant evidence).
> >
> >Once again, that's FS-dependent (ntfs is different than fat32 is different
> >than ext2...)  Using dd gets around a lot of that.
> 
> dd has a 2 GB limit, unless one modifies dd to bypass the 2 GB limit 
> checker.  dd also has some block overrun issues if the target drive is 
> larger than the source..

That modification is trivial- and not necessary for all dd's.  If you're
simply examining data (e.g. dd to grep) then target geometry is a
non-issue.
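For the dd builds that do carry the 2 GB file-size limit, another workaround is to copy in sub-2 GB pieces with skip/seek rather than patch the binary. A sketch, with tiny block counts standing in for the real just-under-2 GB chunk size:

```shell
# chunked_copy SRC DST -- copy a large source in fixed-size chunks so
# no single dd invocation handles more than one chunk's worth of data.
chunked_copy() {
    src=$1
    dst=$2
    bs=512       # block size; a real run might use bs=1024k
    chunk=4      # blocks per chunk; stand-in for an under-2GB count
    total=$(( ( $(wc -c < "$src") + bs - 1 ) / bs ))
    n=0
    while [ $(( n * chunk )) -lt "$total" ]; do
        # skip/seek advance in lockstep; conv=notrunc stops dd from
        # truncating the output file on each pass.
        dd if="$src" of="$dst" bs="$bs" count="$chunk" \
           skip=$(( n * chunk )) seek=$(( n * chunk )) \
           conv=notrunc 2>/dev/null
        n=$(( n + 1 ))
    done
}
```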

> >I think Linux would be more useful simply because getting support for
> >read-only mounting of non-native filesystems is easy - also direct disk
> >access without a device driver is trivial (as is writing a device driver
> >if necessary.)
> 
> This is a trivial issue, but one would like a nice UI to do 
> everything.  Actually doing forensics on Win boxes is something of a 
> pain, since most of the Windows-based apps like to install something on 
> the target.

UI's are easy in either environment (I've been playing with glade
recently- oh joy!)  Opening Solaris/BSD/Netware/SGI-type filesystems
outside of Linux doesn't seem too trivial to me- calling into the VFS
layer might even make the entire process more efficient.
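The read-only mounting on Linux looks something like this- a command recipe rather than a script (filesystem types, image names, and mount points are all examples; the loop option lets you work from a dd image instead of the original disk):

```shell
# Mount a dd image of a suspect FAT32 partition read-only.  ro stops
# writes, noexec/nodev are cheap insurance, and loop mounts the image
# file instead of a physical device.  (Run as root; paths are examples.)
mount -t vfat -o ro,noexec,nodev,loop suspect.dd /mnt/evidence

# The same idea covers other non-native filesystems Linux can read:
mount -t ntfs -o ro,loop suspect-ntfs.dd /mnt/evidence2
mount -t ufs -o ro,ufstype=sun,loop solaris-slice.dd /mnt/evidence3
```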

> I know one or two people who can actually re-construct from a copy of a 
> completely toasted drive in less than 48 hours, and lose as little as a 
> couple of zero-length files.. :)

I'm pretty sure I know more than two people who could do it faster.
Chain-of-evidence things are more difficult to do in an auditable way.

> The issue that Paul raises is that one would have to have all the tools 
> or little scripts already assembled prior to a forensic engagement, but 
> the issue is that one would have to have a suite of tools and a 
> procedure so that any lackey could do the procedure, and so that the 
> process is the same over and over again.  Offering it as a service and 
> being able to do a forensic exercise once or twice is one thing, but to 
> cookie-cutter it is something else.  :)

What does Dan and Wietse's stuff do?  I've been meaning to look at that
for a while.

Paul
-----------------------------------------------------------------------------
Paul D. Robertson      "My statements in this message are personal opinions
[EMAIL PROTECTED]      which may have no basis whatsoever in fact."
                                                                     PSB#9280
