Thanks for all the suggestions, people.

We've since done some tests using an MS-DOS file system (which we are more
knowledgeable about and comfortable with) and found that using a DOS
partition that was *logically* read-only (i.e. we didn't write to it) and a
separate data partition which we did write to ensured that the OS didn't get
corrupted over some hundreds of random power on/off cycles (which equates to
about 5 years in our application).

Furthermore, writing random data to files in the data partition which didn't
move and whose size didn't change left the file system intact; only the
files themselves were occasionally corrupted (as one would expect if power
is lost partway through a write).  So we're inclined to absolve the hardware
at this stage.  A similar strategy should be fairly easy under Linux.
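For anyone wanting to replicate the split under Linux, it might look
something like the following in /etc/fstab (device names, mount points,
and filesystem types here are assumptions for illustration only):

```
# Read-only OS partition; never written, so it survives power loss.
/dev/hda1   /       ext2    ro          1 1
# Separate writable data partition; sync keeps writes from lingering
# in the buffer cache.
/dev/hda2   /data   ext2    rw,sync     1 2
```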

Regards, 
Alf Katz 
                                                    
Principal Software Engineer
Vision Instruments Limited
e-mail: [EMAIL PROTECTED]
ph:  -61 3 9265 8900
fax: -61 3 9265 8847 


-----Original Message-----
From: Steven Work [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 13 January 1999 4:39
To: Alf Katz
Cc: [EMAIL PROTECTED]
Subject: Re: Disk on chip corruption


Alf Katz <[EMAIL PROTECTED]> writes:

> Does anybody out there know of any tests performed on Disk On Chip devices
> to determine whether or not there are any problems with memory loss when
> cycling power?  We've set up a test where we allow an Advantech card with a
> DOC to power up for a random time (0 to 4 minutes), and are getting
> corrupted files.

Are you using ext2fs?  Use chattr to set the +S attribute on the files 
you change, so they flush through to disk as they are written.  You'll 
be amazed how painfully this impacts performance.  You'll also get
killed by inode access time updates, so use the chattr +A flag also,
and know that support for it isn't in the 2.0 series kernels (not sure 
when it appears, actually).
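For the record, the commands look like this (the file name is just an
example, and as noted above +A needs a kernel newer than the 2.0 series):

```
chattr +S /data/logfile    # synchronous writes for this file
chattr +A /data/logfile    # suppress inode access-time updates
lsattr /data/logfile       # 'S' and 'A' should appear in the listing
```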

The only way to use these things safely under Linux, to my knowledge,
is mounted read-only.

I've ranted repeatedly (though not recently) about the need for (1)
raw kernel access to flash chips (not interface-laced "flash disks"),
and (2) an appropriate kernel filesystem built on top of these raw
chips, perhaps a log filesystem (which stores writes as a list of
changes to files, instead of trying to overwrite the files
themselves).  This is both a hardware problem -- I don't know of
commercial single-board computers that offer more than about 1 MB of
raw flash -- and a software problem, solved by writing software.
I'd be happy to outline my previous rants on request.
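The log-filesystem idea above can be sketched in a few lines of Python.
This is a toy illustration, not a real filesystem: every write is appended
to a change log rather than overwriting anything in place, so a power loss
can at worst truncate the tail of the log, and current file contents are
recovered by replaying it:

```python
# Toy append-only "log filesystem": each write is recorded as a
# (filename, data) entry; nothing is ever overwritten in place.

log = []  # in a real system this would be flash, written sequentially


def write_file(name, data):
    """Record a change instead of overwriting the file."""
    log.append((name, data))


def read_file(name):
    """Replay the log; the last entry for a name wins."""
    result = None
    for entry_name, entry_data in log:
        if entry_name == name:
            result = entry_data
    return result


write_file("config", b"baud=9600")
write_file("config", b"baud=19200")  # supersedes, does not overwrite
print(read_file("config"))           # latest version wins
```

A real implementation would of course also need checkpointing and log
cleaning, but the key property is visible even here: partial loss of the
log tail loses only the newest changes, never the filesystem structure.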

The DOC things claim to do wear leveling and such by abstracting the
raw chips -- when you write to a "location", the interface (probably
implemented in the binary-only kernel driver, but maybe partly in
hardware) maps the logical "location" to a physical chip address based
on its own idea of how it should be done.  You and I, mere mortals,
aren't privy to the details of this mapping, so we can't know whether
it works well with non-MSDOS filesystems.  I'd bet there are
half-a-dozen assumptions built into the interface that aren't valid
for ext2fs -- I'm safe in offering that bet because the only people
who might be able to disprove it are committed to keeping the
necessary details to themselves.
-- 
Steven Work
Renaissance Labs
[EMAIL PROTECTED]
360 647-1833
