Re: [zfs-discuss] Re: Proposal: multiple copies of user data

2006-09-15 Thread Frank Cusack
On September 15, 2006 3:49:14 PM -0700 "can you guess?" 
<[EMAIL PROTECTED]> wrote:

(I looked at my email before checking here, so I'll just cut-and-paste
the email response in here rather than send it.  By the way, is there a
way to view just the responses that have accumulated in this forum since
I last visited - or just those I've never looked at before?)


Subscribe via email instead of reading it as a forum.


Re: [zfs-discuss] Re: Proposal: multiple copies of user data

2006-09-12 Thread Torrey McMahon

Celso wrote:

a couple of points

One could make the argument that the feature could cause enough
confusion to not warrant its inclusion. If I'm a typical user and I
write a file to the filesystem where the admin set three copies but
didn't tell me, it might throw me into a tizzy trying to figure out
why my quota usage is 3x what I expect it to be.

I don't think anybody is saying it is going to be the default setup.
If someone is not comfortable with a feature, surely they can choose
to ignore it. An admin can use actual mirroring, raidz, etc., and
carry on as before.


There are many potentially confusing features of almost any computer system. 
Computers are complex things.

I admin a couple of schools with a total of about 2000 kids. I really doubt 
that any of them would have a problem understanding it.

More importantly, is an institution utilizing quotas really the main
market for this feature? It seems to me that it is clearly aimed at
people in control of their own machines (even though I can see uses
for this in pretty much any environment). I doubt anyone capable of
installing and running Solaris on their laptop would be confused by
this issue.



It's not the smart people I would be worried about. It's the ones where
you would get into endless loops of conversation around "But I only
wrote 1MB, how come it says 2MB?" that worry me. Especially when it
impacts a lot of user level tools and could be a surprise if set by a
BOFH type.
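
(A hypothetical illustration of the kind of exchange I mean - the
paths are made up, and it assumes the admin has quietly set two copies
on the user's filesystem:)

  $ mkfile 1m /export/home/user/file
  $ du -h /export/home/user/file
   2.0M   /export/home/user/file    # ditto copies double the charged blocks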


That said, I was worried about that type of effect when the change
itself seemed to have low value. However, you and Richard have pointed
to at least one example where this would be useful at the file level.




Given a situation where you:

a) have a laptop or home computer which you have important data on;
b) for whatever reason, you can't add another disk to utilize
   mirroring (and you are between backups);

this seems to me to be a very valid solution.


... and though I see that as a valid solution to the issue, does it
really cover enough ground to warrant inclusion of this feature, given
some of the other issues that have been brought up?


In the above case I think people would be more concerned with the
entire system going down, a drive crashing, etc., than the possibility
of a checksum error or data corruption requiring the lookup on a ditto
block if one exists. In that case they would create a copy on an
independent system, like a USB disk; on some sort of archiving media,
like a CD-R; or even place a copy on a remote system, to maintain the
data in case of a failure. Hell, I've been known to do all three to
meet my own paranoia level.


IMHO, it's more ammo to include the feature, but I'm not sure it's
enough. Perhaps Richard's late-breaking data concerning drive failures
will add some more weight?






Re: [zfs-discuss] Re: Proposal: multiple copies of user data

2006-09-12 Thread Dick Davies

On 12/09/06, Celso <[EMAIL PROTECTED]> wrote:


One of the great things about zfs, is that it protects not just against 
mechanical failure, but against silent data corruption. Having this available 
to laptop owners seems to me to be important to making zfs even more attractive.


I'm not arguing against that. I was just saying that *if* this was useful to you
(and you were happy with the dubious resilience/performance benefits) you can
already create mirrors/raidz on a single disk by using partitions as
building blocks.
There's no need to implement the proposal to gain that.
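
(For the record, a minimal sketch of that workaround - the slice names
are made up, and both slices must be on the same disk and of equal
size:)

  # two-way mirror built from two slices of a single disk
  zpool create tank mirror c0t0d0s3 c0t0d0s4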



Am I correct in assuming that having, say, 2 copies of your "documents"
filesystem means that, should silent data corruption occur, your data
can be reconstructed? So you can leave your OS and base applications
with 1 copy, but your important data can be protected.


Yes.
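
(A sketch of how the proposed per-filesystem setting might look - the
syntax is hypothetical at the time of this thread:)

  # protect only the documents filesystem with two ditto copies
  zfs create tank/documents
  zfs set copies=2 tank/documents
  # OS and applications elsewhere keep the default single copy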

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/


Re: [zfs-discuss] Re: Proposal: multiple copies of user data

2006-09-12 Thread Al Hopper
On Tue, 12 Sep 2006, Anton B. Rang wrote:

 reformatted 
> >True - I'm a laptop user myself. But as I said, I'd assume the whole disk
> >would fail (it does in my experience).

Usually a laptop disk suffers a mechanical failure - and the failure
rate is a lot higher than for disks in a fixed-location environment.

> That's usually the case, but single-block failures can occur as well.
> They're rare (check the "uncorrectable bit error rate" specifications)
> but if they happen to hit a critical file, they're painful.
>
> On the other hand, multiple copies seems (to me) like a really expensive
> way to deal with this. ZFS is already using relatively large blocks, so
> it could add an erasure code on top of them and have far less storage
> overhead. If the assumed problem is multi-block failures in one area of
> the disk, I'd wonder how common this failure mode is; in my experience,
> multi-block failures are generally due to the head having touched the
> platter, in which case the whole drive will shortly fail. (In any case,

The following is based on dated knowledge from personal experience and
I can't say if it's (still) accurate information today.

Drive failures in a localized area are generally caused by the heads
being positioned in the same (general) cylinder position for long
periods of time.  The heads ride on an air bearing - but there is
still a lot of friction caused by the movement of air under the heads.
This in turn generates heat.  Localized heat buildup can cause some of
the material coated on the disk to break free.  The drive is designed
for this eventuality - since it is equipped with a very fine filter
which will catch and trap anything that breaks free, and the airflow
is designed to constantly circulate the air through the filter.
However, some of the material might get trapped between the head and
the disk and possibly stick to the disk.  In this case, the
neighbouring disk cylinders in this general area will probably be
damaged and, if enough material accumulates, so might the head(s).

In the old days people wrote their own head "floater" programs - to ensure
that the head was moved randomly across the disk surface from time to
time.
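
(Something along these lines - a hypothetical ksh sketch, with the
device name made up:)

  # crude head "floater": read one random sector every minute so the
  # heads don't sit over the same cylinder band for long
  while true; do
      dd if=/dev/rdsk/c0t0d0s2 of=/dev/null bs=512 count=1 \
          iseek=$(( (RANDOM * 32768 + RANDOM) % 16000000 )) 2>/dev/null
      sleep 60
  done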

I don't know if this is still relevant today - since the amount of
firmware a disk drive executes, continues to increase every day.  But in a
typical usage scenario, where a user does, for example, a find operation
in a home directory - and the directory caches are not sized large
enough, there is a good probability that the heads will end up in the same
general area of the disk, after the find op completes.  Assuming that the
box has enough memory, the disk may not be accessed again for a long time
- and possibly only during another find op (wash, rinse, repeat).

Continuing: a buildup of heat in a localized cylinder area will cause
the disk platter to expand and shift, relative to the heads.  The disk
platter has one surface dedicated to storing servo information - and
from this the disk can "decide" that it is on the wrong cylinder after
a head movement.  In which case the drive will recalibrate itself
(thermal recalibration) and store a table of offsets for different
cylinder ranges.  So when the head is told, for example, to move to
cylinder 1000, the correction table will tell it to move to where
physical cylinder 1000 should be and then add the correction delta
(plus or minus) for that cylinder range to figure out where to
actually move the heads to.

Now the heads are positioned on the correct cylinder and should be
centered on it.  If the drive gets a bad CRC after reading a cylinder,
it can use the CRC to correct the data or it can command that the data
be re-read, until a correctable read is obtained.  Last I heard, the
number of retries is of the order of 100 to 200 or more(??).  So this
will be noticeable - since 100 reads will require 100 revolutions of
the disk.  Retries like this will probably continue to provide
correctable data to the user and the disk drive will ignore the fact
that there is an area of disk where retries are constantly required.
This is what Steve Gibson picked up on for his SpinRite product.  If
he runs code that can determine that CRC corrections or re-reads are
required to retrieve good data, then he "knows" this is a likely area
of the disk to fail in the (possibly near) future.  So he relocates
the data in this area, marks the area "bad", and the drive avoids it.
Given what I wrote earlier, that there could be some physical damage
in this general area - having the heads avoid it is a Good Thing.
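
(To put a number on "noticeable": a 4,200 rpm laptop drive takes
60/4200, or about 14.3 ms, per revolution, so 100 re-reads of one
block cost roughly 1.4 seconds; even at 7,200 rpm it is still about
0.8 seconds.)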

So the question is: how relevant is storing multiple copies of data on
a disk in terms of the mechanics of modern disk drive failure modes?
Without some "SpinRite"-like functionality in the code, the drive will
continue to access the deteriorating disk cylinders, now a localized
failure, and eventually it will deteriorate further and cause enough
material to break free to take out the head(s).  At which time the 

Re: [zfs-discuss] Re: Proposal: multiple copies of user data

2006-09-12 Thread Darren J Moffat

Anton B. Rang wrote:
The biggest problem I see with this is one of observability: if not
all of the data is encrypted yet, what should the encryption property
say? If it says encryption is on, then the admin might think the data
is "safe"; but if it says it is off, that isn't the truth either,
because some of it may still be encrypted.



From a user interface perspective, I'd expect something like


  Encryption: Being enabled, 75% complete
or
  Encryption: Being disabled, 25% complete, about 2h23m remaining


and if we are still writing to the file systems at that time ?

Maybe this really does need to be done with the file system locked.


I'm not sure how you'd map this into a property (or several), but it seems like "on"/"off" ought to 
be paired with "transitioning to on"/"transitioning to off" for any changes which aren't 
instantaneous.
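
(A purely hypothetical mock-up of how such a paired property might
render - this is not actual zfs output:)

  $ zfs get encryption tank/home
  NAME       PROPERTY    VALUE                      SOURCE
  tank/home  encryption  transitioning-to-on (75%)  local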


Agreed, and checksum and compression would have the same issue if
there were a mechanism to rewrite with the new checksums or
compression settings.


--
Darren J Moffat