Miles Nordin wrote:
"sl" == Scott Lawson <scott.law...@manukau.ac.nz> writes:

    sl> Electricity *is* the lifeblood of available storage.

I never meant to suggest computing machinery could run without
electricity.  My suggestion is, if your focus is _reliability_ rather
than availability, meaning you don't want to lose the contents of a
pool, you should think about what happens when power goes out, not
just how to make sure power Never goes out Ever Absolutely because we
Paid and our power is PERFECT.
My focus is on both. I understand that nothing is ever perfect, only that one should strive for it where possible. But when you live somewhere like NZ, where our power grid is creaky, outages start becoming a real liability that needs mitigation, that's all. I am sure there are plenty of ZFS users in the same boat.
 * pools should not go corrupt when power goes out.
Absolutely agree.
 * UPS does not replace need for NVRAM's to have batteries in it
   because there are things between the UPS and the NVRAM like cords
   and power supplies, and the UPS themselves are not reliable enough
   if you have only one, and the controller containing the NVRAM may
   need to be hard-booted because of bugs.
Fully understand this too. If, as I do, you use hardware RAID arrays behind zpool vdevs, then it is very important that this gear is maintained, that the batteries backing the RAID array write caches are healthy, and that enough power is available for the arrays to flush their caches to disk before the batteries go flat. This is certainly true of any file system built upon LUNs from hardware-backed RAID arrays.
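(As an aside: on Solaris there is a tunable that tells ZFS to stop issuing cache-flush requests, which some sites use when the array write cache is genuinely non-volatile and battery-backed. A minimal sketch, assuming you have confirmed that with your array vendor; it should never be used to paper over an array that simply ignores SYNC CACHE:

    # /etc/system -- only if the array write cache is truly non-volatile
    set zfs:zfs_nocacheflush = 1

If the batteries are at all questionable, leave cache flushing alone.)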
 * supplying superexpensive futuristic infallible fancypower to all
   disk shelves does not mean the SYNC CACHE command can be thrown
   out.  maybe the power is still not infallible, or maybe there will
   be SAN outages or blown controllers or shelves with junky software
   in them that hang the whole array when one drive goes bad.
This is in general why I use mirrored vdevs with LUNs provided from two different, geographically isolated arrays; hopefully that makes it less likely to be a problem (a rough sketch of such a layout is below). But yes, anything that ignores SYNC CACHE could pose a serious problem if that behaviour is hidden from ZFS by an array controller.
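For illustration only, a minimal sketch of what such a pool might look like; the pool name and device names are hypothetical, and the real LUN paths depend entirely on your SAN and multipathing setup:

    # each mirror pairs a LUN from array A with a LUN from array B
    zpool create tank \
        mirror c4t600A0B8000111111d0 c4t600A0B8000222222d0 \
        mirror c4t600A0B8000111112d0 c4t600A0B8000222223d0

With MPxIO the LUNs from both arrays typically appear under the same virtual controller, so only the WWN tells you which array a device came from.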
If you really care about availability:

 * reliability crosses into availability if you are planning to have
   fragile pools backed by a single SAN LUN, which may become corrupt
   if they lose power.  Maybe you're planning to destroy the pool and
   restore from backup in that case, and you have some
   carefully-planned offsite backup hierarchy that's always recent
   enough to capture all the data you care about.  But, a restore
   could take days, which turns two minutes of unavailable power into
   one day of unavailable data.  If there were no reliability problem
   causing pool loss during power loss, two minutes unavailable power
   maybe means 10min of unavailable data.
Agreed, and it is why I would recommend against a single hardware RAID SAN LUN for a zpool. At a bare minimum you would want to use copies=2 if you really care about your data. If you don't care about the data, then no problem, go ahead. I do use zpools for transient data that I don't care about, where I favour capacity over resiliency (the main thing I want there is L2ARC; think squid proxy server caches). A couple of example commands are sketched below.
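A minimal sketch of both ideas; the pool names, dataset name and cache device (tank/export, squidpool, c1t5d0) are placeholders, not anything from our environment:

    # store two copies of every block; only applies to data written afterwards
    zfs set copies=2 tank/export

    # add an L2ARC cache device to a capacity-oriented pool holding transient data
    zpool add squidpool cache c1t5d0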
 * there are reported problems with systems that take hours to boot
   up, ex. with thousands of filesystems, snapshots, or nfs exports,
   which isn't exactly a reliability problem, but is a problem.  That
   open issue falls into the above outage-magnification category, too.
I have seen this myself; it is not nice after a system reboot. I can't recall whether I have seen it recently, though. I seem to recall it was more of an issue around S10 U2 or U3.
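If you want a rough idea of whether a host is exposed to this, something like the following gives a quick count of datasets, snapshots and (roughly) NFS-shared filesystems; the thresholds at which boot time actually starts to hurt will vary:

    # how many filesystems, snapshots and NFS-shared datasets the host carries
    zfs list -H -t filesystem | wc -l
    zfs list -H -t snapshot | wc -l
    zfs get -H -o name,value sharenfs | grep -cv 'off$'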
I just don't like the idea people are building fancy space-age data
centers and then thinking they can safely run crappy storage software
that won't handle power outages because they're above having to worry
about all that little-guy nonsense.  A big selling point of the last
step-forward in filesystems (metadata logging) was that they'd handle
power failures with better consistency guarantees and faster
reboots---at the time, did metadata logging appeal only to people with
unreliable power?  I hope not.
I am just trying to put forward the perspective of a big user here. This thread has already generated numerous off-list messages from people wanting more information on the methodology we like to use. If I can be of help to people, I will.
never mind those of us who find these filesystem features important
because we'd like cheaper or smaller systems, with cords that we
sometimes trip over, that are still useful.  I think having such
protections in the storage software, and having them actually fully
working, not just imaginary or fragile, is always useful,
Absolutely. It is all part of the big picture, albeit probably *the* most important part. Consistency of your data is the paramount concern for everyone who stores it. I just like to make sure it is also available, not merely consistent on disk.
 isn't
something you can put yourself above by ``careful power design'' or
``paying for it'' because without them, in a disaster you've got this
brittle house-of-cards system that cracks once you deviate from the
specific procedures you've planned.
Generally a system's resilience to failure events is tested at commissioning time and then never tested again once the system goes live and has patches applied, security updates installed, bugs introduced, and so on. So in my experience trying to mitigate outage risks as much as possible is a good idea, and that is the root of the point I am trying to convey. Calling this a house of cards because it relies on adherence to operational procedures may be fair, but it is at least somewhat better than a mud hut.
I'm glad your disaster planning has stood the test of practice so
well.  But we're supposed to have an industry baseline right now that
databases and MTA's and NFS servers and their underlying filesystems
can lose power without losing any data, and I think we should stick to
that rather than letting it slip.
Absolutely.