Re: [zfs-discuss] Motherboard for home zfs/solaris file server

2009-07-20 Thread Keith Bierman

hopefully the lead itself won't be radioactive)

Or that the chips themselves don't suffer from some alpha-particle emission. It  
has happened, and from premium vendors.


There is no replacement for good system design :)

khb...@gmail.com
Sent from my iPod

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs on 32 bit?

2009-06-15 Thread Keith Bierman

I had a 32-bit ZFS server up for months with no such issue.

Performance is not great, but it's no buggier than anything else, war  
stories from the initial ZFS drops notwithstanding.


khb...@gmail.com | keith.bier...@quantum.com
Sent from my iPod

On Jun 15, 2009, at 3:59 PM, Orvar Korvar no-re...@opensolaris.org  
wrote:


I've asked the same question about 32-bit. I created a thread and  
asked; it was something like "does 32-bit ZFS fragment RAM?" or  
something similar. As I remember it, 32-bit had some issues, mostly  
due to RAM fragmentation or something similar. The result was that  
you had to restart your server after a while. But I shut down my  
desktop PC every night, so I never had any issues.

--
This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Observation of Device Layout vs Performance

2009-01-06 Thread Keith Bierman

On Jan 6, 2009, at 9:44 AM, Jacob Ritorto wrote:

  but catting /dev/zero to a file in the pool now f

Do you get the same sort of results from /dev/random?

I wouldn't be surprised if /dev/zero turns out to be a special case.

Indeed, using any of the special files is probably not ideal.


-- 
Keith H. Bierman   khb...@gmail.com  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Observation of Device Layout vs Performance

2009-01-06 Thread Keith Bierman

On Jan 6, 2009, at 11:12 AM, Bob Friesenhahn wrote:

 On Tue, 6 Jan 2009, Keith Bierman wrote:

 Do you get the same sort of results from /dev/random?

 /dev/random is very slow and should not be used for benchmarking.

Not directly, no. But copying from /dev/random to a real file and  
using that should provide better insight than all zeros or all ones  
(I have seen clever devices optimize things away).

Tests like bonnie are probably a better bet than rolling one's own,  
although the latter is good for building intuition ;
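
If you do roll your own, something along these lines is what I mean -- an  
untested sketch, with the path and sizes made up, and bonnie or filebench  
still the better choice for numbers you intend to share:

#!/usr/bin/env python3
# Rough sequential-write timing with non-trivial data, so a clever device
# can't simply optimize away all-zero blocks. Illustration only.
import os
import time

PATH = "/tank/bench.dat"   # hypothetical scratch file in the pool under test
BLOCK = 1 << 20            # 1 MiB per write
COUNT = 1024               # ~1 GiB total

# One random block, generated up front so os.urandom() stays out of the
# timed loop. (A dedup-capable target could still collapse the repeats.)
payload = os.urandom(BLOCK)

start = time.time()
with open(PATH, "wb") as f:
    for _ in range(COUNT):
        f.write(payload)
    f.flush()
    os.fsync(f.fileno())   # make sure the data really reached the pool
elapsed = time.time() - start

print("wrote %d MiB in %.1f s (%.1f MiB/s)" % (COUNT, elapsed, COUNT / elapsed))
os.remove(PATH)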

-- 
Keith H. Bierman   khb...@gmail.com  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] VERY URGENT Compliance for ZFS

2008-11-10 Thread Keith Bierman

On Nov 10, 2008, at 4:47 AM, Vikash Gupta wrote:

 Hi Parmesh,

 Looks like this tender specification meant for Veritas.

 How do you handle this particular clause ?
 Shall provide Centralized, Cross platform, Single console management
 GUI

Does it really make sense to have a discussion like this on an  
external, open list? Contracts are customarily private and company-  
confidential.

-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-11 Thread Keith Bierman

On Oct 10, 2008, at 7:55 PM, David Magda wrote:


 If someone finds themselves in this position, what advice can be
 followed to minimize risks?

Can you ask for two LUNs on different physical SAN devices and have  
an expectation of actually getting them?



-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] An slog experiment (my NAS can beat up your NAS)

2008-10-09 Thread Keith Bierman

On Oct 8, 2008, at 4:27 PM, Jim Dunham wrote:
 ... a single Solaris node cannot be both
 the primary and the secondary node.

 If one wants this type of mirror functionality on a single node, use
 host based or controller based mirroring software.


If one is running multiple zones, couldn't AVS be fooled into thinking  
that one zone was the primary and the other the secondary?
-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Intel M-series SSD

2008-09-10 Thread Keith Bierman

On Sep 10, 2008, at 11:40 AM, Bob Friesenhahn wrote:


 Write performance to SSDs is not all it is cracked up to be.  Buried
 in the AnandTech writeup, there is mention that while 4K can be
 written at once, 512KB needs to be erased at once.  This means that
 write performance to an empty device will seem initially pretty good,
 but then it will start to suffer as 512KB regions need to be erased to
 make space for more writes.

That assumes that one doesn't code up the system to batch up erases  
prior to writes.
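
Back of the envelope, using the numbers quoted above (4K program unit,  
512KB erase unit) -- a toy calculation, not a model of any particular  
controller:

# Worst-case write amplification when every small write lands in an
# already-used erase block: the drive must read-modify-erase-rewrite the
# whole 512KB region to service a single 4KB update.
PAGE_KB = 4        # smallest programmable unit, per the quoted article
ERASE_KB = 512     # smallest erasable unit

print("worst case: %.0fx amplification" % (ERASE_KB / PAGE_KB))   # 128x

# A controller that batches erases (or keeps a pool of pre-erased blocks)
# and packs many pending 4KB writes into one erase block amortizes this.
PENDING_WRITES = 64   # hypothetical batch of coalesced 4KB writes
print("with batching: %.1fx amplification"
      % (ERASE_KB / (PAGE_KB * PENDING_WRITES)))                  # 2.0x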

...
 returns to the user faster.  This may increase the chance of data loss
 due to power failure.


Presumably anyone deft enough to design such an enterprise-grade  
device will be able to provide enough super-capacitor (or equivalent)  
capacity to ensure that DRAM is flushed to the flash before anything  
bad happens.

Clever use of such devices in L2ARC and slog ZFS configurations (or  
moral equivalents in other environments) is pretty much the only  
affordable way (vs. huge numbers of spindles) to bridge the gap  
between rotating rust and massively parallel CPUs.
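
For reference, wiring such devices into a pool is a one-liner per role; a  
sketch, with made-up pool and device names:

import subprocess

POOL = "tank"          # hypothetical pool
SLOG_DEV = "c1t5d0"    # hypothetical SSD to hold the separate intent log
L2ARC_DEV = "c1t6d0"   # hypothetical SSD to act as an L2ARC read cache

# Add a dedicated log (slog) device and a cache (L2ARC) device to the pool.
subprocess.run(["zpool", "add", POOL, "log", SLOG_DEV], check=True)
subprocess.run(["zpool", "add", POOL, "cache", L2ARC_DEV], check=True)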

One imagines that Intel will go back to fabbing their own at some  
point; that is closer to their usual business model than OEMing other  
people's parts ;


-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Intel M-series SSD

2008-09-10 Thread Keith Bierman

On Sep 10, 2008, at 12:37 PM, Bob Friesenhahn wrote:

 On Wed, 10 Sep 2008, Keith Bierman wrote:

 written at once, 512KB needs to be erased at once.  This means that
 write performance to an empty device will seem initially pretty good,
 but then it will start to suffer as 512KB regions need to be erased to
 make space for more writes.

 That assumes that one doesn't code up the system to batch up erases
 prior to writes.

 Is the notion of block erase even exposed via SATA/SCSI  
 protocols? Maybe it is for CD/DVD type devices.

 This is something that only the device itself would be aware of.  
 Only the device knows if the block has been used before.


A conspiracy between the device and a savvy host is sure to emerge ;
 ...
 That is reasonable.  It adds to product cost and size though. Super- 
 capacitors are not super-small.

True, but for enterprise-class devices they are sufficiently small.  
Laptops will have a largish battery and won't need the caps ;  
desktops will be on their own.

-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] eWeek: corrupt file brought down FAA's antiquated IT system

2008-08-28 Thread Keith Bierman

On Aug 28, 2008, at 11:38 AM, Bob Friesenhahn wrote:

  The old FORTRAN code
 either had to be ported or new code written from scratch.

Assuming it WAS written in FORTRAN, there is no reason to believe it  
wouldn't just compile with a modern Fortran compiler. I've often run  
codes originally written in the sixties without any significant  
changes (very old codes may have used the FREQUENCY statement,  
toggled front-panel lights, or sensed toggle switches ... but that's  
pretty rare).



-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] pulling disks was: ZFS hangs/freezes after disk failure,

2008-08-27 Thread Keith Bierman

On Aug 27, 2008, at 11:17 AM, Richard Elling wrote:
   In my pile of broken parts, I have devices
 which fail to indicate an unrecoverable read, yet do indeed suffer
 from forgetful media.

A long time ago, in a hardware company long since dead and buried, I spent  
some months trying to find an intermittent error in the last bits of  
a complicated floating-point application. It only occurred when disk  
striping was turned on (but the OS and device codes checked cleanly).  
In the end, it turned out that one of the device vendors had modified  
the specification slightly (by about a nanosecond), and the result was  
that the least significant bits were often wrong when we drove the disk  
cage to its max.

Errors were occurring randomly (during swapping, paging, etc.) but no  
other application noticed. As the error was within the margin of  
error, a less stubborn analyst might not have made a series of  
federal cases about the non-determinism ;

My point is that undetected errors happen all the time; that people  
don't notice doesn't mean that they don't happen ...


-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS deduplication

2008-08-26 Thread Keith Bierman

On Aug 26, 2008, at 9:58 AM, Darren J Moffat wrote:


 than a private copy. I wouldn't expect that to have too big an  
 impact (I


On a SPARC CMT (Niagara 1+) based system wouldn't that be likely to  
have a large impact?

-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS deduplication

2008-08-26 Thread Keith Bierman
On Tue, Aug 26, 2008 at 10:11 AM, Darren J Moffat [EMAIL PROTECTED] wrote:

 Keith Bierman wrote:





 On a SPARC CMT (Niagara 1+) based system wouldn't that be likely to have a
 large impact?


 UltraSPARC T1 has no hardware SHA256 so I wouldn't expect any real change
 from running the private software sha256 copy in ZFS versus the software
 sha256 in the crypto framework.  The


Sorry for the typo (or thinko; I did know that, but it's possible it
slipped my mind in the moment). Admittedly most community members probably
don't have an N2 to play with, but it might well be available in the data center.
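
If anyone wants a ballpark figure for what the software hash costs on their
own box, timing it directly is easy enough -- a quick sketch that says nothing
about the in-kernel copy or the crypto framework, just raw userland SHA-256
throughput:

import hashlib
import os
import time

BUF = os.urandom(1 << 20)   # 1 MiB of input
ROUNDS = 256                # hash 256 MiB in total

start = time.time()
h = hashlib.sha256()
for _ in range(ROUNDS):
    h.update(BUF)
h.hexdigest()
elapsed = time.time() - start

print("software sha256: %.1f MiB/s" % (ROUNDS / elapsed))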
-- 
Keith Bierman
[EMAIL PROTECTED]
kbiermank AIM
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] confusion and frustration with zpool

2008-07-09 Thread Keith Bierman

On Jul 9, 2008, at 11:12 AM, Miles Nordin wrote:

 ah == Al Hopper [EMAIL PROTECTED] writes:

 ah I've had bad experiences with the Seagate products.

 I've had bad experiences with all of them.
 (maxtor, hgst, seagate, wd)

 ah My guess is that it's related to duty cycle -

 Recently I've been getting a lot of drives from companies like newegg
 and zipzoomfly that fail within the first month.  The rate is high
 enough that I would not trust a two-way mirror with 1mo old drives.


While I've always had good luck with zipzoomfly, infant mortality  
is a well-known feature of many devices. Your advice to do some burn-  
in testing of drives before putting them into full production is  
probably very sound for sites large enough to maintain a bit of  
inventory ;
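
The write-then-verify half of a burn-in is simple to script -- an untested  
sketch aimed at a scratch file rather than a raw device, with the path and  
sizes made up:

import hashlib
import os

PATH = "/pool/burnin.dat"   # hypothetical scratch file on the new drive's pool
BLOCK = 1 << 20             # 1 MiB
COUNT = 4096                # ~4 GiB; scale up (and repeat) for a real burn-in

# Write pseudo-random data, keeping a running checksum of what went out.
out_hash = hashlib.sha256()
with open(PATH, "wb") as f:
    for _ in range(COUNT):
        buf = os.urandom(BLOCK)
        out_hash.update(buf)
        f.write(buf)
    os.fsync(f.fileno())

# Read it back and check nothing was silently mangled. Note that the
# read-back may be served from cache; use files larger than RAM, or verify
# again after a reboot, for a more honest test of the media.
in_hash = hashlib.sha256()
with open(PATH, "rb") as f:
    while True:
        buf = f.read(BLOCK)
        if not buf:
            break
        in_hash.update(buf)

print("OK" if in_hash.digest() == out_hash.digest() else "MISMATCH")
os.remove(PATH)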


-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS deduplication

2008-07-08 Thread Keith Bierman

On Jul 8, 2008, at 11:00 AM, Richard Elling wrote:

 much fun for people who want to hide costs.  For example, some bright
 manager decided that they should charge $100/month/port for ethernet
 drops.  So now, instead of having a centralized, managed network with
 well defined port mappings, every cube has an el-cheapo ethernet  
 switch.
 Saving money?  Not really, but this can be hidden by the accounting.


Indeed, it actively hurts performance (mixing Sun Ray, mobile, and  
fixed units on the same subnets rather than segregating by type).
-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume

2008-07-01 Thread Keith Bierman

On Jul 1, 2008, at 10:55 AM, Miles Nordin wrote:

 I don't think it's overrated at all.  People all around me are using
 this dynamic_pager right now, and they just reboot when they see too
 many pinwheels.  If they are ``quite happy,'' it's not with their
 pager.

I often exist in a sea of Mac users, and I've never seen them reboot  
other than after the periodic Apple updates. Killing Firefox every  
couple of days, or after visiting certain demented sites, is not  
uncommon and is probably a good idea.
 

 They see demand as capacity rather than temperature but...the machine
 does need to run out of memory eventually.  Don't drink the
 dynamic_pager futuristic kool-aid.  It's broken, both in theory and in
 the day-to-day experience of the Mac users around me.


I've got Macs with uptimes of months ... admittedly not in the same  
territory as my old SunOS or Solaris boxes, but Apple has seldom  
resisted the temptation to drop a security update or a QuickTime  
update for longer than that.

-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle and ZFS

2008-06-23 Thread Keith Bierman

On Jun 23, 2008, at 11:36 AM, Miles Nordin wrote:

 unplanned power outage that
 happens after fsync returns

Aye, but isn't that the real rub ... when the power fails after the  
write but *before* the fsync has occurred...
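
The usual discipline, for what it's worth, is that an application may only  
count data as durable once fsync (or an equivalent) has returned -- a  
minimal sketch of the pattern, with a made-up path:

import os

def durable_write(path, data):
    # write() alone leaves the data in OS buffers; a power cut between
    # write() and fsync() can lose it, which is exactly the window above.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)   # only after this returns should the data be on stable storage
    finally:
        os.close(fd)
    # Only now is it safe to acknowledge the write to a caller.

durable_write("/tank/app/record.dat", b"payload")   # hypothetical path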


-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-12 Thread Keith Bierman

On Jun 12, 2008, at 12:46 PM, Chris Siebenmann wrote:


  Or to put it another way: disk space is a permanent commitment,
 servers are not.


In the olden times (e.g. 1980s) on various CDC and Univac timesharing  
services, I recall there being two kinds of storage ... dayfiles  
and permanent files. The former could (and as a matter of policy did)  
be removed at the end of the day.

It was typically cheaper to move the fraction of one's dayfile output  
to tape, and have it rolled back in the next day ... but that was an  
optimization (or pessimization if the true costs were calculated).

I could easily imagine providing two tiers of storage for a  
university environment ... one which isn't backed up and doesn't  
come with any serious promises ... which could be pretty inexpensive  
... and a second tier with the kind of commitments you suggest are  
required.

The no-promises tier should still be better than storing things in /tmp,  
but could approach consumer pricing ... and still be good enough for a  
lot of uses.
-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can't rm file when No space left on device...

2008-06-05 Thread Keith Bierman

On Jun 5, 2008, at 8:58 PM, Brad Diggs wrote:

 Hi Keith,

 Sure you can truncate some files but that effectively corrupts
 the files in our case and would cause more harm than good. The
 only files in our volume are data files.




So an rm is OK, but a truncation is not?

Seems odd to me, but if that's your constraint, so be it.

-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Hardware Check, OS X Compatibility, NEWBIE!!

2008-06-04 Thread Keith Bierman

On Jun 4, 2008, at 10:47 AM, Bill Sommerfeld wrote:


 On Wed, 2008-06-04 at 11:52 -0400, Bill McGonigle wrote:
 but we got one server in
 where 4 of the 8 drives failed in the first two months, at which
 point we called Seagate and they were happy to swap out all 8 drives
 for us.   I suspect a bad lot, and even found some other complaints
 about the lot on Google.

 Problems like that seem to pop up with disturbing regularity, and have
 done so for decades.  (Anyone else remember the DEC RA81 glue  
 problem in
 around 1985-1986?)

 I've thought for some time that a good way to defend against the bad
 lot problem (if you can manage it) is to buy half of your disks from
 each of two manufacturers and then set up mirror pairs containing one
 disk of each model...




I think it's a bit more common to arrange to have same-vendor parts  
but different lots. There are still plenty of potential correlations:  
shared chassis, shared power, etc. mean shared vibration, shared  
environment, and shared human-error sources ;



-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Project Hardware

2008-06-02 Thread Keith Bierman

On Jun 2, 2008, at 3:24 AM, Erik Trimble wrote:

 Keith Bierman wrote:
 On May 30, 2008, at 6:59 PM, Erik Trimble wrote:


 The only drawback of the older Socket 940 Opterons is that they  
 don't
 support the hardware VT extensions, so running a Windows guest   
 under xVM
 on them isn't currently possible.



 That is correct. VirtualBox does _not_ require the VT extensions.   
 I was referring to xVM, which I'm still taking as synonymous with  
 the Xen-based system.  xVM _does_ require the VT hardware  
 extensions to run guest OSes in an unmodified form, which currently  
 includes all flavors of Windows.



Ah, Marketing rebranding befuddles again.

It's "Sun xVM VirtualBox (tm)" as best I can tell from sun.com, so I  
assumed you were using xVM in the generic sense, not as Xen vs.  
VirtualBox.


-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Project Hardware

2008-06-01 Thread Keith Bierman

On May 30, 2008, at 6:59 PM, Erik Trimble wrote:

 The only drawback of the older Socket 940 Opterons is that they don't
 support the hardware VT extensions, so running a Windows guest  
 under xVM
 on them isn't currently possible.



 From the VirtualBox manual, page 11:

• No hardware virtualization required. VirtualBox does not require processor
features built into newer hardware like VT-x (on Intel processors) or AMD-V
(on AMD processors). As opposed to many other virtualization solutions, you
can therefore use VirtualBox even on older hardware where these features are
not present. In fact, VirtualBox’s sophisticated software techniques are
typically faster than hardware virtualization, although it is still possible
to enable hardware virtualization on a per-VM basis. Only for some exotic
guest operating systems like OS/2, hardware virtualization is required.




I've been running Windows under OpenSolaris on an aged 32-bit Dell.  
I'm morally certain it lacks the hardware support, and in any event,  
the VirtualBox configuration is set to avoid using the VT extensions anyway.

Runs fine. Not the fastest box on the planet ... but it's got limited  
DRAM.



-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS sharing options for Windows

2008-05-30 Thread Keith Bierman

On May 30, 2008, at 10:45 AM, Craig Smith wrote:

 The tough thing is trying to make this fit
 well in a Windows world.

If you hang all the disks off the OpenSolaris system directly and  
export via CIFS ... isn't it just a NAS box from the Windows  
perspective? If so, how is it any harder to explain/fit than a NetApp  
box (or any other commercial NAS solution)?



-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS sharing options for Windows

2008-05-30 Thread Keith Bierman

On May 30, 2008, at 6:49 AM, Craig J Smith wrote:


  It also should be noted that I am
 having to run on Solaris and not OpenSolaris due to Adaptec  
 am79c973 SCSI
 driver issues in OpenSolaris.

Well, that is probably a showstopper then, since the in-kernel CIFS  
support isn't in the production Solaris release yet.



-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Project Hardware

2008-05-28 Thread Keith Bierman

On May 28, 2008, at 10:27 AM, Richard Elling wrote:

 Since the mechanics are the same, the difference is in the electronics


In my very distant past, I did QA work for an electronic component  
manufacturer. Even parts that were "identical" were expected to  
behave quite differently ... based on population statistics. That is,  
the high-rel MilSpec parts were from batches with no failures (even  
under very harsh conditions beyond the normal operating mode, and all  
tests to destruction showed only the expected failure modes), and the  
hobbyist-grade components were those whose cohort *failed* all the  
testing (and whose destructive testing could highlight abnormal failure  
modes).

I don't know that drive builders do the same thing, but I'd kinda  
expect it.

-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The ZFS inventor and Linus sitting in a tree?

2008-05-20 Thread Keith Bierman

On May 20, 2008, at 10:42 AM, Joerg Schilling wrote:

 Bob Friesenhahn [EMAIL PROTECTED] wrote:

 ...
 It may be that you confuse the term "work" in trying to extend it  
 in a wrong way.
...many wise words elided...

Not being a lawyer, and this not being a legal forum ... can we leave  
license analysis alone?

-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS pool/filesystem layout design considerations

2008-05-14 Thread Keith Bierman


On May 14, 2008, at 10:06 AM, Todd E. Moore wrote:

I'm working with a group who is designing an application that  
distributes redundant copies of their data across multiple server  
nodes; something akin to RAIS (redundant array of independent  
servers).


That part sounds good.

Within the individual server, they have an application that stores  
the particular data into a file on a filesystem based on a hash or  
some other means by which to distribute the data across the various  
filesystems.


That sounds potentially good, if the underlying filesystems aren't  
reliable.
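
(The hash-based placement itself is the easy part -- something like the  
sketch below, with made-up mount points; the interesting questions are  
all about what sits underneath each filesystem.)

import hashlib

# Hypothetical mount points, one per zpool/filesystem in their design.
FILESYSTEMS = ["/pool00", "/pool01", "/pool02", "/pool03"]

def placement(key):
    # Map an object key onto one of the filesystems by hashing it.
    digest = hashlib.sha1(key.encode()).digest()
    return FILESYSTEMS[int.from_bytes(digest[:4], "big") % len(FILESYSTEMS)]

print(placement("customer-42/object-7"))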


In their early testing, they found performance gains of ZFS  
compared to other filesystems.  As they begin to think about the  
production implementation they are considering the following design  
using external JBOD arrays - each drive is a separate zpool with a  
single filesystem


That seems like a very bad idea to me. If a system has multiple  
drives, using RAIDZ or some equivalent would be much sounder than  
relying on each drive to remain sane. Of course, their multiple  
copies can save them ... unless there's some correlated event (e.g.  
power surge) that causes failures in multiple drives and even  
multiple systems.

...
I have my concerns regarding this design, but I do not have the in- 
depth knowledge of ZFS to make the case for or against this design  
approach.  I need help to identify the pros/cons so I can continue  
the design discussion?



As you are at Sun, it would seem to me you should tap into the RAS  
expertise and tools available internally to evaluate the  
probabilistic failure modes in light of field experience with  
various components. I'd expect that multiple systems, each of which  
has RAIDZ ZFS pools (leveraging multiple disks per pool), should have  
much higher RAS figures than the proposed alternative.




--
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 24-port SATA controller options?

2008-04-15 Thread Keith Bierman


On Apr 15, 2008, at 10:58 AM, Tim wrote:




On Tue, Apr 15, 2008 at 10:09 AM, Maurice Volaski  
[EMAIL PROTECTED] wrote:

I have 16 disks in RAID 5 and I'm not worried.

I'm sure you're already aware, but if not, 22 drives in a raid-6 is
absolutely SUICIDE when using SATA disks.  12 disks is the upper end of
what you want even with raid-6.  The odds of losing data in a 22-disk
raid-6 are far too great to be worth it if you care about your data.  /rant

You could also be driving your car down the freeway at 100mph drunk,
high, and without a seatbelt on and not be worried.  The odds will
still be horribly against you.




Perhaps providing the computations rather than the conclusions would  
be more persuasive on a technical list ;


--
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 24-port SATA controller options?

2008-04-15 Thread Keith Bierman

On Apr 15, 2008, at 11:18 AM, Bob Friesenhahn wrote:
 On Tue, 15 Apr 2008, Keith Bierman wrote:

 Perhaps providing the computations rather than the conclusions  
 would be more persuasive  on a technical list ;

 No doubt.  The computations depend considerably on the size of the  
 disk drives involved.  The odds of experiencing media failure on a  
 single 1TB SATA disk are quite high.  Consider that this media  
 failure may occur while attempting to recover from a failed disk.   
 There have been some good articles on this in USENIX Login magazine.

 ZFS raidz1 and raidz2 are NOT directly equivalent to RAID5 and  
 RAID6 so the failure statistics would be different.  Regardless,  
 single disk failure in a raidz1 substantially increases the risk  
 that something won't be recoverable if there is a media failure  
 while rebuilding. Since ZFS duplicates its own metadata blocks, it  
 is most likely that some user data would be lost but the pool would  
 otherwise recover.  If a second disk drive completely fails, then  
 you are toast with raidz1.

 RAID5 and RAID6 rebuild the entire disk while raidz1 and raidz2  
 only rebuild existing data blocks so raidz1 and raidz2 are less  
 likely to experience media failure if the pool is not full.

Indeed; but worked illustrative examples are apt to be more helpful  
than blanket pronouncements ;
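
Fair enough; here is a rough worked example of the media-error-during-rebuild  
arithmetic, with the unrecoverable-read-error rate and disk size picked  
purely for illustration (check the datasheet for the drives you actually own):

# Probability of hitting at least one unrecoverable read error (URE) while
# reading the surviving disks during a RAID-5/raidz1 rebuild.
URE_RATE = 1e-14       # assumed: one unrecoverable error per 1e14 bits read
DISK_TB = 1.0          # assumed: 1 TB SATA drives
SURVIVORS = 21         # disks that must be read to rebuild a 22-drive group

bits_read = SURVIVORS * DISK_TB * 1e12 * 8
p_hit = 1.0 - (1.0 - URE_RATE) ** bits_read   # chance of at least one URE

print("bits read during rebuild:  %.2e" % bits_read)
print("P(>=1 URE during rebuild): %.0f%%" % (100 * p_hit))   # roughly 80%

# Caveats: raidz only resilvers allocated blocks, and raidz2/RAID-6 can
# absorb a URE while rebuilding a single failed disk, so the real exposure
# is lower than this worst-case raidz1/RAID-5 figure.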

-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Administration

2008-04-09 Thread Keith Bierman

On Apr 9, 2008, at 6:54 PM, Wee Yeh Tan wrote:
 I'm just thinking out loud.  What would be the advantage of having
 periodic snapshot taken within ZFS vs invoking it from an external
 facility?

I suspect that the people requesting this really want a unified  
management tool (GUI and possibly CLI). Whether the actual  
implementation lives inside the filesystem code or is driven by  
cron or an equivalent is probably irrelevant.

Their point, I think, is that we've got this nice management-free  
technology ... except for these bits that still have to be done  
independently and that are (to the non-Unix-experienced) somewhat arcane. If  
we aspire to achieve the sort of user-friendliness that is the Mac,  
then there's work to be done in this area ;
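
For the record, the external-facility version really is tiny -- a sketch of  
what a cron job (or an SMF periodic service) might invoke, with the dataset  
name made up:

#!/usr/bin/env python3
# Take a timestamped snapshot of one dataset; meant to be run from cron.
import subprocess
import time

DATASET = "tank/home"   # hypothetical dataset
stamp = time.strftime("%Y%m%d-%H%M")

subprocess.run(["zfs", "snapshot", "%s@auto-%s" % (DATASET, stamp)], check=True)
# A companion job would prune old snapshots with "zfs destroy" on some schedule.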




-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Downgrade zpool version?

2008-04-07 Thread Keith Bierman

On Apr 7, 2008, at 1:46 PM, David Loose wrote:
  my Solaris samba shares never really played well with iTunes.


Another approach might be to stick with Solaris on the server and  
run netatalk (netatalk.sourceforge.net) instead of Samba (or, you  
know, your Macs can speak NFS ;).
-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss