Re: [zfs-discuss] zfs rewrite?

2007-01-27 Thread Toby Thain


On 27-Jan-07, at 4:57 AM, Frank Cusack wrote:

On January 27, 2007 12:27:17 AM -0200 Toby Thain  
<[EMAIL PROTECTED]> wrote:

On 26-Jan-07, at 11:34 PM, Pawel Jakub Dawidek wrote:

3. I created a file system with a huge amount of data, where most of the
data is read-only. I changed my server from an intel to a sparc64 machine.
Adaptive endianness only changes byte order to native on write, and because
the file system is mostly read-only, it'll need to byteswap all the time.

And here comes 'zfs rewrite'!


Why would this help? (Obviously file data is never 'swapped').


Metadata (incl checksums?) still has to be byte-swapped.


I'm aware, but is this really ever going to be an issue?

--T


Or would
atime updates also force a metadata update?  Or am I totally mistaken?

-frank
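The adaptive-endianness scheme under discussion amounts to: write metadata in the writer's native byte order, and let the reader decide from a known magic value whether to byte-swap. A minimal sketch of the idea (the record layout below is made up for illustration; only the spirit matches ZFS, which tags blocks similarly, e.g. via its uberblock magic 0x00bab10c):

```python
# Sketch of adaptive endianness: metadata records are written in the writer's
# native byte order; the reader detects the order from a known magic value and
# byte-swaps only if it differs. Record layout invented for illustration.
import struct

MAGIC = 0x00BAB10C

def pack_native(fields: tuple[int, ...]) -> bytes:
    # "=": host byte order, as a native writer would emit.
    return struct.pack(f"={1 + len(fields)}Q", MAGIC, *fields)

def read_fields(block: bytes, count: int) -> tuple[int, ...]:
    (magic,) = struct.unpack_from("<Q", block, 0)
    # If the magic reads back byte-swapped, the block was written on a
    # machine of the other endianness: decode with the opposite order.
    order = "<" if magic == MAGIC else ">"
    return struct.unpack_from(f"{order}{count}Q", block, 8)
```

This also shows why a mostly read-only pool keeps swapping after a migration: the swap happens per read, and only a rewrite would make the on-disk order native again.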


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS or UFS - what to do?

2007-01-27 Thread James C. McPherson

Selim Daoud wrote:

it would be good to have real data and not only guesses or anecdotes

this story about wrong blocks being written by RAID controllers
sounds like the anti-terrorism propaganda we are living in: exaggerate
the facts to catch everyone's attention.
It's going to take more than that to prove RAID controllers have been doing
a bad job for the last 30 years.
Let's come up with real stories with hard facts first


I have actual hard data and bitter experience (from support calls)
to back up the allegations that RAID controllers can and do write
bad blocks.

No, I cannot and will not provide specifics - I signed an NDA
which expressly deals with confidentiality of customer information.


What I can say is that if we'd had ZFS to manage the filesystems
in question, not only would we have detected the problem much
earlier, but the flow-on effect to the end-users would have been
much more easily managed.


James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
  http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson


[zfs-discuss] Re: Adding my own compression to zfs

2007-01-27 Thread roland
is it planned to add some other compression algorithm to zfs ?

lzjb is quite good and performs especially well, but I'd like to have 
better compression (bzip2?), no matter how much performance drops as a result. 

regards
roland
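For a rough sense of the trade-off being asked about, Python's zlib (a speed-oriented LZ77-class coder, loosely comparable in spirit to lzjb's design goals) and bz2 modules can be compared directly; bzip2 typically compresses tighter but costs far more CPU. A small benchmark sketch:

```python
# Rough ratio-vs-CPU comparison of zlib (gzip-class) and bzip2 on highly
# compressible data. Numbers vary by machine and input.
import bz2
import time
import zlib

data = b"the quick brown fox jumps over the lazy dog\n" * 10_000

for name, compress in [("zlib -9", lambda d: zlib.compress(d, 9)),
                       ("bzip2 -9", lambda d: bz2.compress(d, 9))]:
    t0 = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - t0
    print(f"{name}: {len(data)} -> {len(out)} bytes in {elapsed:.3f}s")
```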
 
 
This message posted from opensolaris.org


Re: [zfs-discuss] Re: high density SAS

2007-01-27 Thread Al Hopper
On Fri, 26 Jan 2007, Anton B. Rang wrote:

> > > How badly can you mess up a JBOD?
> >
> > Two words: vibration, cooling.
>
> Three more: power, signal quality.
>
> I've seen even individual drive cases with bad enough signal quality to cause 
> bit errors.

Yes - me too.  I was an early adopter of Fibre Channel, and the first FC
enclosure I had the misfortune to purchase had really terrible signal
quality and noise issues.  It took me a while to figure out whether it was the
beta drivers, brand-new FC HBA, copper wiring, Seagate disk drives or 
The breakthrough came when I got a simple FC interface board (from
Seagate[0]) that had a 3ft FC (copper) cable and could be plugged directly
into an FC disk drive.  At that point I knew that everything was solid,
from the computer system to the disk drive, and because the same drive would
not perform in the FC enclosure, it was obvious where the problem lay.
Looking inside the enclosure, it was wired, point-to-point, with cables
and connectors closely resembling the cable/connector that
carries digital audio from a CD-ROM to a PC motherboard.  It was
pretty obvious that the company that built this enclosure (they manufacture
a bunch of disk drive enclosures - many as OEM products) didn't realize
that they were now dealing with microwave signals and needed RF (Radio
Frequency) expertise to design a solid FC enclosure.

[0] it was intended for disk drive trouble-shooting/testing etc and is
still very useful if you "play" with FC disk drives.

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
   Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
 OpenSolaris Governing Board (OGB) Member - Feb 2006


Re: [zfs-discuss] Re: ZFS or UFS - what to do?

2007-01-27 Thread David Magda


On Jan 26, 2007, at 14:05, Ed Gould wrote:

It will work, but if the storage system corrupts the data, ZFS will  
be unable to correct it.  It will detect the error.


Unless you turn checksumming off. From zfs(1M):

checksum=on | off | fletcher2 | fletcher4 | sha256
Controls the checksum used to verify data integrity. The default  
value is “on”, which automatically selects an appropriate algorithm  
(currently, fletcher2, but this may change in future releases). The  
value “off” disables integrity checking on user data. Disabling  
checksums is NOT a recommended practice.
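The fletcher2 default mentioned above is a Fletcher-style sum chosen for speed. A simplified Fletcher-style checksum over 64-bit words conveys the idea (an illustration only, not ZFS's exact on-disk fletcher2, which keeps four 64-bit accumulators):

```python
# Simplified Fletcher-style checksum over 64-bit little-endian words, in the
# spirit of ZFS's fletcher2. Illustrative, not the exact on-disk algorithm.
import struct

MASK64 = 0xFFFFFFFFFFFFFFFF

def fletcher_like(data: bytes) -> tuple[int, int]:
    data = data + b"\x00" * (-len(data) % 8)   # pad to whole 64-bit words
    a = b = 0
    for (word,) in struct.iter_unpack("<Q", data):
        a = (a + word) & MASK64   # plain sum: catches bit flips
        b = (b + a) & MASK64      # sum of sums: also catches reordering
    return a, b
```

The second accumulator is what distinguishes Fletcher from a plain sum: swapping two words leaves `a` unchanged but changes `b`.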





Re: [zfs-discuss] Re: ZFS or UFS - what to do?

2007-01-27 Thread David Magda


On Jan 26, 2007, at 14:43, Gary Mills wrote:


Our Netapp does double-parity RAID.  In fact, the filesystem design is
remarkably similar to that of ZFS.  Wouldn't that also detect the
error?  I suppose it depends if the `wrong sector without notice'
error is repeated each time.  Or is it random?


On most (all?) other systems the parity only comes into effect when a  
drive fails. When all the drives are reporting "OK" most (all?) RAID  
systems don't use the parity data at all. ZFS is the first (only?)  
system that actively checks the data returned from disk, regardless  
of whether the drives are reporting they're okay or not.


I'm sure I'll be corrected if I'm wrong. :)
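The end-to-end check described here hinges on the checksum being stored with the *reference* to a block (the parent block pointer), not next to the data itself, so every read is verified against what the writer intended even when the drive reports success. A toy sketch of that idea (dict-as-disk; names invented for illustration):

```python
# Toy model of ZFS-style end-to-end verification: the checksum travels with
# the block pointer, not with the block, so a silently corrupted or
# misdirected read is caught regardless of what the drive claims.
import hashlib

def write_block(disk: dict, addr: int, data: bytes) -> str:
    disk[addr] = data
    # The hex digest would be stored in the parent block pointer.
    return hashlib.sha256(data).hexdigest()

def read_block(disk: dict, addr: int, expected: str) -> bytes:
    data = disk[addr]   # the "drive" reports OK either way
    if hashlib.sha256(data).hexdigest() != expected:
        raise IOError(f"checksum mismatch reading block {addr}")
    return data
```

Parity-only RAID, by contrast, would return the corrupt block without complaint here, since the drive itself never reported an error.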



Re: [zfs-discuss] zfs rewrite?

2007-01-27 Thread Frank Cusack

On January 27, 2007 6:15:29 AM -0200 Toby Thain <[EMAIL PROTECTED]> wrote:


On 27-Jan-07, at 4:57 AM, Frank Cusack wrote:


On January 27, 2007 12:27:17 AM -0200 Toby Thain
<[EMAIL PROTECTED]> wrote:

On 26-Jan-07, at 11:34 PM, Pawel Jakub Dawidek wrote:

3. I created a file system with a huge amount of data, where most of the
data is read-only. I changed my server from an intel to a sparc64 machine.
Adaptive endianness only changes byte order to native on write, and because
the file system is mostly read-only, it'll need to byteswap all the time.
And here comes 'zfs rewrite'!


Why would this help? (Obviously file data is never 'swapped').


Metadata (incl checksums?) still has to be byte-swapped.


I'm aware, but is this really ever going to be an issue?


Well, it IS extra work.  But yeah, it seems pretty insignificant to me.
-frank


[zfs-discuss] Re: Re: ZFS or UFS - what to do?

2007-01-27 Thread Anantha N. Srirama
I'm not sure what benefit you foresee from running a COW filesystem (ZFS) on a COW 
array (NetApp). 

Back to regularly scheduled programming: I still say you should let ZFS manage 
JBOD-type storage. I can personally recount the horror of relying upon an 
intelligent storage array (an EMC DMX3500 in our case). We had in-flight data 
corruption that EMC faithfully wrote, just like NetApp would in your case. 
Everybody is assuming that corruption or data loss occurs only on disks; it can 
happen anywhere. In a datacenter SAN you have so many more paths that can 
introduce data corruption. Hence the need for ensuring data integrity closest 
to the point where the data is used, namely ZFS. ZFS will not stop alpha-particle-induced 
memory corruption after data has been received by the server and verified to be correct. 
Sadly I've been hit with that as well.
 
 


Re: [zfs-discuss] Re: Re: ZFS or UFS - what to do?

2007-01-27 Thread Toby Thain


On 27-Jan-07, at 10:15 PM, Anantha N. Srirama wrote:

We had in-flight data corruption that EMC faithfully wrote, just  
like NetApp would in your case. Everybody is assuming that  
corruption or data loss occurs only on disks; it can happen  
anywhere. In a datacenter SAN you have so many more paths that can  
introduce data corruption. Hence the need for ensuring data  
integrity closest to the point where the data is used, namely ZFS.



Now how do we get this message out there and understood, fellow  
evangelicals? :)


--Toby

ZFS will not stop alpha-particle-induced memory corruption after  
data has been received by the server and verified to be correct. Sadly  
I've been hit with that as well.




Re: [zfs-discuss] Re: Re: ZFS or UFS - what to do?

2007-01-27 Thread Toby Thain


On 27-Jan-07, at 10:15 PM, Anantha N. Srirama wrote:

... ZFS will not stop alpha-particle-induced memory corruption  
after data has been received by the server and verified to be correct.  
Sadly I've been hit with that as well.



My brother points out that you can use a rad-hardened CPU. ECC should  
take care of the RAM. :-)


I wonder when the former will become data centre best practice?

--Toby


Re: [zfs-discuss] Re: Re: ZFS or UFS - what to do?

2007-01-27 Thread Gary Mills
On Sat, Jan 27, 2007 at 04:15:30PM -0800, Anantha N. Srirama wrote:
>
> I'm not sure what benefit you foresee from running a COW filesystem
> (ZFS) on a COW array (NetApp).

Assuming that that question was addressed to me, the primary feature
that I need from ZFS is snapshots.  The Netapp has snapshots too, but
they are done by disk blocks since, for an iSCSI LUN, the Netapp has
no concept of files.  ZFS snapshots allow restore of individual files
when users accidentally delete them.

As well, I do need a filesystem of some sort on the iSCSI LUN.  If
ZFS is superior to UFS in this application, I'd like to use it.

-- 
-Gary Mills--Unix Support--U of M Academic Computing and Networking-