Re: [zfs-discuss] ZFS and Storage

2006-06-29 Thread Jeff Victor
[EMAIL PROTECTED] wrote: On Wed, Jun 28, 2006 at 09:30:25AM -0400, Jeff Victor wrote: For example, if ZFS is mirroring a pool across two different storage arrays, a firmware error in one of them will cause problems that ZFS will detect when it tries to read the data. Further, ZFS would be able

Re[2]: [zfs-discuss] ZFS and Storage

2006-06-29 Thread Robert Milkowski
Hello przemolicc, Thursday, June 29, 2006, 10:08:23 AM, you wrote: ppf> On Thu, Jun 29, 2006 at 10:01:15AM +0200, Robert Milkowski wrote: >> Hello przemolicc, >> >> Thursday, June 29, 2006, 8:01:26 AM, you wrote: >> >> ppf> On Wed, Jun 28, 2006 at 03:30:28PM +0200, Robert Milkowski wrote: >> >>

Re: [zfs-discuss] ZFS and Storage

2006-06-29 Thread przemolicc
On Thu, Jun 29, 2006 at 10:01:15AM +0200, Robert Milkowski wrote: > Hello przemolicc, > > Thursday, June 29, 2006, 8:01:26 AM, you wrote: > > ppf> On Wed, Jun 28, 2006 at 03:30:28PM +0200, Robert Milkowski wrote: > >> ppf> What I wanted to point out is Al's example: he wrote about > >> damag

Re[2]: [zfs-discuss] ZFS and Storage

2006-06-29 Thread Robert Milkowski
Hello przemolicc, Thursday, June 29, 2006, 8:01:26 AM, you wrote: ppf> On Wed, Jun 28, 2006 at 03:30:28PM +0200, Robert Milkowski wrote: >> ppf> What I wanted to point out is Al's example: he wrote about damaged >> data. Data >> ppf> were damaged by firmware _not_ disk surface! In such case

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread przemolicc
On Wed, Jun 28, 2006 at 03:30:28PM +0200, Robert Milkowski wrote: > ppf> What I wanted to point out is Al's example: he wrote about damaged > data. Data > ppf> were damaged by firmware _not_ disk surface! In such case ZFS doesn't > help. ZFS can > ppf> detect (and repair) errors on disk surf

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread przemolicc
On Wed, Jun 28, 2006 at 09:30:25AM -0400, Jeff Victor wrote: > [EMAIL PROTECTED] wrote: > >On Wed, Jun 28, 2006 at 02:23:32PM +0200, Robert Milkowski wrote: > > > >What I wanted to point out is Al's example: he wrote about damaged > >data. Data > >were damaged by firmware _not_ disk surface!

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread Philip Brown
Roch wrote: Philip Brown writes: > but there may not be filesystem space for double the data. > Sounds like there is a need for a zfs-defragment-file utility perhaps? > > Or if you want to be politically cagey about naming choice, perhaps, > > zfs-seq-read-optimize-file? :-) > P

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread Nagakiran
Depends on your definition of firmware. In higher-end arrays the data is checksummed when it comes in and a hash is written when it gets to disk. Of course this is nowhere near end-to-end, but it is better than nothing. ... and code is code. Easier to debug is a context-sensitive term.

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread Casper . Dik
>Depends on your definition of firmware. In higher-end arrays the data is >checksummed when it comes in and a hash is written when it gets to disk. >Of course this is nowhere near end-to-end, but it is better than nothing. The checksum is often stored with the data (so if the data is not writ

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread Bill Sommerfeld
On Wed, 2006-06-28 at 09:05, [EMAIL PROTECTED] wrote: > > But the point is that ZFS should also detect such errors and take > proper action. Other filesystems can't. > > Does it mean that ZFS can detect errors in ZFS's code itself? ;-) In many cases, yes. As a hypothetical: Consider a bug i

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread Torrey McMahon
Jeremy Teo wrote: Hello, What I wanted to point out is Al's example: he wrote about damaged data. Data were damaged by firmware _not_ disk surface! In such case ZFS doesn't help. ZFS can detect (and repair) errors on disk surface, bad cables, etc. But it cannot detect and repair errors in

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread Jeff Victor
[EMAIL PROTECTED] wrote: On Wed, Jun 28, 2006 at 02:23:32PM +0200, Robert Milkowski wrote: What I wanted to point out is Al's example: he wrote about damaged data. Data were damaged by firmware _not_ disk surface! In such case ZFS doesn't help. ZFS can detect (and repair) errors on disk s

Re[2]: [zfs-discuss] ZFS and Storage

2006-06-28 Thread Robert Milkowski
Hello przemolicc, Wednesday, June 28, 2006, 3:05:42 PM, you wrote: ppf> On Wed, Jun 28, 2006 at 02:23:32PM +0200, Robert Milkowski wrote: >> Hello przemolicc, >> >> Wednesday, June 28, 2006, 10:57:17 AM, you wrote: >> >> ppf> On Tue, Jun 27, 2006 at 04:16:13PM -0500, Al Hopper wrote: >> >> Case

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread Jeremy Teo
Hello, What I wanted to point out is Al's example: he wrote about damaged data. Data were damaged by firmware _not_ disk surface! In such case ZFS doesn't help. ZFS can detect (and repair) errors on disk surface, bad cables, etc. But it cannot detect and repair errors in its (ZFS) code. I

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread przemolicc
On Wed, Jun 28, 2006 at 02:23:32PM +0200, Robert Milkowski wrote: > Hello przemolicc, > > Wednesday, June 28, 2006, 10:57:17 AM, you wrote: > > ppf> On Tue, Jun 27, 2006 at 04:16:13PM -0500, Al Hopper wrote: > >> Case in point, there was a gentleman who posted on the Yahoo Groups solx86 > >> list

Re[2]: [zfs-discuss] ZFS and Storage

2006-06-28 Thread Robert Milkowski
Hello przemolicc, Wednesday, June 28, 2006, 10:57:17 AM, you wrote: ppf> On Tue, Jun 27, 2006 at 04:16:13PM -0500, Al Hopper wrote: >> Case in point, there was a gentleman who posted on the Yahoo Groups solx86 >> list and described how faulty firmware on a Hitachi HDS system damaged a >> bunch of

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread Al Hopper
On Wed, 28 Jun 2006 [EMAIL PROTECTED] wrote: > On Tue, Jun 27, 2006 at 04:16:13PM -0500, Al Hopper wrote: > > Case in point, there was a gentleman who posted on the Yahoo Groups solx86 > > list and described how faulty firmware on a Hitachi HDS system damaged a > > bunch of data. The HDS system mo

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread przemolicc
On Tue, Jun 27, 2006 at 04:16:13PM -0500, Al Hopper wrote: > Case in point, there was a gentleman who posted on the Yahoo Groups solx86 > list and described how faulty firmware on a Hitachi HDS system damaged a > bunch of data. The HDS system moves disk blocks around, between one disk > and another

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Torrey McMahon
Darren J Moffat wrote: Torrey McMahon wrote: Darren J Moffat wrote: So everything you are saying seems to suggest you think ZFS was a waste of engineering time since hardware raid solves all the problems? I don't believe it does but I'm no storage expert and maybe I've drunk too much cool a

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Al Hopper
On Tue, 27 Jun 2006 [EMAIL PROTECTED] wrote: > > >This is getting pretty picky. You're saying that ZFS will detect any > >errors introduced after ZFS has gotten the data. However, as stated > >in a previous post, that doesn't guarantee that the data given to ZFS > >wasn't already corrupted. > >

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Darren J Moffat
Torrey McMahon wrote: Darren J Moffat wrote: So everything you are saying seems to suggest you think ZFS was a waste of engineering time since hardware raid solves all the problems? I don't believe it does but I'm no storage expert and maybe I've drunk too much Kool-Aid. I'm a software person

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Darren J Moffat
Nicolas Williams wrote: On Tue, Jun 27, 2006 at 09:41:10AM -0600, Gregory Shaw wrote: This is getting pretty picky. You're saying that ZFS will detect any errors introduced after ZFS has gotten the data. However, as stated in a previous post, that doesn't guarantee that the data given to ZF

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Torrey McMahon
Jason Schroeder wrote: Torrey McMahon wrote: [EMAIL PROTECTED] wrote: I'll bet that ZFS will generate more calls about broken hardware and fingers will be pointed at ZFS at first because it's the new kid; it will be some time before people realize that the data was rotting all along. Ehh

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Jason Schroeder
Torrey McMahon wrote: [EMAIL PROTECTED] wrote: I'll bet that ZFS will generate more calls about broken hardware and fingers will be pointed at ZFS at first because it's the new kid; it will be some time before people realize that the data was rotting all along. Ehhh... I don't think so. M

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Dale Ghent
Torrey McMahon wrote: ZFS is great for the systems that can run it. However, any enterprise datacenter is going to be made up of many, many hosts running many, many OSes. In that world you're going to consolidate on large arrays and use the features of those arrays where they cover the most gro

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Torrey McMahon
[EMAIL PROTECTED] wrote: I'll bet that ZFS will generate more calls about broken hardware and fingers will be pointed at ZFS at first because it's the new kid; it will be some time before people realize that the data was rotting all along. Ehhh... I don't think so. Most of our customers have

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Torrey McMahon
Darren J Moffat wrote: So everything you are saying seems to suggest you think ZFS was a waste of engineering time since hardware raid solves all the problems? I don't believe it does but I'm no storage expert and maybe I've drunk too much Kool-Aid. I'm a software person and for me ZFS is bril

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Nicolas Williams
On Tue, Jun 27, 2006 at 09:41:10AM -0600, Gregory Shaw wrote: > This is getting pretty picky. You're saying that ZFS will detect any > errors introduced after ZFS has gotten the data. However, as stated > in a previous post, that doesn't guarantee that the data given to ZFS > wasn't already

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Eric Schrock
One of the key points here is that people seem focused on two types of errors: 1. Total drive failure 2. Bit rot Traditional RAID solves #1. Reed-Solomon ECC found in all modern drives solves #2 for all but the most extreme cases. The real problem is the rising complexity of fir

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Richard Elling
Gregory Shaw wrote: Not at all. ZFS is a quantum leap in Solaris filesystem/VM functionality. Agreed. However, I don't see a lot of use for RAID-Z (or Z2) in large enterprise customer situations. For instance, does ZFS enable Sun to walk into an account and say "You can now replace all o

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Casper . Dik
>This is getting pretty picky. You're saying that ZFS will detect any >errors introduced after ZFS has gotten the data. However, as stated >in a previous post, that doesn't guarantee that the data given to ZFS >wasn't already corrupted. But there's a big difference between the time ZFS ge

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Gregory Shaw
This is getting pretty picky. You're saying that ZFS will detect any errors introduced after ZFS has gotten the data. However, as stated in a previous post, that doesn't guarantee that the data given to ZFS wasn't already corrupted. If you don't trust your storage subsystem, you're going

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Gregory Shaw
Not at all. ZFS is a quantum leap in Solaris filesystem/VM functionality. However, I don't see a lot of use for RAID-Z (or Z2) in large enterprise customer situations. For instance, does ZFS enable Sun to walk into an account and say "You can now replace all of your high-end (EMC) dis

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Jeff Victor
Unfortunately, a storage-based RAID controller cannot detect errors which occurred between the filesystem layer and the RAID controller, in either direction - in or out. ZFS will detect them through its use of checksums. But ZFS can only fix them if it can access redundant bits. It can't tell
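
A minimal sketch of the self-healing setup Jeff describes, assuming two hypothetical LUNs c2t0d0 and c3t0d0 presented by two different arrays; with a ZFS mirror across them, a block that fails its checksum on one side can be repaired from the other:

    # mirror one LUN from each array so ZFS keeps redundant copies of every block
    zpool create tank mirror c2t0d0 c3t0d0
    # report any checksum errors found (and repaired from the good side) during reads
    zpool status -v tank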

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Torrey McMahon
Bart Smaalders wrote: Gregory Shaw wrote: On Tue, 2006-06-27 at 09:09 +1000, Nathan Kroenert wrote: How would ZFS self heal in this case? > You're using hardware raid. The hardware raid controller will rebuild the volume in the event of a single drive failure. You'd need to keep on top of

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Darren J Moffat
So everything you are saying seems to suggest you think ZFS was a waste of engineering time since hardware raid solves all the problems? I don't believe it does but I'm no storage expert and maybe I've drunk too much Kool-Aid. I'm a software person and for me ZFS is brilliant; it is so much eas

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Gregory Shaw
Most controllers support a background-scrub that will read a volume and repair any bad stripes. This addresses the bad block issue in most cases. It still doesn't help when a double-failure occurs. Luckily, that's very rare. Usually, in that case, you need to evacuate the volume and t
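
For comparison, the ZFS analogue of a controller's background scrub; a sketch assuming a hypothetical pool named tank:

    # walk every allocated block, verify its checksum, and repair it from redundancy where possible
    zpool scrub tank
    # watch scrub progress and any errors encountered
    zpool status tank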

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Roch
Mika Borner writes: > >given that zfs always does copy-on-write for any updates, it's not > clear > >why this would necessarily degrade performance.. > > Writing should be no problem, as it is serialized... but when both > database instances are reading a lot of different blocks at the sa

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Mika Borner
>given that zfs always does copy-on-write for any updates, it's not clear >why this would necessarily degrade performance.. Writing should be no problem, as it is serialized... but when both database instances are reading a lot of different blocks at the same time, the spindles might "heat up". >

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Bill Sommerfeld
On Tue, 2006-06-27 at 04:19, Mika Borner wrote: > I'm thinking about a ZFS version of this task. Requirements: the > production database should not suffer from performance degradation, > whilst running the clone in parallel. As ZFS does not clone all the > blocks, I wonder how much the production da
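
A sketch of the snapshot/clone approach under discussion, assuming a hypothetical dataset tank/proddb holding the production database files:

    # point-in-time snapshot of the production dataset (near-instant, no blocks copied)
    zfs snapshot tank/proddb@report
    # writable clone that shares all unmodified blocks with the snapshot
    zfs clone tank/proddb@report tank/reportdb
    # remove the clone once the second database instance is finished with it
    zfs destroy tank/reportdb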

Re[2]: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Robert Milkowski
Hello Mika, Tuesday, June 27, 2006, 10:19:05 AM, you wrote: >>but there may not be filesystem space for double the data. >>Sounds like there is a need for a zfs-defragment-file utility MB> perhaps? >>Or if you want to be politically cagey about naming choice, perhaps, >>zfs-seq-read-optimize-fil

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Roch
Philip Brown writes: > Roch wrote: > > And, if the load can accommodate a > > reorder, to get top per-spindle read-streaming performance, > > a cp(1) of the file should do wonders on the layout. > > > > but there may not be filesystem space for double the data. > Sounds like there i

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Mika Borner
>but there may not be filesystem space for double the data. >Sounds like there is a need for a zfs-defragment-file utility perhaps? >Or if you want to be politically cagey about naming choice, perhaps, >zfs-seq-read-optimize-file? :-) For data warehouse and streaming applications a "seq-read-om

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Mika Borner
>The vdev can handle dynamic LUN growth, but the underlying VTOC or >EFI label >may need to be zero'd and reapplied if you set up the initial vdev on >a slice. If >you introduced the entire disk to the pool you should be fine, but I >believe you'll >still need to offline/online the pool. Fin

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Richard Elling
Olaf Manczak wrote: Eric Schrock wrote: On Mon, Jun 26, 2006 at 05:26:24PM -0600, Gregory Shaw wrote: You're using hardware raid. The hardware raid controller will rebuild the volume in the event of a single drive failure. You'd need to keep on top of it, but that's a given in the case of eit

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Jonathan Edwards
-Does ZFS in the current version support LUN extension? With UFS, we have to zero the VTOC, and then adjust the new disk geometry. How does it look with ZFS? The vdev can handle dynamic LUN growth, but the underlying VTOC or EFI label may need to be zero'd and reapplied if you set up the initial
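
A sketch of the offline/online step mentioned above, assuming a hypothetical pool tank built on an array LUN that has just been grown; on builds of that era, getting ZFS to notice the larger device generally meant cycling the pool:

    # after the array has grown the LUN, re-import so ZFS rereads the device size
    zpool export tank
    zpool import tank
    # confirm the new capacity
    zpool list tank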

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Bart Smaalders
Gregory Shaw wrote: On Tue, 2006-06-27 at 09:09 +1000, Nathan Kroenert wrote: How would ZFS self heal in this case? > You're using hardware raid. The hardware raid controller will rebuild the volume in the event of a single drive failure. You'd need to keep on top of it, but that's a given

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Olaf Manczak
Eric Schrock wrote: On Mon, Jun 26, 2006 at 05:26:24PM -0600, Gregory Shaw wrote: You're using hardware raid. The hardware raid controller will rebuild the volume in the event of a single drive failure. You'd need to keep on top of it, but that's a given in the case of either hardware or softw

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Eric Schrock
On Mon, Jun 26, 2006 at 05:26:24PM -0600, Gregory Shaw wrote: > > You're using hardware raid. The hardware raid controller will rebuild > the volume in the event of a single drive failure. You'd need to keep > on top of it, but that's a given in the case of either hardware or > software raid. T

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Philip Brown
Roch wrote: And, if the load can accommodate a reorder, to get top per-spindle read-streaming performance, a cp(1) of the file should do wonders on the layout. but there may not be filesystem space for double the data. Sounds like there is a need for a zfs-defragment-file utility perhaps
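
A sketch of the cp(1) trick Roch describes, assuming a hypothetical file /tank/db/table.dbf and enough free space for a second copy; because the copy is freshly allocated, it tends to land on disk largely sequentially:

    # rewrite the fragmented file; the new copy is laid out mostly contiguously
    cp /tank/db/table.dbf /tank/db/table.dbf.new
    # swap the copies once the application is quiesced
    mv /tank/db/table.dbf.new /tank/db/table.dbf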

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Gregory Shaw
On Tue, 2006-06-27 at 09:09 +1000, Nathan Kroenert wrote: > On Tue, 2006-06-27 at 02:27, Gregory Shaw wrote: > > On Jun 26, 2006, at 1:15 AM, Mika Borner wrote: > > > > > What we need would be the feature to use JBODs. > > > > > > > If you've got hardware raid-5, why not just run regular (non-ra

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Nathan Kroenert
On Tue, 2006-06-27 at 02:27, Gregory Shaw wrote: > On Jun 26, 2006, at 1:15 AM, Mika Borner wrote: > > > What we need would be the feature to use JBODs. > > > > If you've got hardware raid-5, why not just run regular (non-raid) > pools on top of the raid-5? > > I wouldn't go back to JBOD. H
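
A sketch of the layout Gregory suggests, assuming two hypothetical RAID-5 LUNs c2t0d0 and c2t1d0 exported by the array; the pool carries no ZFS-level redundancy, so ZFS can detect corruption here but not repair it:

    # dynamically stripe across the array's RAID-5 LUNs; redundancy comes from the array itself
    zpool create tank c2t0d0 c2t1d0
    # checksum errors will still be reported, even though there is no second copy to heal from
    zpool status -v tank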

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Darren Dunham
> > -Does ZFS in the current version support LUN extension? With UFS, we > > have to zero the VTOC, and then adjust the new disk geometry. How does > > it look with ZFS? > > I don't understand what you're asking. What problem is solved by > zeroing the vtoc? When the underlying storage in

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Gregory Shaw
On Jun 26, 2006, at 1:15 AM, Mika Borner wrote: Hi Now that Solaris 10 06/06 is finally downloadable I have some questions about ZFS. -We have a big storage system supporting RAID5 and RAID1. At the moment, we only use RAID5 (for non-Solaris systems as well). We are thinking about using

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Roch
About: -I've read the threads about ZFS and databases. Still I'm not 100% convinced about read performance. Doesn't the fragmentation of the large database files (because of the concept of COW) impact read performance? I do need to get back to this thread. The way I am currently loo

[zfs-discuss] ZFS and Storage

2006-06-26 Thread Mika Borner
Hi Now that Solaris 10 06/06 is finally downloadable I have some questions about ZFS. -We have a big storage system supporting RAID5 and RAID1. At the moment, we only use RAID5 (for non-Solaris systems as well). We are thinking about using ZFS on those LUNs instead of UFS. As ZFS on hardware RAID5