[EMAIL PROTECTED] wrote:
On Wed, Jun 28, 2006 at 09:30:25AM -0400, Jeff Victor wrote:
For example, if ZFS is mirroring a pool across two different storage arrays, a
firmware error in one of them will cause problems that ZFS will detect when
it tries to read the data. Further, ZFS would be able to repair the damaged
copy from the good copy on the other array.
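For anyone who wants to try the setup being described, here is a minimal
sketch; the device names are made up and would differ on a real system:

    # One LUN from each array, mirrored by ZFS rather than by the arrays
    zpool create tank mirror c2t0d0 c3t0d0
    # Force every block to be read and verified against its checksum;
    # a copy that fails the check is rewritten from the good side
    zpool scrub tank
    zpool status -v tank    # per-device read/write/checksum error counters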
Depends on your definition of firmware. In higher-end arrays the data
is checksummed when it comes in and a hash is written when it gets to
disk. Of course this is nowhere near end to end, but it is better than
nothing.
... and code is code. Easier to debug is a context-sensitive term.
>Depends on your definition of firmware. In higher-end arrays the data is
>checksummed when it comes in and a hash is written when it gets to disk.
>Of course this is nowhere near end to end, but it is better than nothing.
The checksum is often stored with the data (so if the data is not written
at all, the old block and its old checksum still agree, and the error goes
undetected)
On Wed, 2006-06-28 at 09:05, [EMAIL PROTECTED] wrote:
> > But the point is that ZFS should also detect such errors and take
> > proper action. Other filesystems can't.
>
> Does it mean that ZFS can detect errors in ZFS's code itself ? ;-)
In many cases, yes.
As a hypothetical: Consider a bug i
Hello,
What I wanted to point out is Al's example: he wrote about damaged data.
The data were damaged by firmware, _not_ by the disk surface! In such a case
ZFS doesn't help. ZFS can detect (and repair) errors on the disk surface, bad
cables, etc., but it cannot detect and repair errors in its own (ZFS) code.
On Tue, Jun 27, 2006 at 04:16:13PM -0500, Al Hopper wrote:
> Case in point, there was a gentleman who posted on the Yahoo Groups solx86
> list and described how faulty firmware on a Hitachi HDS system damaged a
> bunch of data. The HDS system moves disk blocks around, between one disk
> and another
Torrey McMahon wrote:
ZFS is great for the systems that can run it. However, any enterprise
datacenter is going to be made up of many, many hosts running many
different OSes. In that world you're going to consolidate on large arrays
and use the features of those arrays where they cover the most ground
[EMAIL PROTECTED] wrote:
I'll bet that ZFS will generate more calls about broken hardware
and fingers will be pointed at ZFS at first because it's the new
kid; it will be some time before people realize that the data was
rotting all along.
Ehhh... I don't think so. Most of our customers have
One of the key points here is that people seem focused on two types of
errors:
1. Total drive failure
2. Bit rot
Traditional RAID solves #1. Reed-Solomon ECC found in all modern drives
solves #2 for all but the most extreme cases.
The real problem is the rising complexity of firmware.
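To illustrate that split, a minimal sketch (whole disks assumed, device
names made up): traditional redundancy covers the whole-drive failure,
while the per-block checksums catch corruption no matter which layer
introduced it.

    # Single parity protects against a whole-disk failure (error type #1)
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
    # The block checksums catch silent corruption (type #2 and
    # firmware-induced damage alike) whenever data is read or scrubbed
    zpool scrub tank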
Gregory Shaw wrote:
Not at all. ZFS is a quantum leap in Solaris filesystem/VM functionality.
Agreed.
However, I don't see a lot of use for RAID-Z (or Z2) in large
enterprise customers' situations. For instance, does ZFS enable Sun to
walk into an account and say "You can now replace all o
>This is getting pretty picky. You're saying that ZFS will detect any
>errors introduced after ZFS has gotten the data. However, as stated
>in a previous post, that doesn't guarantee that the data given to ZFS
>wasn't already corrupted.
But there's a big difference between the time ZFS ge
This is getting pretty picky. You're saying that ZFS will detect any
errors introduced after ZFS has gotten the data. However, as stated
in a previous post, that doesn't guarantee that the data given to ZFS
wasn't already corrupted.
If you don't trust your storage subsystem, you're going
Not at all. ZFS is a quantum leap in Solaris filesystem/VM
functionality.
However, I don't see a lot of use for RAID-Z (or Z2) in large
enterprise customers' situations. For instance, does ZFS enable Sun
to walk into an account and say "You can now replace all of your high-
end (EMC) dis
Unfortunately, a storage-based RAID controller cannot detect errors which occurred
between the filesystem layer and the RAID controller, in either direction - in or
out. ZFS will detect them through its use of checksums.
But ZFS can only fix them if it can access redundant bits. It can't tell
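A short sketch of the difference (pool and device names are made up): on a
single hardware-RAID LUN, ZFS detects checksum errors but has nothing to
repair from; give it a mirror and it holds the redundant bits itself.

    # Pool on one hardware-RAID LUN: corruption is detected, not repaired
    zpool create data c4t0d0
    # Mirrored across two LUNs: a bad copy is rewritten from the good one
    zpool create data2 mirror c4t0d0 c5t0d0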
So everything you are saying seems to suggest you think ZFS was a waste
of engineering time since hardware raid solves all the problems?
I don't believe it does, but I'm no storage expert and maybe I've drunk
too much Kool-Aid. I'm a software person and for me ZFS is brilliant; it
is so much eas
Most controllers support a background-scrub that will read a volume
and repair any bad stripes. This addresses the bad block issue in
most cases.
It still doesn't help when a double-failure occurs. Luckily, that's
very rare. Usually, in that case, you need to evacuate the volume
and t
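ZFS offers the same kind of background verification from the host side; a
sketch, assuming a pool named tank:

    # Kick off a scrub of the whole pool; it runs in the background
    zpool scrub tank
    # Check progress and any repairs made
    zpool status tank
    # A root crontab entry to scrub weekly, e.g. Sundays at 02:00:
    # 0 2 * * 0 /usr/sbin/zpool scrub tank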
>given that zfs always does copy-on-write for any updates, it's not clear
>why this would necessarily degrade performance..
Writing should be no problem, as it is serialized... but when both
database instances are reading a lot of different blocks at the same
time, the spindles might "heat up".
On Tue, 2006-06-27 at 04:19, Mika Borner wrote:
> I'm thinking about a ZFS version of this task. Requirements: the
> production database should not suffer from performance degradation,
> whilst running the clone in parallel. As ZFS does not clone all the
> blocks, I wonder how much the production da
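A sketch of how the clone side of this could look with ZFS (dataset names
are made up); the clone shares unmodified blocks with the snapshot, so only
blocks that diverge consume extra space:

    # Snapshot the filesystem holding the production database files
    zfs snapshot tank/proddb@report
    # Create a writable clone for the second database instance
    zfs clone tank/proddb@report tank/reportdb
    # When finished, destroy the clone, then the snapshot
    zfs destroy tank/reportdb
    zfs destroy tank/proddb@report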
>but there may not be filesystem space for double the data.
>Sounds like there is a need for a zfs-defragment-file utility perhaps?
>Or if you want to be politically cagey about naming choice, perhaps,
>zfs-seq-read-optimize-file ? :-)
For data warehouse and streaming applications a "seq-read-om
>The vdev can handle dynamic LUN growth, but the underlying VTOC or EFI label
>may need to be zero'd and reapplied if you setup the initial vdev on a slice.
>If you introduced the entire disk to the pool you should be fine, but I
>believe you'll still need to offline/online the pool.
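If the offline/online cycle mentioned above is what's needed, a sketch of
the pool-level equivalent might look like this (pool and device names made
up; whether the disk must be relabelled first depends on whether the vdev
was built on a slice or on the whole disk):

    # After the array has grown the LUN:
    zpool export tank
    # If the vdev sits on a slice, relabel the disk first, e.g. with format(1M)
    zpool import tank
    zpool list tank    # check whether the extra capacity is visible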
Gregory Shaw wrote:
On Tue, 2006-06-27 at 09:09 +1000, Nathan Kroenert wrote:
How would ZFS self heal in this case?
You're using hardware raid. The hardware raid controller will rebuild
the volume in the event of a single drive failure. You'd need to keep
on top of it, but that's a given in the case of either hardware or
software raid.
Roch wrote:
And, if the load can accommodate a reorder, to get top per-spindle
read-streaming performance, a cp(1) of the file should do wonders on
the layout.
but there may not be filesystem space for double the data.
Sounds like there is a need for a zfs-defragment-file utility perhaps?
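In the absence of such a utility, the manual version of Roch's suggestion is
just a copy that rewrites the file's blocks with a single sequential writer;
a sketch (paths made up, and the application must be quiet while it runs):

    # Rewrite the file so its blocks are laid out contiguously
    cp /tank/db/datafile /tank/db/datafile.new
    # Swap the copy into place; same filesystem, so mv is an atomic rename
    mv /tank/db/datafile.new /tank/db/datafile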
On Tue, 2006-06-27 at 02:27, Gregory Shaw wrote:
> On Jun 26, 2006, at 1:15 AM, Mika Borner wrote:
>
> > What we need would be the feature to use JBODs.
> >
>
> If you've got hardware raid-5, why not just run regular (non-raid)
> pools on top of the raid-5?
>
> I wouldn't go back to JBOD. H
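For what it's worth, a sketch of that layout, where each device handed to
ZFS is a RAID-5 LUN exported by the array (names made up):

    # ZFS stripes across the two array LUNs; redundancy is left to the array,
    # so ZFS can detect corruption here but not repair it
    zpool create tank c6t0d0 c6t1d0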
> > -Does ZFS in the current version support LUN extension? With UFS, we
> > have to zero the VTOC, and then adjust the new disk geometry. How does
> > it look with ZFS?
>
> I don't understand what you're asking. What problem is solved by
> zeroing the VTOC?
When the underlying storage in
About:
-I've read the threads about zfs and databases. Still I'm not 100%
convinced about read performance. Doesn't the fragmentation of the
large database files (because of the concept of COW) impact
read-performance?
I do need to get back to this thread. The way I am currently
loo
Hi
Now that Solaris 10 06/06 is finally downloadable I have some questions
about ZFS.
-We have a big storage system supporting RAID5 and RAID1. At the moment,
we only use RAID5 (for non-Solaris systems as well). We are thinking
about using ZFS on those LUNs instead of UFS. As ZFS on Hardware RAID5