That brings up another interesting idea.
ZFS currently uses a 128-bit checksum for blocks of up to 1048576 bits.
If 20-odd bits of that were a Hamming code, you'd have something slightly
stronger than SECDED, and ZFS could correct any single-bit errors encountered.
This could be done without ch
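(Toy sketch of the idea, not ZFS code and not the actual checksum layout:
a Hamming code over a block, shrunk here to 1024 data bits and 11 check
bits. It only does single-error correction; a real SECDED layout adds one
more overall parity bit so double-bit errors are flagged instead of
mis-corrected. The point is the syndrome trick: the syndrome of a damaged
codeword is the position of the flipped bit.)

/*
 * Toy Hamming-code sketch (assumed layout, not ZFS's): the syndrome of a
 * damaged codeword is the 1-based position of the flipped bit, so one
 * XOR pass locates the error and one flip repairs it.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define NBITS      1024              /* data bits in this toy example   */
#define PARITY     11                /* 2^11 = 2048 >= 1024 + 11 + 1    */
#define CODE_BITS  (NBITS + PARITY)  /* 1035 bits in the codeword       */

static int  getbit(const uint8_t *v, int i) { return (v[i >> 3] >> (i & 7)) & 1; }
static void flipbit(uint8_t *v, int i)      { v[i >> 3] ^= (uint8_t)(1 << (i & 7)); }
static int  is_pow2(int x)                  { return (x & (x - 1)) == 0; }

/* XOR of the 1-based positions of all set bits; 0 for a clean codeword. */
static unsigned syndrome(const uint8_t *code)
{
    unsigned s = 0;
    for (int pos = 1; pos <= CODE_BITS; pos++)
        if (getbit(code, pos - 1))
            s ^= (unsigned)pos;
    return s;
}

/* Data bits fill the non-power-of-two positions; the parity bits (at the
 * power-of-two positions) are then set so the codeword's syndrome is 0. */
static void encode(const uint8_t *data, uint8_t *code)
{
    memset(code, 0, (CODE_BITS + 7) / 8);
    for (int pos = 1, d = 0; pos <= CODE_BITS; pos++)
        if (!is_pow2(pos) && getbit(data, d++))
            flipbit(code, pos - 1);
    unsigned s = syndrome(code);
    for (int p = 1; p <= CODE_BITS; p <<= 1)
        if (s & (unsigned)p)
            flipbit(code, p - 1);
}

int main(void)
{
    uint8_t data[NBITS / 8], code[(CODE_BITS + 7) / 8];
    memset(data, 0xA5, sizeof data);

    encode(data, code);
    flipbit(code, 123);                 /* inject one silent bit flip      */

    unsigned s = syndrome(code);
    if (s != 0 && s <= CODE_BITS)
        flipbit(code, (int)s - 1);      /* the syndrome names the bad bit  */
    printf("syndrome was %u, after repair: %u\n", s, syndrome(code));
    return 0;
}

For a 1048576-bit block you need 21 such check bits (2^21 >= 1048576 + 21 + 1),
which is where the "20-odd bits" figure above comes from.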
Most of the ZFS recovery problems seem to stem from ZFS's own strict
insistence that data be exactly consistent with its corresponding checksum,
which of course is good when a consistent copy of the data can be
recovered from somewhere, but catastrophic otherwise; it seems clear that
ZFS must s
Chris Siebenmann wrote:
> I'm not Anton Rang, but:
> | How would you describe the difference between the data recovery
> | utility and ZFS's normal data recovery process?
>
> The data recovery utility should not panic my entire system if it runs
> into some situation that it utterly cannot handle
Claus Guttesen wrote:
>> | How would you describe the difference between the data recovery
>> | utility and ZFS's normal data recovery process?
>>
>> The data recovery utility should not panic my entire system if it runs
>> into some situation that it utterly cannot handle. Solaris 10 U5 kernel
>>
> | How would you describe the difference between the data recovery
> | utility and ZFS's normal data recovery process?
>
> The data recovery utility should not panic my entire system if it runs
> into some situation that it utterly cannot handle. Solaris 10 U5 kernel
> ZFS code does not have this
I'm not Anton Rang, but:
| How would you describe the difference between the data recovery
| utility and ZFS's normal data recovery process?
The data recovery utility should not panic my entire system if it runs
into some situation that it utterly cannot handle. Solaris 10 U5 kernel
ZFS code doe
On Mon, 11 Aug 2008, Martin Svensson wrote:
>
> Granted, the simple striped configuration is fast, and of course
> with no redundancy. But I don't understand how a mirrored
> configuration can perform as well when you sacrifice half of your
> disks for redundancy. Doesn't a mirror perform as one
Victor Latushkin wrote:
> Hi Tom and all,
>
> Tom Bird wrote:
>> Hi,
>>
>> Have a problem with a ZFS on a single device; the device is 48 1T SATA
>> drives presented as a 42T LUN via hardware RAID 6 on a SAS bus, which had
>> a ZFS on it as a single device.
>>
>> There was a problem with the SAS b
>
>Are there any known issues with having 32 bit OS clients using NFS to
>access a NFS server using a 64 bit OS exporting > 2TB filesystem? Are
>there any issues with using NFS v3 over NFS v4?
The problems are not about the size of the data; it's how 32-bit clients use
the data returned; the stic
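(A hypothetical illustration of that point, not taken from the thread: if
the client ends up holding the server's 64-bit block count in a 32-bit
field, the numbers for a large filesystem simply wrap. The 3 TB size and
512-byte fragment size below are made-up example values.)

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    uint64_t total_bytes = 3ULL << 40;            /* a 3 TB export            */
    uint64_t frsize      = 512;                   /* fragment size in bytes   */
    uint64_t blocks64    = total_bytes / frsize;  /* what the server reports  */
    uint32_t blocks32    = (uint32_t)blocks64;    /* a 32-bit client's field  */

    printf("server reports  %" PRIu64 " blocks\n", blocks64);
    printf("32-bit view is  %" PRIu32 " blocks = %.0f GB\n",
           blocks32, blocks32 * (double)frsize / (1ULL << 30));
    return 0;
}

Three TB of 512-byte fragments is about 6.4 billion blocks, which does not
fit in 32 bits; the truncated value corresponds to only 1 TB.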
Are there any known issues with having 32 bit OS clients using NFS to
access a NFS server using a 64 bit OS exporting > 2TB filesystem? Are
there any issues with using NFS v3 over NFS v4?
Thanks!
Weldon
Ok, I've now reported most of the problems I found, but have additional
information to add to bugs 6667199 and 667208. Can anybody tell me how I go
about reporting that to Sun?
thanks,
Ross
OK, so I tried installing 138053-02 and unmounting/unsharing for the
entire resilvering process.
Meanwhile, on-site support decided to replace the mainboard for some
reason (not that I was full of confidence here)
... and between us, it has actually been up for 2 hours and has a clean
"zpoo
Borys Saulyak wrote:
>> Your pools have no redundancy...
> Box is connected to two fabric switches via different HBAs, storage is RAID5,
> MPxIO is ON, and after all that my pools have no redundancy?!?!
Not redundancy that ZFS can see and use; all of that is just a single disk
as far as ZFS is concerned.
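(A toy model of that point, not ZFS code: self-healing needs a second copy
that ZFS itself manages. However much RAID5 and multipathing sit underneath,
a single LUN gives ZFS exactly one copy, so a checksum mismatch can only be
reported, never repaired. The COPIES constant is made up for the example;
set it to 2 to model a ZFS mirror.)

#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define COPIES 1          /* 1 = single LUN; change to 2 to model a mirror  */
#define BLKSZ  8

/* stand-in for ZFS's per-block checksum */
static uint32_t cksum(const uint8_t *b)
{
    uint32_t c = 0;
    for (int i = 0; i < BLKSZ; i++)
        c = c * 31 + b[i];
    return c;
}

int main(void)
{
    uint8_t copy[2][BLKSZ] = { {1,2,3,4,5,6,7,8}, {1,2,3,4,5,6,7,8} };
    uint32_t expected = cksum(copy[0]);   /* checksum stored in the parent  */

    copy[0][3] ^= 0xFF;                   /* silent corruption on one copy  */

    for (int i = 0; i < COPIES; i++)
        if (cksum(copy[i]) == expected) {
            printf("good copy on device %d; the bad one can be rewritten\n", i);
            return 0;
        }
    printf("checksum error and no second copy: it can only be reported\n");
    return 1;
}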
>>
> Your pools have no redundancy...
Box is connected to two fabric switches via different HBAs, storage is RAID5,
MPxIO is ON, and after all that my pools have no redundancy?!?!
> ...and got corrupted, therefore there is nothing ZFS
This is exactly what I would like to know. HOW this could happen
On 11 August, 2008 - Martin Svensson sent me these 0,9K bytes:
> I read this (http://blogs.sun.com/roch/entry/when_to_and_not_to) blog
> regarding when and when not to use raidz. There is an example of a plain
> striped configuration and a mirror configuration. (See below)
>
> M refers to a 2-w
Disk space may be lost to redundancy, but there are still two or more
devices in the mirror. Read requests can be spread across these.
--
Via iPhone 3G
On 11-Aug-08, at 11:07, Martin Svensson <[EMAIL PROTECTED]> wrote:
> I read this (http://blogs.sun.com/roch/entry/when_to_and_not_to)
> blog
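(On the read-spreading point in the reply above, a trivial sketch, not ZFS
internals: each read needs only one healthy side of the mirror, so
independent reads can be farmed out across both devices, which is why a
2-way mirror can keep up with a 2-disk stripe for reads. The round-robin
policy here is made up for the example; ZFS picks a side per I/O with a
smarter policy.)

#include <stdio.h>

#define SIDES 2   /* devices in the mirror */

int main(void)
{
    int issued[SIDES] = { 0 };

    /* Hand out 100 independent read requests across the mirror sides. */
    for (int req = 0; req < 100; req++)
        issued[req % SIDES]++;

    for (int s = 0; s < SIDES; s++)
        printf("mirror side %d serviced %d reads\n", s, issued[s]);
    return 0;
}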
Hi,
I'd like to share our positive experience on a POC.
We created a few iSCSI shares and mounted them on a Windows box. Then we took
a snapshot of one of them. As the next step we converted the snapshot into a
clone and tried to mount it on the same Windows server.
We all thought it would not work as d
On 7-08-2008 at 13:20, Borys Saulyak wrote:
> Hi,
>
> I have a problem with Solaris 10. I know that this forum is for
> OpenSolaris, but maybe someone will have an idea.
> My box is crashing on any attempt to import a zfs pool. The first crash
> happened on an export operation and since then I can
I read this (http://blogs.sun.com/roch/entry/when_to_and_not_to) blog regarding
when and when not to use raidz. There is an example of a plain striped
configuration and a mirror configuration. (See below)
M refers to a 2-way mirror and S to a simple dynamic stripe.
Config Blocks Available
On 10 August, 2008 - Martin Svensson sent me these 0,9K bytes:
> Hello! I'm new to ZFS and have some configuration questions.
>
> What's the difference, performance wise, in below configurations?
> * In the first configuration, can I lose 1 disk?
Yes.
> And, are the disks striped to gain perfo
Yup, and plenty of other interested spectators ;-)