Well, thanks to your program, I could recover the data on the detached disk. Now I'm
copying the data to other disks and resilvering it inside the pool.
Warm words aren't enough to express how I feel. This community is great. Thank
you very much.
bbr
Because this system was in production I had to recover fairly quickly, so I was
unable to play with it much more; we had to destroy it, recreate a new pool, and
then restore the data from tapes.
It's a mystery why it rebooted in the middle of the night; we could not
figure this out, nor why the pool h
> Oh, and here's the source code, for the curious:
The forensics project will be all over this, I hope, and wrap it up in a
nice command line tool.
-mg
> Hi...
>
> Here's my system:
>
> 2 Intel 3 GHz 5160 dual-core CPUs
> 0 SATA 750 GB disks running as a ZFS RAIDZ2 pool
> 8 GB Memory
> SunOS 5.11 snv_79a on a separate UFS mirror
> ZFS pool version 10
> No separate ZIL or ARC cache
> ran into a problem today where the ZFS pool ja
I have moved this saga to storage-discuss now, as this doesn't appear to be a
ZFS issue, and it can be found here:
http://www.opensolaris.org/jive/thread.jspa?threadID=59201
On Sun, May 4, 2008 at 11:42 AM, Jeff Bonwick <[EMAIL PROTECTED]> wrote:
> Oh, and here's the source code, for the curious:
>
[snipped]
>
> label_write(fd, offsetof(vdev_label_t, vl_uberblock),
>     1ULL << UBERBLOCK_SHIFT, ub);
>
> label_write(fd, offsetof(vdev_label_t,
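
For anyone reading along without the ZFS headers handy, here is a rough sketch of
the layout those offsetof() calls rely on. The struct below is a stand-in, not the
real vdev_label_t; the field names, the region sizes, and the 1 KB uberblock slot
size are assumptions about the usual 256 KB label layout:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define SKETCH_UBERBLOCK_SHIFT	10	/* assumed: 1 KB uberblock slots */

/* stand-in for vdev_label_t: a fixed 256 KB on-disk structure */
typedef struct sketch_vdev_label {
	char	vl_pad1[8 * 1024];		/* blank space             */
	char	vl_pad2[8 * 1024];		/* reserved area           */
	char	vl_vdev_phys[112 * 1024];	/* packed config nvlist    */
	char	vl_uberblock[128 * 1024];	/* ring of uberblock slots */
} sketch_vdev_label_t;

int
main(void)
{
	/* offsetof() is what lets label_write() aim each region at the
	 * right byte position inside the label */
	(void) printf("vl_vdev_phys at %zu, vl_uberblock at %zu, slot = %llu bytes\n",
	    offsetof(sketch_vdev_label_t, vl_vdev_phys),
	    offsetof(sketch_vdev_label_t, vl_uberblock),
	    1ULL << SKETCH_UBERBLOCK_SHIFT);
	return (0);
}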
> Well, the 3510 is even supported as a JBOD by Sun. The only
> limitation is that you can use only one FC link.
>
> I have tried both the 3510 and 3511 as JBODs - the 3510 works
> fine; with the 3511 I had some problems under higher load.
>
> --
> Best regards,
> Robert
>
Hi Robert,
I saw in your post that you had p
Jeff Bonwick wrote:
>> Looking at the txg numbers, it's clear that labels on the devices that
>> are unavailable now may be stale:
>
> Actually, they look OK. The txg values in the label indicate the
> last txg in which the pool configuration changed for devices in that
> top-level vdev (e.g. mirr
Oh, and here's the source code, for the curious:
/* (the archive stripped the angle-bracketed header names; this include list
 *  is a best-guess reconstruction) */
#include <devid.h>
#include <dirent.h>
#include <errno.h>
#include <libintl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>
#include <fcntl.h>
#include <stddef.h>
#include <sys/vdev_impl.h>
/*
 * Write a label block with a ZBT checksum.
 */
static void
label_write(int fd, uint64_t offset, uint64_t size, void *buf)
{
	z
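
Since the archive cuts the function body off above, here is a hedged sketch of the
general shape of a write like this: each label region ends in an embedded checksum
trailer keyed by its on-disk offset, which gets refreshed before the region is
written back with pwrite(). The trailer struct and the trivial checksum below are
stand-ins, not the real ZBT or the ZFS label checksum that the actual tool computes
via libzpool:

#include <assert.h>
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

/* stand-in trailer: real label blocks end in a ZBT holding a zio_cksum_t */
typedef struct sketch_block_tail {
	uint64_t	bt_offset;	/* where this block lives on disk */
	uint64_t	bt_cksum[4];	/* checksum over the block        */
} sketch_block_tail_t;

/* stand-in checksum: a trivial byte sum, NOT the real ZFS label checksum */
static void
sketch_checksum(const void *buf, uint64_t size, uint64_t cksum[4])
{
	const uint8_t *p = buf;
	uint64_t sum = 0;

	for (uint64_t i = 0; i < size; i++)
		sum += p[i];
	cksum[0] = sum;
	cksum[1] = cksum[2] = cksum[3] = 0;
}

/* same general shape as label_write(): stamp the trailer, then pwrite() */
static void
sketch_label_write(int fd, uint64_t offset, uint64_t size, void *buf)
{
	sketch_block_tail_t *bt =
	    (sketch_block_tail_t *)((char *)buf + size) - 1;
	ssize_t n;

	bt->bt_offset = offset;			/* checksum is offset-keyed */
	memset(bt->bt_cksum, 0, sizeof (bt->bt_cksum));
	sketch_checksum(buf, size, bt->bt_cksum);

	n = pwrite(fd, buf, size, offset);
	assert(n == (ssize_t)size);
}

int
main(void)
{
	/* exercise it against a scratch file, not a real disk */
	static char buf[128 * 1024];
	int fd = open("/tmp/label-sketch", O_RDWR | O_CREAT, 0600);

	assert(fd != -1);
	sketch_label_write(fd, 16 * 1024, sizeof (buf), buf);
	(void) close(fd);
	return (0);
}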
> Looking at the txg numbers, it's clear that labels on the devices that
> are unavailable now may be stale:
Actually, they look OK. The txg values in the label indicate the
last txg in which the pool configuration changed for devices in that
top-level vdev (e.g. mirror or raid-z group), not the l
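
To look at the txg values being discussed here, one can dump the config nvlist
straight out of label 0 (zdb -l does the same thing more conveniently). Below is
a hedged sketch that assumes the packed nvlist region starts 16 KB into the 256 KB
label and is 112 KB long, and that libnvpair is available; those offsets are
assumptions rather than values taken from the headers:

#include <fcntl.h>
#include <libnvpair.h>
#include <stdio.h>
#include <unistd.h>

#define SKETCH_PHYS_OFFSET	(16 * 1024)	/* assumed start of vl_vdev_phys */
#define SKETCH_PHYS_SIZE	(112 * 1024)	/* assumed size of that region   */

int
main(int argc, char **argv)
{
	static char buf[SKETCH_PHYS_SIZE];
	nvlist_t *config;
	uint64_t txg = 0;
	int fd;

	if (argc != 2) {
		(void) fprintf(stderr, "usage: %s <device>\n", argv[0]);
		return (1);
	}
	if ((fd = open(argv[1], O_RDONLY)) == -1 ||
	    pread(fd, buf, sizeof (buf), SKETCH_PHYS_OFFSET) !=
	    (ssize_t)sizeof (buf)) {
		perror("read label 0");
		return (1);
	}
	/* the config nvlist is XDR-packed at the start of the region */
	if (nvlist_unpack(buf, sizeof (buf), &config, 0) != 0) {
		(void) fprintf(stderr, "label 0: no parsable config\n");
		return (1);
	}
	(void) nvlist_lookup_uint64(config, "txg", &txg);
	(void) printf("label 0 txg = %llu\n", (unsigned long long)txg);
	nvlist_free(config);
	(void) close(fd);
	return (0);
}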
It's OK that you're missing labels 2 and 3 -- there are four copies
precisely so that you can afford to lose a few. Labels 2 and 3
are at the end of the disk. The fact that only they are missing
makes me wonder if someone resized the LUNs. Growing them would
be OK, but shrinking them would indee
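
To make the geometry concrete, here is a hedged sketch of where the four labels
sit, assuming the usual 256 KB label size with two copies at the front of the
device and two at the back; the rounding of the usable size is an assumption as
well. It shows why growing a LUN is harmless while shrinking it puts labels 2 and
3 past the end of the device:

#include <stdint.h>
#include <stdio.h>

#define SKETCH_LABEL_SIZE	(256ULL * 1024)	/* assumed 256 KB per label */

static void
print_label_offsets(const char *what, uint64_t devsize)
{
	/* assume the usable size is rounded down to whole label-sized blocks */
	uint64_t asize = (devsize / SKETCH_LABEL_SIZE) * SKETCH_LABEL_SIZE;

	(void) printf("%s (%llu bytes):\n", what, (unsigned long long)devsize);
	(void) printf("  L0 at %llu, L1 at %llu (front of device)\n",
	    0ULL, (unsigned long long)SKETCH_LABEL_SIZE);
	(void) printf("  L2 at %llu, L3 at %llu (back of device)\n",
	    (unsigned long long)(asize - 2 * SKETCH_LABEL_SIZE),
	    (unsigned long long)(asize - SKETCH_LABEL_SIZE));
}

int
main(void)
{
	uint64_t lun = 200ULL * 1024 * 1024 * 1024;	/* example LUN size */

	print_label_offsets("original LUN", lun);
	/*
	 * After shrinking, the old L2/L3 locations lie past the end of the
	 * device, and the locations checked on the smaller device (printed
	 * below) hold no label, so only the two front labels are found.
	 */
	print_label_offsets("shrunken LUN", lun - 1024ULL * 1024 * 1024);
	return (0);
}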
OK, here you go. I've successfully recovered a pool from a detached
device using the attached binary. You can verify its integrity
against the following MD5 hash:
# md5sum labelfix
ab4f33d99fdb48d9d20ee62b49f11e20 labelfix
It takes just one argument -- the disk to repair:
# ./labelfix /dev/rd
Hi List,
First of all: S10u4 120011-14
So I have a weird situation. Earlier this week, I finally mirrored up
two iSCSI-based pools. I had been wanting to do this for some time,
because the availability of the data in these pools is important. One
pool mirrored just fine, but the other po