I've been running ZFS against EMC Clariion CX-600s and CX-500s in various
configurations, mostly exported-disk setups, and have hit a number of kernel
flatlining incidents. Most of these crashes show Page83 data errors in
/var/adm/messages.
As we're outgrowing the speed
i've got a little zpool with a naughty raidz vdev that won't take a
replacement disk that, as far as i can tell, should be adequate.
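for reference, what i'm trying is just the standard replace; a rough
sketch with made-up pool/device names (the new disk has to be at least
as large as the one it replaces):

  zpool replace tank c1t3d0 c4t3d0   # pool/device names are hypothetical
  zpool status tank                  # watch the resilver afterwards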
a history: this could well be some bizarro edge case, as the pool doesn't
have the cleanest lineage. initial creation happened on NexentaCP inside
vmware in linux. i had gi
I finally managed to get a small zfs root onto a 1 GB disk... with /usr, /var,
and /export/home on a secondary pool.
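For the curious, a rough sketch of the layout, with made-up pool and
device names rather than my actual ones (depending on the build you may
need legacy mountpoints plus /etc/vfstab entries so /usr and /var get
mounted early enough):

  # secondary pool holding everything except the root dataset itself
  zpool create datapool c1t1d0
  zfs create -o mountpoint=/usr datapool/usr
  zfs create -o mountpoint=/var datapool/var
  zfs create -o mountpoint=/export/home datapool/home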
If you follow the zfs boot manual install instructions or use the
'zfs-actual-root-install.sh' script, make sure of the following:
1/ Do not create your zfs boot root right after your
Please can you provide the source code for your test app?
I would like to see if I can reproduce this 'crash'.
Thanks
Nigel
I haven't heard from any other core contributors, but this sounds like a
worthy project to me. Someone from the ZFS team should follow through
to create the project on os.org [1].
It sounds like Domingos and Roland might constitute the initial
"project team".
- Eric
[1]
http://www.opensola
Vincent Fox wrote:
> So the problem in the zfs send/receive thing, is what if your network
> glitches out during the transfers?
zfs doesn't know. It depends on how the pipe tolerates breakage.
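To illustrate (dataset and host names below are made up), a typical
transfer is just:

  zfs snapshot tank/data@today
  zfs send tank/data@today | ssh otherhost zfs receive -F backup/data

If the ssh connection dies mid-stream, the receive aborts and the
partial stream is thrown away, so the whole snapshot has to be re-sent.
One workaround is to send into a file, copy that across with something
restartable, and feed it to zfs receive on the far side.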
> We have these once a day due to some as-yet-undiagnosed switch problem, a
> chop-out of 50 seconds
Would the bootloader have issues here? On x86 I would imagine you would
have to reload grub; would a similar thing need to be done on SPARC?
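If I understand it right, on x86 that would be the usual installgrub
step, something like (disk name below is only an example):

  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0

and on SPARC presumably the equivalent installboot run against the zfs
bootblk, though I'm not sure of the exact invocation.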
Ivan Wang wrote:
>> Erik Trimble wrote:
>> After both drives are replaced, you will automatically see the
>> additional space.
>
> I believe currently after the last replace an import/export sequence
> is needed to force zfs to see the increased size.
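That import/export dance would be something like this (pool name is
just an example):

  zpool export tank
  zpool import tank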
What if the root fs is also in this pool? Will there be any
I've got the box on eval and am just putting it through its paces. Ideally I
would be replicating to another x4500, but I don't have another one and didn't
want to use 22 disks for another pool.
I've finally gotten around to upgrading my ZFS file server to build 72
and I must say I'm impressed with the performance boost provided by the
new nVidia SATA driver.
I had noticed a drop in bonnie++ performance once the pool started to
fill up, but with the new driver it is better than the origin
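For anyone who wants to compare numbers, my runs are just plain
bonnie++ against a scratch directory on the pool, roughly (path and
size below are examples, not my exact settings):

  bonnie++ -d /tank/bench -s 8192 -u nobody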
I'm not an expert, but in /etc/system there is a rootdev entry.
E.g. the Solaris Volume Manager (SVM) uses this to mount the /dev/md/dsk/*
device at boot.
IMHO this is equivalent to Linux pivot-root.
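If I remember right, the lines metaroot puts into /etc/system look
roughly like this (the metadevice number depends on your setup):

  * Begin MDD root info (do not edit)
  rootdev:/pseudo/md@0:0,0,blk
  * End MDD root info (do not edit)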
When you put the root FS on ZFS, beware that all the libraries needed are
Rahul Mehta wrote:
> Has there been any solution to the problem discussed above in ZFS version 8?
We expect it to be fixed within a month. See:
http://opensolaris.org/os/community/arc/caselog/2007/555/
--matt