[zfs-discuss] fmadm warnings about media errors

2010-07-17 Thread Bruno Sousa
Hi all, Today I noticed that one of the ZFS-based servers within my company is complaining about disk errors, but I would like to know whether this is a real physical error or something like a transport error. The server in question runs snv_134 attached to 2 J4400 JBODs, and the head-node ha

[zfs-discuss] Lost zpool after reboot

2010-07-17 Thread Amit Kulkarni
Hello, I have a dual boot with Windows 7 64-bit Enterprise Edition and OpenSolaris build 134. This is on a Sun Ultra 40 M1 workstation. Three hard drives: 2 in a ZFS mirror, 1 shared with Windows. For the last 2 days I was working in Windows. I didn't touch the hard drives in any way except I once open

Re: [zfs-discuss] fmadm warnings about media errors

2010-07-17 Thread Bob Friesenhahn
On Sat, 17 Jul 2010, Bruno Sousa wrote: Jul 15 12:30:48 storage01 SOURCE: eft, REV: 1.16 Jul 15 12:30:48 storage01 EVENT-ID: 859b9d9c-1214-4302-8089-b9447619a2a1 Jul 15 12:30:48 storage01 DESC: The command was terminated with a non-recovered error condition that may have been caused by a flaw in
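For anyone triaging a similar FMA event, a command sketch (snv_134-era Solaris tooling; the event UUID and device names are placeholders, not taken from this thread) for checking whether the fault is a media error or a transport error:

```
# List current faults and the resources they affect
fmadm faulty

# Dump the full error-report telemetry for one event by its UUID
fmdump -eV -u <event-uuid>

# Per-device soft/hard/transport error counters from the sd driver
iostat -En
```

A transport problem typically shows up as transport errors across devices in iostat -En, while a genuine media flaw shows media errors concentrated on a single drive.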

[zfs-discuss] disaster recovery process (replace disks)

2010-07-17 Thread Edward Ned Harvey
I believe I know enough to figure this out on my own, but there's usually some little "gotcha" that you don't think of until you hit it. I'm just betting that Cindy already has a procedure written for just this purpose. ;-) In general, if you've been good about backing up your rpool via "zfs s

Re: [zfs-discuss] fmadm warnings about media errors

2010-07-17 Thread Giovanni Tirloni
On Sat, Jul 17, 2010 at 10:49 AM, Bob Friesenhahn wrote: > On Sat, 17 Jul 2010, Bruno Sousa wrote: >> >> Jul 15 12:30:48 storage01 SOURCE: eft, REV: 1.16 >> Jul 15 12:30:48 storage01 EVENT-ID: 859b9d9c-1214-4302-8089-b9447619a2a1 >> Jul 15 12:30:48 storage01 DESC: The command was terminated with a

Re: [zfs-discuss] fmadm warnings about media errors

2010-07-17 Thread Bruno Sousa
On 17-7-2010 15:49, Bob Friesenhahn wrote: > On Sat, 17 Jul 2010, Bruno Sousa wrote: >> Jul 15 12:30:48 storage01 SOURCE: eft, REV: 1.16 >> Jul 15 12:30:48 storage01 EVENT-ID: 859b9d9c-1214-4302-8089-b9447619a2a1 >> Jul 15 12:30:48 storage01 DESC: The command was terminated with a >> non-recovered

Re: [zfs-discuss] Lost zpool after reboot

2010-07-17 Thread Giovanni Tirloni
On Sat, Jul 17, 2010 at 10:55 AM, Amit Kulkarni wrote: > I did a zpool status and it gave me zfs 8000-3C error, saying my pool is > unavailable. Since I am able to boot & access browser, I tried a zpool import > without arguments, with trying to export my pool, more fiddling. Now I can't > get
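A recovery sketch for this situation, using the standard zpool/zdb tools rather than anything specific from this thread (pool and device names are placeholders):

```
# Show pools visible for import (scans /dev/dsk by default)
zpool import

# Point the scan at a device directory explicitly if paths changed
zpool import -d /dev/dsk

# Read the ZFS labels directly off a suspect slice
zdb -l /dev/dsk/<disk>s0

# Once the devices are found, import the pool by name
zpool import -f <pool>
```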

Re: [zfs-discuss] disaster recovery process (replace disks)

2010-07-17 Thread Cindy Swearingen
Hi Ned, One of the benefits of using a mirrored ZFS configuration is just replacing each disk with a larger disk, in place, online, and so on... It's probably easiest to use zfs send -R (recursive) to do a recursive snapshot of your root pool. Check out the steps here: http://www.solarisinter
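The steps Cindy alludes to can be sketched roughly as follows (the backup pool name is a placeholder; the real procedure in the linked guide may differ in details):

```
# Take a recursive snapshot of the whole root pool
zfs snapshot -r rpool@backup

# Send the snapshot hierarchy as one replication stream
zfs send -R rpool@backup | zfs receive -Fdu backuppool/rpool
```

The -u on the receive side keeps the restored datasets from being mounted over the live system.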

Re: [zfs-discuss] Lost zpool after reboot

2010-07-17 Thread Amit Kulkarni
> > I did a zpool status and it gave me zfs 8000-3C error, > saying my pool is unavailable. Since I am able to boot & > access browser, I tried a zpool import without arguments, > with trying to export my pool, more fiddling. Now I can't > get zpool status to show my pool. > > >    vdev_path = /de

Re: [zfs-discuss] Lost zpool after reboot

2010-07-17 Thread Giovanni Tirloni
On Sat, Jul 17, 2010 at 3:07 PM, Amit Kulkarni wrote: > I don't know if the devices are renumbered. How do you know if the devices > are changed? > > Here is output of format, the middle one is the boot drive and selection 0 & > 2 are the ZFS mirrors > > AVAILABLE DISK SELECTIONS: >       0. c8t

[zfs-discuss] raidz capacity osol vs freebsd

2010-07-17 Thread marco
I'm seeing weird differences between 2 raidz pools: 1 created on a recent FreeBSD 9.0-CURRENT amd64 box containing the ZFS v15 bits, the other on an old osol build. The raidz pool on the FreeBSD box is created from 3 2 TB SATA drives. The raidz pool on the osol box was created in the past from 3 smalle

Re: [zfs-discuss] disaster recovery process (replace disks)

2010-07-17 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Cindy Swearingen > > Hi Ned, > > One of the benefits of using a mirrored ZFS configuration is just > replacing each disk with a larger disk, in place, online, and so on... Yes, the autoexpan
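The autoexpand behaviour discussed here can be sketched as (pool and device names are placeholders):

```
# Let vdevs grow automatically once every disk in the mirror is replaced
zpool set autoexpand=on <pool>

# Or expand a single already-replaced device on demand
zpool online -e <pool> <device>
```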

Re: [zfs-discuss] raidz capacity osol vs freebsd

2010-07-17 Thread Ian Collins
On 07/18/10 11:19 AM, marco wrote: I'm seeing weird differences between 2 raidz pools, 1 created on a recent FreeBSD 9.0-CURRENT amd64 box containing the ZFS v15 bits, the other on an old osol build. The raidz pool on the FreeBSD box is created from 3 2 TB SATA drives. The raidz pool on the osol box

Re: [zfs-discuss] raidz capacity osol vs freebsd

2010-07-17 Thread Logic
Ian Collins (i...@ianshome.com) wrote: > On 07/18/10 11:19 AM, marco wrote: >> *snip* >> >> > Yes, that is correct. zfs list reports usable space, which is 2 out of > the three drives (parity isn't confined to one device). > >> *snip* >> >> > Are you sure? That result looks odd. It is w
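Ian's point about usable space can be checked with a little arithmetic. A toy calculation (not from the thread; it ignores ZFS metadata and slop overhead) showing why 3 x 2 TB drives in raidz1 report roughly 2 drives' worth, and why decimal-TB drives look smaller in the binary TiB units zfs list uses:

```python
# Hypothetical helper, for illustration only: estimate usable raidz capacity.
def raidz_usable_bytes(n_disks, disk_bytes, parity=1):
    """Approximate usable space of a raidz vdev: total minus parity disks.

    Ignores metadata/slop overhead, so real `zfs list` output is a bit lower.
    """
    if n_disks <= parity:
        raise ValueError("need more disks than parity devices")
    return (n_disks - parity) * disk_bytes

TB = 1000 ** 4   # drive vendors use decimal terabytes
TIB = 1024 ** 4  # zfs list reports binary units

usable = raidz_usable_bytes(3, 2 * TB)  # 3 x 2 TB drives, raidz1
print(f"{usable / TB:.1f} TB = {usable / TIB:.2f} TiB")
```

The decimal/binary gap alone makes a nominal 4 TB of usable space show up as about 3.64T in zfs list, which can account for a good part of an apparent discrepancy between two systems' reports.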