[zfs-discuss] degraded pool will stop NFS service when there is no hot spare?

2011-06-11 Thread Fred Liu
Hi, we ran into this yesterday. The degraded pool was exported and I had to re-import it manually. Is that normal behaviour? I assume it should not be, but has anyone seen a similar case? Thanks. Fred
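
A rough sketch of the recovery path being described, with 'tank' standing in for the actual pool name (the poster did not give one):

    # a degraded-but-imported pool shows up here; an exported one does not
    zpool status -x

    # locate and re-import the pool so its datasets (and NFS shares) come back
    zpool import              # lists pools visible on the attached devices
    zpool import tank

    # sharenfs-managed shares should be republished automatically on import
    zfs get sharenfs tank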

Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-11 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Klimov > > See FEC suggestion from another poster ;) Well, of course, all storage media have built-in hardware FEC. At least disk & tape for sure. But naturally you can't always trust

Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-11 Thread David Magda
On Jun 11, 2011, at 08:46, Edward Ned Harvey wrote: > If you simply want to layer on some more FEC, there must be some standard > generic FEC utilities out there, right? > zfs send | fec > /dev/... > Of course this will inflate the size of the data stream somewhat, but > improves the relia

Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-11 Thread Edward Ned Harvey
> From: David Magda [mailto:dma...@ee.ryerson.ca] > Sent: Saturday, June 11, 2011 9:04 AM > > If one is saving streams to a disk, it may be worth creating parity files for them > (especially if the destination file system is not ZFS): Parity is just a really simple form of error detection. It's
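
A minimal sketch of the parity-file idea being quoted here, assuming the stream is written to a file and the Parchive (par2) tools are available on the receiving side; dataset and path names are placeholders:

    # save the replication stream to a file rather than piping it to zfs receive
    zfs send tank/fs@snap > /backup/fs.zsend

    # create parity blocks alongside it (about 10% redundancy in this example)
    par2 create -r10 /backup/fs.zsend

    # later: verify the stream and, if it has been damaged, repair it
    par2 verify /backup/fs.zsend.par2
    par2 repair /backup/fs.zsend.par2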

[zfs-discuss] What is my pool writing? :)

2011-06-11 Thread Jim Klimov
While looking over iostats from various programs, I see that my OS HDD is busy writing, about 2Mb/sec stream all the time (at least while the "dcpool" import/recovery attempts are underway, but also now during a mere zdb walk). According to "iostat" this load stands out greatly:
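
The kind of background write load being described can usually be narrowed to a pool and a device with the stock tools before reaching for DTrace:

    # per-device utilisation, 5-second samples, idle devices suppressed
    iostat -xnz 5

    # the same traffic broken down per pool and per vdev
    zpool iostat -v 5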

[zfs-discuss] Impact of L2ARC device failure and SSD recommendations

2011-06-11 Thread Edmund White
Posted in greater detail at Server Fault - http://serverfault.com/q/277966/13325 I have an HP ProLiant DL380 G7 system running NexentaStor. The server has 36GB RAM, 2 LSI 9211-8i SAS controllers (no SAS expanders), 2 SAS system drives, 12 SAS data drives, a hot-spare disk, an Intel X25-M L2ARC c

Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-11 Thread David Magda
On Jun 11, 2011, at 09:20, Edward Ned Harvey wrote: > Parity is just a really simple form of error detection. It's not very > useful for error correction. If you look into error correction codes, > you'll see there are many other codes which would be more useful for the > purposes of zfs send da

Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-11 Thread Jim Klimov
2011-06-11 17:20, Edward Ned Harvey writes: From: David Magda [mailto:dma...@ee.ryerson.ca] Sent: Saturday, June 11, 2011 9:04 AM If one is saving streams to a disk, it may be worth creating parity files for them (especially if the destination file system is not ZFS): Parity is just a really s

Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-11 Thread Edward Ned Harvey
> From: David Magda [mailto:dma...@ee.ryerson.ca] > Sent: Saturday, June 11, 2011 9:38 AM > > These parity files use a forward error correction-style system that can be > used to perform data verification, and allow recovery when data is lost or > corrupted. > > http://en.wikipedia.org/wiki/Parch

Re: [zfs-discuss] Impact of L2ARC device failure and SSD recommendations

2011-06-11 Thread Pasi Kärkkäinen
On Sat, Jun 11, 2011 at 08:35:19AM -0500, Edmund White wrote: >Posted in greater detail at Server Fault >- [1]http://serverfault.com/q/277966/13325 > >I have an HP ProLiant DL380 G7 system running NexentaStor. The server has >36GB RAM, 2 LSI 9211-8i SAS controllers (no SAS expander

Re: [zfs-discuss] What is my pool writing? :)

2011-06-11 Thread Jim Mauro
Does this reveal anything; dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ { @[execname,fds[arg0].fi_pathname]=count(); }' On Jun 11, 2011, at 9:32 AM, Jim Klimov wrote: > While looking over iostats from various programs, I see that > my OS HDD is busy writing, about 2Mb/sec stream
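
The same one-liner, unwrapped for readability; it counts write(2)/pwrite(2)-style syscalls that land on ZFS, keyed by process name and file path:

    dtrace -n 'syscall::*write:entry
    /fds[arg0].fi_fs == "zfs"/
    {
        @[execname, fds[arg0].fi_pathname] = count();
    }'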

Re: [zfs-discuss] Impact of L2ARC device failure and SSD recommendations

2011-06-11 Thread Edmund White
So, can this be fixed in firmware? How can I determine if the drive is actually bad? -- Edmund White ewwh...@mac.com On 6/11/11 10:15 AM, "Pasi Kärkkäinen" wrote: >On Sat, Jun 11, 2011 at 08:35:19AM -0500, Edmund White wrote: >>Posted in greater detail at Server Fault >>- [1]http://s
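
Not an answer to the firmware question, but some places to look when deciding whether the SSD itself is failing; the smartctl invocation assumes smartmontools is installed on the NexentaStor box, and the device path is only a placeholder:

    # per-device soft/hard/transport error counters since boot
    iostat -En

    # read/write/checksum error counters for the cache device
    zpool status -v

    # SMART health and media-wear attributes for the SSD (path is a guess)
    smartctl -a -d sat /dev/rdsk/c1t5d0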

Re: [zfs-discuss] Impact of L2ARC device failure and SSD recommendations

2011-06-11 Thread Jim Klimov
2011-06-11 19:15, Pasi Kärkkäinen writes: On Sat, Jun 11, 2011 at 08:35:19AM -0500, Edmund White wrote: I've had two incidents where performance tanked suddenly, leaving the VM guests and Nexenta SSH/Web consoles inaccessible and requiring a full reboot of the array to restore functio

Re: [zfs-discuss] What is my pool writing? :)

2011-06-11 Thread Jim Klimov
2011-06-11 19:16, Jim Mauro writes: Does this reveal anything; dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ { @[execname,fds[arg0].fi_pathname]=count(); }' Alas, not much. # time dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ { @[execname,fds[arg0].fi_pathname]=co

Re: [zfs-discuss] What is my pool writing? :)

2011-06-11 Thread Jim Mauro
Well we may have missed something, because that dtrace will only capture write(2) and pwrite(2) - whatever is generating the writes may be using another interface (writev(2) for example). What about taking it down a layer: dtrace -n 'fsinfo:::write /args[0]->fi_fs == "zfs"/ { @[execname,args[0]-
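
The preview cuts the command off; presumably it mirrors the syscall version but keys off the fsinfo arguments, along these lines:

    dtrace -n 'fsinfo:::write
    /args[0]->fi_fs == "zfs"/
    {
        @[execname, args[0]->fi_pathname] = count();
    }'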

Re: [zfs-discuss] What is my pool writing? :)

2011-06-11 Thread Jim Klimov
2011-06-11 20:34, Jim Klimov writes: time dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ { @[execname,fds[arg0].fi_pathname]=count(); }' This time I gave it more time, and used the system a bit - this dtrace works indeed, but there are still too few file accesses: # time dtrace -n

Re: [zfs-discuss] What is my pool writing? :)

2011-06-11 Thread Jim Klimov
2011-06-11 20:42, Jim Mauro writes: Well we may have missed something, because that dtrace will only capture write(2) and pwrite(2) - whatever is generating the writes may be using another interface (writev(2) for example). What about taking it down a layer: dtrace -n 'fsinfo:::write /args[0]->f

Re: [zfs-discuss] What is my pool writing? :)

2011-06-11 Thread Jim Mauro
Hmmm... so coming back around to the problem we're trying to solve - You have iostat data and "zpool iostat" data that shows a steady stream of writes to one or more of your zpools, correct? You wish to identify the source of those writes, correct? Try saving this as a file and running it, and p
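
The attached script does not survive in the preview. A plausible stand-in (a reconstruction, not the original) that does the same per-interval aggregation from the fsinfo layer might look like this:

    #!/usr/sbin/dtrace -s

    /* count ZFS writes by process and path, and sum the bytes written */
    fsinfo:::write
    /args[0]->fi_fs == "zfs"/
    {
        @writes[execname, args[0]->fi_pathname] = count();
        @bytes[execname, args[0]->fi_pathname] = sum(arg1);
    }

    /* dump and reset the counters every ten seconds */
    tick-10sec
    {
        printa(@writes);
        printa(@bytes);
        trunc(@writes);
        trunc(@bytes);
    }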

Re: [zfs-discuss] What is my pool writing? :)

2011-06-11 Thread Jim Mauro
This may be interesting also (still fumbling...); dtrace -n 'fbt:zfs:zio_write:entry, fbt:zfs:zio_rewrite:entry,fbt:zfs:zio_write_override:entry { @[probefunc,stack()] = count(); }' On Jun 11, 2011, at 1:00 PM, Jim Klimov wrote: > 2011-06-11 20:42, Jim Mauro пишет: >> Well we may have missed
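
Unwrapped, this traces entries into the ZFS ZIO write pipeline and keys the counts by kernel stack, which should point at whatever inside ZFS is generating the writes:

    dtrace -n 'fbt:zfs:zio_write:entry,
    fbt:zfs:zio_rewrite:entry,
    fbt:zfs:zio_write_override:entry
    {
        @[probefunc, stack()] = count();
    }'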

[zfs-discuss] Importing existing data

2011-06-11 Thread whitetr6
I have a home server running Solaris 11 Express that I want to move to OpenIndiana. There's nothing I need to retain on rpool, and my "datastore" pool has all of the data. My question is, can I install OpenIndiana oi_151 and be able to import the datastore zpool? When I tried from the live cd
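
In general this should just work if the pool version is one oi_151 understands; a rough checklist, assuming the pool is named 'datastore' as in the post:

    # on Solaris 11 Express, before the reinstall: note the version and export cleanly
    zpool get version datastore
    zpool export datastore

    # on the freshly installed OpenIndiana system:
    zpool import               # should list 'datastore' if the disks are visible
    zpool import datastore     # add -f if the pool was not cleanly exported

One caveat: if the pool has been upgraded under Solaris 11 Express to a pool version newer than the one OpenIndiana ships (the illumos-based releases of that era stop at version 28), the import will be refused, and there is no way to downgrade a pool.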

Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-11 Thread David Magda
On Jun 11, 2011, at 10:37, Edward Ned Harvey wrote: >> From: David Magda [mailto:dma...@ee.ryerson.ca] >> Sent: Saturday, June 11, 2011 9:38 AM >> >> These parity files use a forward error correction-style system that can be >> used to perform data verification, and allow recovery when data is lo