Re: [zfs-discuss] Resilver w/o errors vs. scrub with errors

2013-01-20 Thread Stephan Budach
On 20.01.13 16:51, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Stephan Budach I am always experiencing chksum errors while scrubbing my zpool(s), but I never experienced

Re: [zfs-discuss] Resilver w/o errors vs. scrub with errors

2013-01-20 Thread Stephan Budach
On 21.01.13 00:21, Jim Klimov wrote: Did you try replacing the patch-cables and/or SFPs on the path between servers and disks, or at least cleaning them? A speck of dust (or, God forbid, a pixel of body fat from a fingerprint) caught between the two optic cable cutoffs might cause any kind of

[zfs-discuss] zpool errors without fmdump or dmesg errors

2013-01-19 Thread Stephan Budach
happens, there must be something in dmesg or fmdump, but there is nothing at all shown by fmdump, and dmesg also showed nothing, which I'd regard as a reason. Has anybody seen something like this before? Thanks -- Stephan Budach Jung von Matt/it-services GmbH Glashüttenstraße 79 20357 Hamburg

[zfs-discuss] Resilver w/o errors vs. scrub with errors

2013-01-19 Thread Stephan Budach
Hi, I am always experiencing chksum errors while scrubbing my zpool(s), but I never experienced chksum errors while resilvering. Does anybody know why that would be? This happens on all of my servers, Sun Fire 4170M2, Dell PE 650 and on any FC storage that I have. Currently I had a major

Re: [zfs-discuss] Resilver w/o errors vs. scrub with errors

2013-01-19 Thread Stephan Budach
On 19.01.13 18:17, Bob Friesenhahn wrote: On Sat, 19 Jan 2013, Stephan Budach wrote: Now, this zpool is made of 3-way mirrors and currently 13 out of 15 vdevs are resilvering (which they had gone through yesterday as well) and I never got any error while resilvering. I have been all over

Re: [zfs-discuss] Resilver w/o errors vs. scrub with errors

2013-01-19 Thread Stephan Budach
On 19.01.13 20:18, Bob Friesenhahn wrote: On Sat, 19 Jan 2013, Stephan Budach wrote: Just ignore the timestamp, as it seems that the time is not set correctly, but the dates match my two issues from today and Thursday, which accounts for three days. I didn't catch that before

[zfs-discuss] Data corruption but no faulted drive/vdev

2012-09-20 Thread Stephan Budach
anymore, I could easily remove it. The second error, however, disturbs me - it doesn't seem to point to a file or directory. Thanks, budy -- Stephan Budach Jung von Matt/it-services GmbH Glashüttenstraße 79 20357 Hamburg Tel: +49 40-4321-1353 Fax: +49 40-4321-1114 E-Mail: stephan.bud...@jvm.de

Re: [zfs-discuss] Spare drive inherited cksum errors?

2012-05-29 Thread Stephan Budach
Hi Richard, On 29.05.12 06:54, Richard Elling wrote: On May 28, 2012, at 9:21 PM, Stephan Budach wrote: Hi all, just to wrap this issue up: as FMA didn't report any other error than the one which led to the degradation of the one mirror, I detached the original drive from the zpool

Re: [zfs-discuss] Spare drive inherited cksum errors?

2012-05-29 Thread Stephan Budach
Thanks, Cindy On 05/28/12 22:21, Stephan Budach wrote: Hi all, just to wrap this issue up: as FMA didn't report any other error than the one which led to the degradation of the one mirror, I detached the original drive from the zpool which flagged the mirror vdev as ONLINE (although there was still

Re: [zfs-discuss] Spare drive inherited cksum errors?

2012-05-28 Thread Stephan Budach
On 28.05.12 00:35, Richard Elling wrote: On May 27, 2012, at 12:52 PM, Stephan Budach wrote: Hi, today I issued a scrub on one of my zpools and after some time I noticed that one of the vdevs became degraded due to some drive having cksum errors. The spare kicked in and the drive got

Re: [zfs-discuss] Spare drive inherited cksum errors?

2012-05-28 Thread Stephan Budach
Hi all, just to wrap this issue up: as FMA didn't report any other error than the one which led to the degradation of the one mirror, I detached the original drive from the zpool which flagged the mirror vdev as ONLINE (although there was still a cksum error count of 23 on the spare drive).
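For readers skimming the archive, the wrap-up described above boils down to a short zpool sequence. The following is only a hedged sketch with placeholder names ("tank" for the pool, c0t1d0 for the original drive); the poster's actual pool and device names are not shown here:

    zpool status -v tank      # confirm only the one mirror carries cksum errors
    zpool detach tank c0t1d0  # remove the original drive; the spare becomes a permanent member
    zpool clear tank          # reset the error counters, including the count left on the former spare
    zpool scrub tank          # re-verify the whole pool afterwards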

[zfs-discuss] Spare drive inherited cksum errors?

2012-05-27 Thread Stephan Budach
procedure to continue? Would one now first run another scrub and detach the degraded drive afterwards, or detach the degraded drive immediately and run a scrub afterwards? Thanks, budy -- Stephan Budach Jung von Matt/it-services GmbH Glashüttenstraße 79 20357 Hamburg Tel: +49 40-4321-1353 Fax: +49

Re: [zfs-discuss] Hard Drive Choice Question

2012-05-17 Thread Stephan Budach
On 16.05.12 16:53, Paul Kraus wrote: I have a small server at home (HP Proliant Micro N36) that I use for file, DNS, DHCP, etc. services. I currently have a zpool of four mirrored 1 TB Seagate ES2 SATA drives. Well, it was a zpool of four until last night when one of the drives died. ZFS

Re: [zfs-discuss] kernel panic during zfs import [UPDATE]

2012-04-17 Thread Stephan Budach
Hi Carsten, On 17.04.12 17:40, Carsten John wrote: Hello everybody, just to let you know what happened in the meantime: I was able to open a Service Request at Oracle. The issue is a known bug (Bug 6742788: assertion panic at: zfs:zap_deref_leaf). The bug has been fixed (according to

Re: [zfs-discuss] Drive upgrades

2012-04-13 Thread Stephan Budach
On 13.04.12 19:22, Tim Cook wrote: On Fri, Apr 13, 2012 at 11:46 AM, Freddie Cash fjwc...@gmail.com wrote: On Fri, Apr 13, 2012 at 9:30 AM, Tim Cook t...@cook.ms wrote: You will however have an issue replacing them if one should

Re: [zfs-discuss] kernel panic during zfs import [ORACLE should notice this]

2012-03-30 Thread Stephan Budach
On 30.03.12 21:45, John D Groenveld wrote: In message 4f735451.2020...@oracle.com, Deepak Honnalli writes: Thanks for your reply. I would love to take a look at the core file. If there is a way this can somehow be transferred to the internal cores server, I can work on the bug.

[zfs-discuss] Issues with Areca 1680

2011-12-08 Thread Stephan Budach
Hi all, I have a server that is built on top of an Asus board which is equipped with an Areca 1680 HBA. Since ZFS likes raw disks, I changed its mode from RAID to JBOD in the firmware and rebooted the host. Now, I do have 16 drives in the chassis and the listing looks like this: root@vsm01:~#
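Once an HBA is switched to JBOD/pass-through mode like this, the newly exposed drives can be enumerated with stock Solaris tools. This is a generic sketch, not the poster's actual listing from vsm01:

    format < /dev/null   # prints every disk the OS currently sees (c#t#d# names) and exits
    cfgadm -al           # shows the attachment points, handy after a firmware mode change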

[zfs-discuss] SOLVED: Issues with Areca 1680

2011-12-08 Thread Stephan Budach
On 08.12.11 18:14, Stephan Budach wrote: Hi all, I have a server that is built on top of an Asus board which is equipped with an Areca 1680 HBA. Since ZFS likes raw disks, I changed its mode from RAID to JBOD in the firmware and rebooted the host. Now, I do have 16 drives in the chassis

Re: [zfs-discuss] After update to S11, zfs reports some disks as 'corrupted data'

2011-11-21 Thread Stephan Budach
Phew… seems that the S11 update process replaced my modified qlc.conf with a standard one. In SE11 I had to lower the queue depth by setting max_execution_throttle to something lower than 16, mostly since I am exposing 16 LUNs from each storage and thus the qlc driver flooded the storage

[zfs-discuss] After update to S11, zfs reports some disks as 'corrupted data'

2011-11-19 Thread Stephan Budach
Hi all, I am in the process of updating my SE11 servers to S11. On one server I am having two zpools made out of mirror vdevs and up to now none of these zpools have shown any error. However, after updating to S11 three disks - and unfortunately two of the same mirror vdev - are shown as

Re: [zfs-discuss] latest zpool version in solaris 11 express

2011-07-21 Thread Stephan Budach
On 20.07.11 18:31, Brandon High wrote: On Mon, Jul 18, 2011 at 6:21 AM, Edward Ned Harvey opensolarisisdeadlongliveopensola...@nedharvey.com wrote: Kidding aside, for anyone finding this thread at a later time, here's the answer. It sounds unnecessarily complex at first, but then I went

Re: [zfs-discuss] Move rpool from external hd to internal hd

2011-06-30 Thread Stephan Budach
On 30.06.11 04:44, Erik Trimble wrote: On 6/29/2011 12:51 AM, Stephan Budach wrote: Hi, what are the steps necessary to move the OS rpool from an external USB drive to an internal drive? I thought about adding the internal hd as a mirror to the rpool and then detaching the USB drive, but I

[zfs-discuss] Move rpool from external hd to internal hd

2011-06-29 Thread Stephan Budach
Hi, what are the steps necessary to move the OS rpool from an external USB drive to an internal drive? I thought about adding the internal hd as a mirror to the rpool and then detaching the USB drive, but I am unsure if I'll have to mess with Grub as well. Cheers, budy -- Stephan Budach
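The mirror-and-detach approach asked about here is usually done along these lines. A hedged sketch with placeholder device names (c1t0d0s0 = internal disk slice, c3t0d0s0 = the current USB boot disk); on x86 builds of that era the boot blocks also had to be installed by hand:

    zpool attach rpool c3t0d0s0 c1t0d0s0   # mirror rpool onto the internal disk
    zpool status rpool                     # wait for the resilver to finish
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0   # put GRUB on the new disk
    zpool detach rpool c3t0d0s0            # drop the USB drive once booting from the internal disk works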

Re: [zfs-discuss] Disk replacement need to scan full pool ?

2011-06-14 Thread Stephan Budach
On 14.06.11 15:12, Rasmus Fauske wrote: On 14.06.2011 14:06, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Rasmus Fauske I want to replace some slow consumer drives with new edc re4 ones but when I do a replace

Re: [zfs-discuss] No write coalescing after upgrade to Solaris 11 Express

2011-04-28 Thread Stephan Budach
Sync was disabled on the main pool and then left to inherit to everything else. The reason for disabling this in the first place was to fix bad NFS write performance (even with a ZIL on an X25e SSD it was under 1MB/s). I've also tried setting the logbias to throughput and latency but they both

Re: [zfs-discuss] No write coalescing after upgrade to Solaris 11 Express

2011-04-28 Thread Stephan Budach
On 28.04.11 11:51, Markus Kovero wrote: failed: space_map_load(sm, zfs_metaslab_ops, SM_FREE, smo, spa->spa_meta_objset) == 0, file ../zdb.c, line 571, function dump_metaslab Is this something I should worry about? uname -a SunOS E55000 5.11 oi_148 i86pc i386 i86pc Solaris I thought we were

Re: [zfs-discuss] No write coalescing after upgrade to Solaris 11 Express

2011-04-28 Thread Stephan Budach
On 28.04.11 15:16, Victor Latushkin wrote: On Apr 28, 2011, at 5:04 PM, Stephan Budach wrote: On 28.04.11 11:51, Markus Kovero wrote: failed: space_map_load(sm, zfs_metaslab_ops, SM_FREE, smo, spa->spa_meta_objset) == 0, file ../zdb.c, line 571, function dump_metaslab Is this something I

Re: [zfs-discuss] How to rename rpool. Is that recommended ?

2011-04-08 Thread Stephan Budach
by simply issuing: zpool import oldpool newpool -- Stephan Budach Jung von Matt/it-services GmbH Glashüttenstraße 79 20357 Hamburg Tel: +49 40-4321-1353 Fax: +49 40-4321-1114 E-Mail: stephan.bud...@jvm.de Internet: http://www.jvm.com Geschäftsführer: Ulrich Pallas, Frank Wilhelm AG HH HRB 98380
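Spelled out, the rename is an export followed by an import under the new name; "oldpool"/"newpool" are the placeholders from the post, and for an active root pool this has to be done from alternate boot media rather than from the running system:

    zpool export oldpool
    zpool import oldpool newpool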

Re: [zfs-discuss] ZFS send/recv initial data load

2011-02-16 Thread Stephan Budach
On 16.02.11 16:38, white...@gmail.com wrote: Hi, I have a very limited amount of bandwidth between main office and a colocated rack of servers in a managed datacenter. My hope is to be able to zfs send/recv small incremental changes on a nightly basis as a secondary offsite backup strategy.

Re: [zfs-discuss] ZFS slows down over a couple of days

2011-01-13 Thread Stephan Budach
Hi all, thanks a lot for your suggestions. I have checked all of them and neither the network itself nor any other check indicated any problem. Alas, I think I know what is going on… ehh… my current zpool has two vdevs that are actually not evenly sized, as shown by zpool iostat -v: zpool

[zfs-discuss] zpool scalability and performance

2011-01-13 Thread Stephan Budach
Hi, the ZFS_Best_Practises_Guide states this: Keep vdevs belonging to one zpool of similar sizes; Otherwise, as the pool fills up, new allocations will be forced to favor larger vdevs over smaller ones and this will cause subsequent reads to come from a subset of underlying devices leading

Re: [zfs-discuss] zfs send tape autoloaders?

2011-01-13 Thread Stephan Budach
On 13.01.11 15:00, David Strom wrote: Moving to a new SAN, both LUNs will not be accessible at the same time. Thanks for the several replies I've received, sounds like the dd to tape mechanism is broken for zfs send, unless someone knows otherwise or has some trick? I'm just going to try

[zfs-discuss] ZFS slows down over a couple of days

2011-01-12 Thread Stephan Budach
Hi all, I have exchanged my Dell R610 in favor of a Sun Fire 4170 M2 which has 32 GB RAM installed. I am running Sol11Expr on this host and I use it to primarily serve Netatalk AFP shares. From day one, I have noticed that the amount of free RAM decreased and along with that decrease the

Re: [zfs-discuss] ZFS slows down over a couple of days

2011-01-12 Thread Stephan Budach
of information is the ZFS evil tuning guide (just Google those words), which has a wealth of information. I hope that helps (for a start at least) Jeff On 01/12/11 08:21 AM, Stephan Budach wrote: Hi all, I have exchanged my Dell R610 in favor of a Sun Fire 4170 M2 which has 32 GB RAM installed. I am

Re: [zfs-discuss] ZFS slows down over a couple of days

2011-01-12 Thread Stephan Budach
think that arc_summary.pl showed exactly that... Cheers, budy -- Stephan Budach Jung von Matt/it-services GmbH Glashüttenstraße 79 20357 Hamburg Tel: +49 40-4321-1353 Fax: +49 40-4321-1114 E-Mail: stephan.bud...@jvm.de Internet: http://www.jvm.com Geschäftsführer: Ulrich Pallas, Frank Wilhelm AG HH

Re: [zfs-discuss] A few questions

2011-01-08 Thread Stephan Budach
On 08.01.11 18:33, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Garrett D'Amore When you purchase NexentaStor from a top-tier Nexenta Hardware Partner, you get a product that has been through a rigorous

Re: [zfs-discuss] Running on Dell hardware?

2011-01-03 Thread Stephan Budach
On 22.12.10 18:47, Lasse Osterild wrote: I've just noticed that Dell has a 6.0.1 firmware upgrade available, at least for my R610's they do (they are about 3 months old). Oddly enough it doesn't show up on support.dell.com when I search using my servicecode, but if I check through System

Re: [zfs-discuss] Disks are unavailable

2011-01-03 Thread Stephan Budach
On 31.12.10 06:06, Jeff Ruetten wrote: I am using VirtualBox and accessing three entire 2 TB raw disks in a Windows 7 Ultimate host. One day, the guest (Nexenta) was stopped and when I restarted it all three disks are showing as unavailable. Is there any way to recover from this? I would

Re: [zfs-discuss] Running on Dell hardware?

2011-01-03 Thread Stephan Budach
On 03.01.11 19:41, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Stephan Budach Well, a couple of weeks before Christmas, I enabled the onboard bcom NICs on my R610 again, to use them as IPMI ports - I didn't even

Re: [zfs-discuss] SAS/short stroking vs. SSDs for ZIL

2011-01-02 Thread Stephan Budach
On 02.01.11 16:52, Edward Ned Harvey wrote: From: Frank Lahm [mailto:frankl...@googlemail.com] Don't all of those concerns disappear in the event of a reboot? If you stop AFP, you could completely obliterate the BDB database, and restart AFP, and functionally continue from where you left

[zfs-discuss] SAS/short stroking vs. SSDs for ZIL

2010-12-23 Thread Stephan Budach
of such 15k SAS drives would be a good-enough fit for a ZIL on a zpool that is mainly used for file services via AFP and SMB. I'd particularly like to know if someone has already used such a solution and how it has worked out. Cheers, budy -- Stephan Budach Jung von Matt/it-services GmbH

Re: [zfs-discuss] SAS/short stroking vs. SSDs for ZIL

2010-12-23 Thread Stephan Budach
On 23.12.10 12:18, Phil Harman wrote: Sent from my iPhone (which has a lousy user interface which makes it all too easy for a clumsy oaf like me to touch Send before I'm done)... On 23 Dec 2010, at 11:07, Phil Harman phil.har...@gmail.com wrote: Great

Re: [zfs-discuss] SAS/short stroking vs. SSDs for ZIL

2010-12-23 Thread Stephan Budach
On 23.12.10 13:09, Phil Harman wrote: On 23 Dec 2010, at 11:53, Stephan Budach stephan.bud...@jvm.de wrote: On 23.12.10 12:18, Phil Harman wrote: Sent from my iPhone (which has a lousy user interface which makes it all too easy for a clumsy oaf like me

Re: [zfs-discuss] SAS/short stroking vs. SSDs for ZIL

2010-12-23 Thread Stephan Budach
On 23.12.10 19:05, Eric D. Mudama wrote: On Thu, Dec 23 at 11:25, Stephan Budach wrote: Hi, as I have learned from the discussion about which SSD to use as ZIL drives, I stumbled across this article, that discusses short stroking for increasing IOPS on SAS and SATA drives: [1

[zfs-discuss] Looking for 3.5 SSD for ZIL

2010-12-22 Thread Stephan Budach
Hello all, I am shopping around for 3.5" SSDs that I can mount into my storage and use as ZIL drives. As of yet, I have only found 3.5" models with the SandForce 1200, which was not recommended on this list. Does anyone maybe know of a model that has the SandForce 1500 and is 3.5"? Or any other

Re: [zfs-discuss] Looking for 3.5 SSD for ZIL

2010-12-22 Thread Stephan Budach
On 22.12.10 12:41, Pasi Kärkkäinen wrote: On Wed, Dec 22, 2010 at 11:36:48AM +0100, Stephan Budach wrote: Hello all, I am shopping around for 3.5" SSDs that I can mount into my storage and use as ZIL drives. As of yet, I have only found 3.5" models with the SandForce 1200

Re: [zfs-discuss] copy complete zpool via zfs send/recv

2010-12-18 Thread Stephan Budach
On 18.12.10 05:44, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Stephan Budach Now, I want to use zfs send -R t...@movetank | zfs recv targetTank/... which would place all zfs fs one level down below targetTank

Re: [zfs-discuss] copy complete zpool via zfs send/recv

2010-12-18 Thread Stephan Budach
On 18.12.10 15:14, Edward Ned Harvey wrote: From: Stephan Budach [mailto:stephan.bud...@jvm.de] Ehh... well, you answered it, sort of. ;) I think I simply didn't dare to overwrite the root zfs on the destination zpool with -F, but of course you're right that this is the way to go. What

[zfs-discuss] copy complete zpool via zfs send/recv

2010-12-17 Thread Stephan Budach
Hi, I want to move all the ZFS fs from one pool to another, but I don't want to gain an extra level in the folder structure on the target pool. On the source zpool I used zfs snapshot -r t...@movetank on the root fs and I got a new snapshot in all sub fs, as expected. Now, I want to use zfs
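A hedged sketch of the whole-pool copy being discussed; "srcpool" stands in for the elided source pool name, "targetTank" is the destination named in the follow-ups, and the -d/-F receive flags reflect the resolution reached later in the thread (receiving into the target pool's root dataset so no extra level is created):

    zfs snapshot -r srcpool@movetank
    zfs send -R srcpool@movetank | zfs recv -Fd targetTank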

Re: [zfs-discuss] What performance to expect from mirror vdevs?

2010-12-14 Thread Stephan Budach
On 14.12.10 07:43, Stephan Budach wrote: On 14.12.2010 at 03:30, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Mon, 13 Dec 2010, Stephan Budach wrote: My current run of bonnie is of course not that satisfactory and I wanted to ask you, if it's safe to turn on at least the drive

Re: [zfs-discuss] What performance to expect from mirror vdevs?

2010-12-13 Thread Stephan Budach
Bob, Ian… thanks for your input. It may be that the fw on the raid really got overloaded and that may have had to do with the way the GUI works. I am now testing the same configuration on another host, where I can risk some lockups when running bonnie++. I am able to set some options on the drive

Re: [zfs-discuss] What performance to expect from mirror vdevs?

2010-12-13 Thread Stephan Budach
On 14.12.2010 at 03:30, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Mon, 13 Dec 2010, Stephan Budach wrote: My current run of bonnie is of course not that satisfactory and I wanted to ask you, if it's safe to turn on at least the drive level options, namely the write cache

Re: [zfs-discuss] Running on Dell hardware?

2010-12-11 Thread Stephan Budach
On 10.12.10 19:13, Edward Ned Harvey wrote: From: Edward Ned Harvey [mailto:sh...@nedharvey.com] It has been over 3 weeks now, with no crashes, and me doing everything I can to get it to crash again. So I'm going to call this one resolved... All I did was disable the built-in Broadcom

[zfs-discuss] What performance to expect from mirror vdevs?

2010-12-11 Thread Stephan Budach
Hi, on Friday I received two of my new FC RAIDs that I intended to use as my new zpool devices. These devices are from CiDesign and their type/model is iR16FC4ER. These are FC RAIDs that also allow JBOD operation, which is what I chose. So I configured 16 raid groups on each system and

[zfs-discuss] check zfs?

2010-11-12 Thread Stephan Budach
Hi, I am having a corrupted dataset that caused a kernel panic upon importing/mounting the zpool/dataset. (see this thread http://opensolaris.org/jive/thread.jspa?threadID=135269&tstart=0) Now, I do have a number of snapshots on this dataset and I am wondering if there's a way to check if a

Re: [zfs-discuss] zpool import panics

2010-11-11 Thread Stephan Budach
guess, since I booted off the live CD I don't have any core dumps at hand. Maybe on the other host, which also has the same issue with another pool, where a core dump should have been written somewhere, although I seem unable to find it anywhere. -- Stephan Budach Jung von Matt/it-services GmbH

Re: [zfs-discuss] zpool import panics

2010-11-11 Thread Stephan Budach
not a Kernel hacker… Thanks -- Stephan Budach Jung von Matt/it-services GmbH Glashüttenstraße 79 20357 Hamburg Tel: +49 40-4321-1353 Fax: +49 40-4321-1114 E-Mail: stephan.bud...@jvm.de Internet: http://www.jvm.com Geschäftsführer: Ulrich Pallas, Frank Wilhelm AG HH HRB 98380

Re: [zfs-discuss] zpool import panics

2010-11-11 Thread Stephan Budach
/dump, offset 65536, content: kernel Thank you -- Stephan Budach Jung von Matt/it-services GmbH Glashüttenstraße 79 20357 Hamburg Tel: +49 40-4321-1353 Fax: +49 40-4321-1114 E-Mail: stephan.bud...@jvm.de Internet: http://www.jvm.com Geschäftsführer: Ulrich Pallas, Frank Wilhelm AG HH HRB 98380

Re: [zfs-discuss] zpool import panics

2010-11-11 Thread Stephan Budach
(-o ro) and set the file system's readonly parameter (zfs set readonly=on fs). Dave On 11/11/10 09:37, Stephan Budach wrote: David, thanks so much (and of course to all other helpful souls here as well) for providing such great guidance! Here we go: On 11.11.10 16:17,

[zfs-discuss] Good write, but slow read speeds over the network

2010-10-28 Thread Stephan Budach
issue? Cheers, budy -- Stephan Budach Jung von Matt/it-services GmbH Glashüttenstraße 79 20357 Hamburg Tel: +49 40-4321-1353 Fax: +49 40-4321-1114 E-Mail: stephan.bud...@jvm.de Internet: http://www.jvm.com Geschäftsführer: Ulrich Pallas, Frank Wilhelm AG HH HRB 98380

Re: [zfs-discuss] Running on Dell hardware?

2010-10-25 Thread Stephan Budach
On 25.10.10 21:06, Ian Collins wrote: On 10/26/10 01:38 AM, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Ian Collins Sun hardware? Then you get all your support from one vendor. +1 Sun hardware costs more,

Re: [zfs-discuss] Running on Dell hardware?

2010-10-24 Thread Stephan Budach
On 24.10.10 16:29, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Stephan Budach I actually have three Dell R610 boxes running OSol snv134 and since I switched from the internal Broadcom NICs to Intel ones, I

Re: [zfs-discuss] When `zpool status' reports bad news

2010-10-24 Thread Stephan Budach
/Vorhang_Innen.eps 0x3b2:0x1bca86 0x3b2:0x1bba92 0x3b2:0x1bbeba I believe that the lower ones were files in snapshots that have been deleted, but why are they still referenced like this? budy -- Stephan Budach Jung von Matt/it-services GmbH Glashüttenstraße 79 20357 Hamburg Tel: +49 40-4321-1353 Fax

Re: [zfs-discuss] When `zpool status' reports bad news

2010-10-24 Thread Stephan Budach
On 25.10.10 01:48, Bob Friesenhahn wrote: On Sun, 24 Oct 2010, Stephan Budach wrote: I believe that the lower ones were files in snapshots that have been deleted, but why are they still referenced like this? Have you used 'zpool clear' to clear the errors in the pool? Bob Yes I did

Re: [zfs-discuss] Running on Dell hardware?

2010-10-23 Thread Stephan Budach
I actually have three Dell R610 boxes running OSol snv134 and since I switched from the internal Broadcom NICs to Intel ones, I didn't have any issue with them. budy ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

Re: [zfs-discuss] Shared LUN's and ZFS

2010-10-22 Thread Stephan Budach
Hi Tony, On 22.10.10 14:07, Tony MacDoodle wrote: Is it possible to have a shared LUN between 2 servers using zfs? The server can see both LUNs but when I do an import I get: bash-3.00# zpool import pool: logs id: 3700399958960377217 state: ONLINE status: The pool was last accessed by
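The status line quoted above is the usual symptom of two hosts touching the same pool: ZFS is not a cluster file system, so a pool may be imported on only one server at a time. A hedged sketch of the hand-over, using the pool name "logs" from the output:

    serverA# zpool export logs     # release the pool on the host that last used it
    serverB# zpool import logs     # then import it on the other host
    serverB# zpool import -f logs  # force only if serverA is down and cannot export cleanly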

[zfs-discuss] Getting rid of RAID6 LUNs in a pool

2010-10-21 Thread Stephan Budach
appropriate vdevs at hand to upgrade the two existing vdevs to mirrors, so I could pull out the RAID6 LUNs. Does anyone know another approach? Cheers, budy -- Stephan Budach Jung von Matt/it-services GmbH Glashüttenstraße 79 20357 Hamburg Tel: +49 40-4321-1353 Fax: +49 40-4321-1114 E-Mail: stephan.bud

Re: [zfs-discuss] Getting rid of RAID6 LUNs in a pool

2010-10-21 Thread Stephan Budach
Oops... answering myself here... ;) On 21.10.10 14:08, Stephan Budach wrote: Hi, my current pool looks like this: config: NAME STATE READ WRITE CKSUM obelixData ONLINE 0 0 0 c4t21D023038FA8d0 ONLINE 0

Re: [zfs-discuss] Finding corrupted files

2010-10-20 Thread Stephan Budach
On 19.10.2010 at 22:36, Tuomas Leikola tuomas.leik...@gmail.com wrote: On Mon, Oct 18, 2010 at 4:55 PM, Edward Ned Harvey sh...@nedharvey.com wrote: Thank you, but, the original question was whether a scrub would identify just corrupt blocks, or if it would be able to map corrupt blocks to

Re: [zfs-discuss] Finding corrupted files

2010-10-20 Thread Stephan Budach
From: Stephan Budach [mailto:stephan.bud...@jvm.de] Just in case this wasn't already clear. After scrub sees read or checksum errors, zpool status -v will list filenames that are affected. At least in my experience. -- - Tuomas That didn't do it for me. I used scrub and afterwards

Re: [zfs-discuss] how to replace failed vdev on non redundant pool?

2010-10-20 Thread Stephan Budach
. Even if I forced the command with -f. If this were a raidz pool, would the zpool replace command even work? Yes, this would work in a raidz pool, since you have a redundancy of 1, so one device may go offline before the vdev fails. -- Stephan Budach Jung von Matt/it-services GmbH Glashüttenstraße

Re: [zfs-discuss] Finding corrupted files

2010-10-20 Thread Stephan Budach
On 20.10.10 15:11, Edward Ned Harvey wrote: From: Stephan Budach [mailto:stephan.bud...@jvm.de] Although, I have to say that I do have exactly 3 files that are corrupt in each snapshot until I finally deleted them and restored them from their original source. zfs send will abort when trying

Re: [zfs-discuss] Finding corrupted files

2010-10-15 Thread Stephan Budach
of single drives then, especially when you want to go with zpool raid-1. Cheers, budy -- Stephan Budach Jung von Matt/it-services GmbH Glashüttenstraße 79 20357 Hamburg Tel: +49 40-4321-1353 Fax: +49 40-4321-1114 E-Mail: stephan.bud...@jvm.de Internet: http://www.jvm.com Geschäftsführer

Re: [zfs-discuss] Finding corrupted files

2010-10-15 Thread Stephan Budach
On 12.10.10 14:21, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Stephan Budach c3t211378AC0253d0 ONLINE 0 0 0 How many disks are there inside of c3t211378AC0253d0? How

Re: [zfs-discuss] Finding corrupted files

2010-10-14 Thread Stephan Budach
I'd like to see those docs as well. As all HW raids are driven by software, of course - and software can be buggy. I don't want to heat up the discussion about ZFS-managed discs vs. HW raids, but if RAID5/6 were that bad, no one would use it anymore. So… just post the link and I will take a

Re: [zfs-discuss] Finding corrupted files

2010-10-12 Thread Stephan Budach
You are implying that the issues resulted from the H/W raid(s) and I don't think that this is appropriate. I configured a striped pool using two raids - this is exactly the same as using two single hard drives without mirroring them. I simply cannot see what zfs would be able to do in case of

Re: [zfs-discuss] Finding corrupted files

2010-10-12 Thread Stephan Budach
If the case is, as speculated, that one mirror has bad data and one has good, scrub or any IO has 50% chances of seeing the corruption. scrub does verify checksums. Yes, if the vdev were a mirrored one, which it wasn't. There weren't any mirrors set up. Plus, if the checksums would have been

Re: [zfs-discuss] Finding corrupted files

2010-10-11 Thread Stephan Budach
I think one has to accept that zfs send apparently is able to detect such errors while scrub is not. scrub operates only on the block level and makes sure that each block can be read and is in line with its checksum. However, zfs send seems to have detected some errors in the file system

Re: [zfs-discuss] Finding corrupted files

2010-10-08 Thread Stephan Budach
So, I decided to give tar a whirl, after zfs send encountered the next corrupted file, resulting in an I/O error, even though scrub ran successfully w/o any errors. I then issued a /usr/gnu/bin/tar -cf /dev/null /obelixData/…/.zfs/snapshot/actual snapshot/DTP which finished without any issue
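Generalised, the tar trick is just a cheap way to force a read of every file in a snapshot: anything unreadable surfaces as an I/O error, and a matching checksum error should then appear in zpool status. A sketch with placeholder dataset and snapshot names (the poster's exact path is elided above):

    /usr/gnu/bin/tar -cf /dev/null /obelixData/SOMEFS/.zfs/snapshot/SOMESNAP/DTP
    zpool status -v obelixData     # check whether new permanent errors were recorded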

Re: [zfs-discuss] Finding corrupted files

2010-10-08 Thread Stephan Budach
So - after 10 hrs and 21 mins. the incremental zfs send/recv finished without a problem. ;) Seems that using tar for checking all files is an appropriate action. Cheers, budy -- This message posted from opensolaris.org ___ zfs-discuss mailing list

Re: [zfs-discuss] Finding corrupted files

2010-10-07 Thread Stephan Budach
Hi Edward, well that was exactly my point, when I raised this question. If zfs send is able to identify corrupted files while it transfers a snapshot, why shouldn't scrub be able to do the same? zfs send quit with an I/O error and zpool status -v showed me the file that indeed had problems.

Re: [zfs-discuss] Finding corrupted files

2010-10-07 Thread Stephan Budach
Ian, I know - and I will address this, by upgrading the vdevs to mirrors, but there're a lot of other SPOFs around. So I started out by reducing the most common failures and I have found that to be the disc drives, not the chassis. The beauty is: one can work their way up until the point of

Re: [zfs-discuss] Can I upgrade a striped pool of vdevs to mirrored vdevs?

2010-10-07 Thread Stephan Budach
disk at a time, letting it resilver and then run a scrub to ensure that each new disk is functional. Thanks, Cindy On 10/04/10 08:24, Stephan Budach wrote:/dev/dsk/c2t5d0s2 Hi, once I created a zpool of single vdevs not using mirroring of any kind. Now I wonder if it's possible to add vdevs

Re: [zfs-discuss] Can I upgrade a striped pool of vdevs to mirrored vdevs?

2010-10-07 Thread Stephan Budach
09:05, Stephan Budach wrote: Hi Cindy, very well - thanks. I noticed that both the pool you're using and the zpool that is described in the docs already show a mirror-0 configuration, which isn't the case for my zpool: zpool status obelixData pool: obelixData state: ONLINE scrub: none

[zfs-discuss] Finding corrupted files

2010-10-06 Thread Stephan Budach
Hi, I recently discovered some - or at least one - corrupted file on one of my ZFS datasets, which caused an I/O error when trying to send a ZFS snapshot to another host: zpool status -v obelixData pool: obelixData state: ONLINE status: One or more devices has experienced an error resulting
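For reference, the scrub/status cycle this thread keeps coming back to, run against the pool named in the output above (a sketch, not a transcript of the poster's session):

    zpool scrub obelixData        # walk every allocated block and verify its checksum
    zpool status -v obelixData    # once finished, lists files with permanent errors, if any
    zpool clear obelixData        # reset the error counters after the affected files are replaced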

[zfs-discuss] scrub doesn't finally finish?

2010-10-06 Thread Stephan Budach
Hi all, I have issued a scrub on a pool that consists of two independent FC raids. The scrub has been running for approx. 25 hrs and then showed 100%, but there's still an incredible amount of traffic on one of the FC raids going on, plus zpool status -v reports that scrub is still running: zpool

Re: [zfs-discuss] Finding corrupted files

2010-10-06 Thread Stephan Budach
No - not a trick question., but maybe I didn't make myself clear. Is there a way to discover such bad files other than trying to actually read from them one by one, say using cp or by sending a snapshot elsewhere? I am well aware that the file shown in zpool status -v is damaged and I have

Re: [zfs-discuss] scrub doesn't finally finish?

2010-10-06 Thread Stephan Budach
Yes - that may well be. There was data going onto the device while scrub has been running. Especially large zfs receives had been going on. It'd be odd if that were the case, though. Cheers, budy -- This message posted from opensolaris.org ___

Re: [zfs-discuss] Finding corrupted files

2010-10-06 Thread Stephan Budach
Well, I think that answers my question then: after a successful scrub, zpool status -v should then list all damaged files on an entire zpool. I only asked because I read a thread in this forum where one guy had a problem with different files, even after a successful scrub. Thanks, budy --

Re: [zfs-discuss] scrub doesn't finally finish?

2010-10-06 Thread Stephan Budach
Seems like it's really the case that scrub doesn't take into account traffic that goes onto the zpool while it's scrubbing away. After some more time, the scrub finished and everything looks good so far. Thanks, budy -- This message posted from opensolaris.org ___

Re: [zfs-discuss] Finding corrupted files

2010-10-06 Thread Stephan Budach
Hi Cindy, thanks for bringing that to my attention. I checked fmdump and found a lot of these entries: Okt 06 2010 17:52:12.862812483 ereport.io.scsi.cmd.disk.tran nvlist version: 0 class = ereport.io.scsi.cmd.disk.tran ena = 0x514dc67d57e1 detector = (embedded

Re: [zfs-discuss] Finding corrupted files

2010-10-06 Thread Stephan Budach
Ian, yes, although these vdevs are FC raids themselves, so the risk is… uhm… calculated. Unfortunately, one of the devices seems to have some issues, as stated in my previous post. I will, nevertheless, add redundancy to my pool asap. Thanks, budy -- This message posted from opensolaris.org

Re: [zfs-discuss] Finding corrupted files

2010-10-06 Thread Stephan Budach
Hi Edward, these are interesting points. I have considered a couple of them, when I started playing around with ZFS. I am not sure whether I disagree with all of your points, but I conducted a couple of tests, where I configured my raids as JBODs and mapped each drive out as a separate LUN

[zfs-discuss] Can I upgrade a striped pool of vdevs to mirrored vdevs?

2010-10-04 Thread Stephan Budach
Hi, once I created a zpool of single vdevs not using mirroring of any kind. Now I wonder if it's possible to add vdevs and mirror the currently existing ones. Thanks, budy -- This message posted from opensolaris.org ___ zfs-discuss mailing list
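The answer given in the follow-ups amounts to attaching a second device to each existing single-disk vdev, one attach (and one resilver) per vdev. A hedged sketch with placeholder pool and device names:

    zpool attach tank c2t5d0 c2t6d0   # c2t6d0 becomes the mirror of the existing c2t5d0 vdev
    zpool status tank                 # wait for the resilver before moving on to the next vdev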

Re: [zfs-discuss] Can I upgrade a striped pool of vdevs to mirrored vdevs?

2010-10-04 Thread Stephan Budach
Hi Darren, gee, thanks. Of course there would be a resilver due for each vdev, but that shouldn't harm, although the vdevs are quite big. Thanks, budy -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

[zfs-discuss] zfs property aclmode gone in 147?

2010-09-24 Thread Stephan Budach
Hi, I recently installed oi147 and I noticed that the property aclmode is no longer present and has been nuked from my volumes when I imported a pool that had been previously hosted on an OSol 134 system. Anybody know if that's a bug or if aclmode has been removed on purpose? Seems that my Macs

Re: [zfs-discuss] zfs property aclmode gone in 147?

2010-09-24 Thread Stephan Budach
Hi Cindy, thanks for clarifying that. Basically, the problem seems to lie within the Netatalk afpd, which is what I use for our Mac clients. For some reason, putting a new file or folder on a Netatalk-ZFS share doesn't pull the ACEs that this new object should inherit from its parent. I have

[zfs-discuss] ZFS snapshot size vs. ZFS send/recv

2010-09-21 Thread Stephan Budach
Hi all, I wanted to get some clarification about the following issue I am experiencing when performing a zfs send/recv: According to zfs list -r, the filesystem in question has the following sizes for its snapshots: obelixData/JvMpreprint 7,11T 10,2T 6,89T /obelixData/JvMpreprint

Re: [zfs-discuss] [osol-help] zfs destroy stalls, need to hard reboot

2009-12-30 Thread Stephan Budach
Richard, well… I am willing to experiment with dedup but I am quite unsure how to share my results effectively. That is, what would be the interesting data that would help improve ZFS/dedup, and how should that data be presented? I reckon that just from sharing general issues, some hard

Re: [zfs-discuss] [osol-help] zfs destroy stalls, need to hard reboot

2009-12-29 Thread Stephan Budach
Hi Brent, what you have noticed makes sense and that behaviour has been present since v127, when dedupe was introduced in OpenSolaris. This also fits into my observations. I thought I had totally messed up one of my OpenSolaris boxes which I used to take my first steps with ZFS/dedupe and
