Re: [zfs-discuss] zfs send not working when i/o errors in pool

2011-07-01 Thread Tuomas Leikola
Rsync with some ignore-errors option, maybe? In any case you've lost some data, so make sure to record the output of zpool status -v. On Jul 1, 2011 12:26 AM, Tom Demo tom.d...@lizard.co.nz wrote: Hi there. I am trying to get my filesystems off a pool that suffered irreparable damage due to 2 disks
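A hedged sketch of that approach (pool and paths are hypothetical): record the damage first, then copy whatever is still readable; rsync logs unreadable files and keeps going, exiting with code 23 for a partial transfer.

    zpool status -v tank > /var/tmp/tank-errors.txt   # list of files with permanent errors
    rsync -av /tank/ /backup/tank-salvage/            # copies what it can, skips what it cannot read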

Re: [zfs-discuss] about write balancing

2011-07-01 Thread Tuomas Leikola
Sorry everyone, this one was indeed a case of root stupidity. I had forgotten to upgrade to OI 148, which apparently fixed the write balancer. Duh. (I didn't find a full changelog via Google, though.) On Jun 30, 2011 3:12 PM, Tuomas Leikola tuomas.leik...@gmail.com wrote: Thanks for the input

Re: [zfs-discuss] about write balancing

2011-06-30 Thread Tuomas Leikola
Thanks for the input. This was not a case of a degraded vdev, but only a missing log device (which I cannot get rid of..). I'll try offlining some vdevs and see what happens - although this should be automatic at all times IMO. On Jun 30, 2011 1:25 PM, Markus Kovero markus.kov...@nebula.fi wrote:

[zfs-discuss] about write balancing

2011-06-29 Thread Tuomas Leikola
Hi! I've been monitoring my arrays lately, and to me it seems like the zfs allocator might be misfiring a bit. This is all on OI 147, and if there is a problem and a fix, I'd like to see it in the next image-update =D Here's some 60s iostat, cleaned up a bit: tank  3.76T  742G
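A minimal way to watch how writes are being spread across top-level vdevs over time (pool name hypothetical):

    zpool iostat -v tank 60    # 60-second samples, broken down per vdev and per disk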

Re: [zfs-discuss] SATA disk perf question

2011-06-01 Thread Tuomas Leikola
I have a resilver running and am seeing about 700-800 writes/sec. on the hot spare as it resilvers. IIRC resilver works in block birth order (write order) which is commonly more-or-less sequential unless the fs is fragmented. So it might or might not be. I think you cannot get that kind of
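A hedged sketch of how to watch what the spare is actually doing during the resilver (interval and device names will differ):

    iostat -xn 10       # per-device reads/s, writes/s and service times on Solaris/OI
    zpool status tank   # resilver progress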

[zfs-discuss] zfs over iscsi not recovering from timeouts

2011-04-17 Thread Tuomas Leikola
Hi, I'm crossposting this to zfs as I'm not sure which bit is to blame here. I've been having this issue that I cannot really fix myself: I have an OI 148 server, which hosts a lot of disks on SATA controllers. Now it's full and needs some data moving work to be done, so I've acquired another
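For reference, a hedged sketch of the usual manual recovery once the iSCSI target is reachable again (pool and device names hypothetical):

    zpool status -x            # which pool/device is UNAVAIL or FAULTED
    zpool online tank c5t0d0   # bring the returned device back in
    zpool clear tank           # clear the error state so the resilver can finish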

Re: [zfs-discuss] test

2011-04-17 Thread Tuomas Leikola
It's been quiet, it seems. On Fri, Apr 15, 2011 at 5:09 PM, Jerry Kemp sun.mail.lis...@oryx.cc wrote: I have not seen any email from this list in a couple of days.

Re: [zfs-discuss] What drives?

2011-02-25 Thread Tuomas Leikola
I'd pick Samsung and use the savings for additional redundancy. YMMV. On Feb 25, 2011 8:46 AM, Markus Kovero markus.kov...@nebula.fi wrote: So, does anyone know which drives to choose for the next setup? Hitachis look good so far, perhaps also Seagates, but right now, I'm dubious about the

Re: [zfs-discuss] raidz recovery

2010-12-18 Thread Tuomas Leikola
On Wed, Dec 15, 2010 at 3:29 PM, Gareth de Vaux z...@lordcow.org wrote: On Mon 2010-12-13 (16:41), Marion Hakanson wrote: After you clear the errors, do another scrub before trying anything else. Once you get a complete scrub with no new errors (and no checksum errors), you should be
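The sequence being recommended, as a hedged sketch (pool name hypothetical):

    zpool clear tank        # reset the error counters
    zpool scrub tank        # full verification pass over all data
    zpool status -v tank    # confirm the scrub completed with no new or checksum errors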

Re: [zfs-discuss] Newbie question : snapshots, replication and recovering failure of Site B

2010-10-27 Thread Tuomas Leikola
On Tue, Oct 26, 2010 at 5:21 PM, Matthieu Fecteau matthieufect...@gmail.com wrote: My question: in the event that there's no more common snapshot between Site A and Site B, how can we replicate again? (example: Site B has a power failure and then Site A cleans up its snapshots before Site B
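When no common snapshot survives on both sides, incremental replication cannot resume; one full stream has to be sent again, after which incrementals work as before. A hedged sketch with hypothetical pool, dataset and host names:

    zfs snapshot -r tankA/data@resync
    zfs send -R tankA/data@resync | ssh siteB zfs receive -F tankB/data
    # later incrementals against the new common snapshot:
    zfs send -R -I @resync tankA/data@daily1 | ssh siteB zfs receive tankB/data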

Re: [zfs-discuss] Clearing space nearly full zpool

2010-10-26 Thread Tuomas Leikola
On Mon, Oct 25, 2010 at 4:57 PM, Cuyler Dingwell cuy...@gmail.com wrote: It's not just this directory in the example - it's any directory or file. The system was running fine up until it hit 96%. Also, a full scrub of the file system was done (took nearly two days). -- I'm just stabbing
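One common place for missing space to hide is snapshots; a hedged sketch for checking that (names hypothetical):

    zfs list -o space -r tank      # how much space each dataset's snapshots are holding
    zfs list -t snapshot -r tank
    zfs destroy tank/fs@oldsnap    # hypothetical snapshot, destroyed to release space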

Re: [zfs-discuss] Balancing LVOL fill?

2010-10-21 Thread Tuomas Leikola
On Thu, Oct 21, 2010 at 12:06 AM, Peter Jeremy peter.jer...@alcatel-lucent.com wrote: On 2010-Oct-21 01:28:46 +0800, David Dyer-Bennet d...@dd-b.net wrote: On Wed, October 20, 2010 04:24, Tuomas Leikola wrote: I wished for a more aggressive write balancer but that may be too much to ask

Re: [zfs-discuss] Balancing LVOL fill?

2010-10-20 Thread Tuomas Leikola
On Tue, Oct 19, 2010 at 7:13 PM, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote: I have this server with some 50TB disk space. It originally had 30TB on WD Greens, was filled quite full, and another storage chassis was added. Now, space problem gone, fine, but what about speed? Three of the

Re: [zfs-discuss] vdev failure - pool loss ?

2010-10-20 Thread Tuomas Leikola
On Wed, Oct 20, 2010 at 3:50 AM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Tue, 19 Oct 2010, Cindy Swearingen wrote: unless you use copies=2 or 3, in which case your data is still safe for those datasets that have this option set. This advice is a little too optimistic.

Re: [zfs-discuss] Optimal raidz3 configuration

2010-10-20 Thread Tuomas Leikola
On Wed, Oct 20, 2010 at 4:05 PM, Edward Ned Harvey sh...@nedharvey.com wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Edward Ned Harvey 4. Guess what happens if you have 2 or 3 failed disks in your raidz3, and they're trying to
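For reference, a hedged sketch of creating a raidz3 vdev (disk names hypothetical); any three of the disks can fail without data loss, but rebuilding them means a long resilver:

    zpool create tank raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0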

Re: [zfs-discuss] Balancing LVOL fill?

2010-10-20 Thread Tuomas Leikola
On Wed, Oct 20, 2010 at 5:00 PM, Richard Elling richard.ell...@gmail.com wrote: Now, is there a way, manually or automatically, to somehow balance the data across these LVOLs? My first guess is that doing this _automatically_ will require block pointer rewrite, but then, is there way to hack
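Short of block-pointer rewrite, the only manual approach is to rewrite the data so new allocations spread across all vdevs, for example with send/receive inside the pool. A hedged sketch with hypothetical dataset names:

    zfs snapshot tank/data@move
    zfs send tank/data@move | zfs receive tank/data.new
    # verify the copy, then destroy tank/data and zfs rename tank/data.new tank/data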

Re: [zfs-discuss] vdev failure - pool loss ?

2010-10-19 Thread Tuomas Leikola
On Mon, Oct 18, 2010 at 8:18 PM, Simon Breden sbre...@gmail.com wrote: So are we all agreed then, that a vdev failure will cause pool loss? -- unless you use copies=2 or 3, in which case your data is still safe for those datasets that have this option set. -- - Tuomas
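The setting being referred to, as a hedged example (dataset name hypothetical); note that extra copies only apply to data written after the property is set:

    zfs set copies=2 tank/important
    zfs get copies tank/important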

Re: [zfs-discuss] Finding corrupted files

2010-10-19 Thread Tuomas Leikola
On Mon, Oct 18, 2010 at 4:55 PM, Edward Ned Harvey sh...@nedharvey.com wrote: Thank you, but, the original question was whether a scrub would identify just corrupt blocks, or if it would be able to map corrupt blocks to a list of corrupt files. Just in case this wasn't already clear. After
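After a scrub, the mapping from corrupt blocks to file names is reported directly (pool name hypothetical):

    zpool scrub tank
    zpool status -v tank    # "Permanent errors have been detected in the following files:" lists the paths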

Re: [zfs-discuss] Finding corrupted files

2010-10-12 Thread Tuomas Leikola
On Tue, Oct 12, 2010 at 9:39 AM, Stephan Budach stephan.bud...@jvm.de wrote: You are implying that the issues resulted from the H/W raid(s) and I don't think that this is appropriate. Not exactly. Because the raid is managed in hardware, and not by zfs, zfs cannot fix these

[zfs-discuss] free space fragmentation causing slow write speeds

2010-10-11 Thread Tuomas Leikola
Hello everybody. I am experiencing terribly slow writes on my home server. This is from zpool iostat:
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
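A hedged sketch of the monitoring behind numbers like these (pool name hypothetical):

    zpool iostat -v tank 60    # per-vdev throughput in 60-second samples
    zpool list tank            # the CAP column; very full pools fragment their free space badly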

Re: [zfs-discuss] Resilver endlessly restarting at completion

2010-10-05 Thread Tuomas Leikola
dropping from the SATA bus randomly. Maybe I'll put together a report and post to storage-discuss. On Wed, Sep 29, 2010 at 8:13 PM, Tuomas Leikola tuomas.leik...@gmail.com wrote: The endless resilver problem still persists on OI b147. Restarts when it should complete. I see no other solution than
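Two standard Solaris/OI places to look for devices dropping off the bus, as a hedged sketch:

    iostat -En    # per-device soft/hard/transport error counters
    fmdump -eV    # FMA error telemetry from drivers and transports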

Re: [zfs-discuss] Unusual Resilver Result

2010-09-30 Thread Tuomas Leikola
On Thu, Sep 30, 2010 at 9:08 AM, Jason J. W. Williams jasonjwwilli...@gmail.com wrote: Should I be worried about these checksum errors? Maybe. Your disks, cabling or disk controller is probably having some issue which caused them. Or maybe sunspots are to blame. Run a scrub often and

Re: [zfs-discuss] Resilver making the system unresponsive

2010-09-30 Thread Tuomas Leikola
On Thu, Sep 30, 2010 at 1:16 AM, Scott Meilicke scott.meili...@craneaerospace.com wrote: Resilver speed has been beaten to death, I know, but is there a way to avoid this? For example, is more enterprisy hardware less susceptible to resilvers? This box is used for development VMs, but there is

Re: [zfs-discuss] Resilver endlessly restarting at completion

2010-09-29 Thread Tuomas Leikola
, Tuomas Leikola tuomas.leik...@gmail.com wrote: Hi! My home server had some disk outages due to flaky cabling and whatnot, and started resilvering to a spare disk. During this another disk or two dropped, and were reinserted into the array. So no devices were actually lost, they just were

Re: [zfs-discuss] Resilver endlessly restarting at completion

2010-09-29 Thread Tuomas Leikola
Thanks for taking an interest. Answers below. On Wed, Sep 29, 2010 at 9:01 PM, George Wilson george.r.wil...@oracle.com wrote: On Mon, Sep 27, 2010 at 1:13 PM, Tuomas Leikola tuomas.leik...@gmail.com wrote: (continuous resilver loop) has been going

[zfs-discuss] Resilver endlessly restarting at completion

2010-09-27 Thread Tuomas Leikola
Hi! My home server had some disk outages due to flaky cabling and whatnot, and started resilvering to a spare disk. During this another disk or two dropped, and were reinserted into the array. So no devices were actually lost, they just were intermittently away for a while each. The situation is

[zfs-discuss] zfs log on another zfs pool

2010-05-01 Thread Tuomas Leikola
Hi. I have a simple question. Is it safe to place a log device on another zfs disk? I'm planning on placing the log on my mirrored root partition. Using latest opensolaris.
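A hedged sketch of what adding a separate log device looks like (slice name hypothetical). Note that backing the log with a zvol carved out of another pool is generally considered risky because of import-ordering and circular dependencies; a plain slice is the safer option:

    zpool add tank log c0t0d0s4
    zpool status tank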

Re: [zfs-discuss] ZFS over multiple iSCSI targets

2008-09-08 Thread Tuomas Leikola
On Mon, Sep 8, 2008 at 8:35 PM, Miles Nordin [EMAIL PROTECTED] wrote: ps iSCSI with respect to write barriers? +1. Does anyone even know of a good way to actually test it? So far it seems the only way to know if your OS is breaking write barriers is to trade gossip and guess. Write a

Re: [zfs-discuss] ZFS RAIDZ vs. RAID5.

2007-09-30 Thread Tuomas Leikola
On 9/20/07, Roch - PAE [EMAIL PROTECTED] wrote: Next application modifies D0 -> D0' and also writes other data D3, D4. Now you have:
Disk0  Disk1  Disk2  Disk3
D0     D1     D2     P0,1,2
D0'    D3     D4     P0',3,4
But if D1 and D2 stay immutable for

Re: [zfs-discuss] ZFS RAIDZ vs. RAID5.

2007-09-15 Thread Tuomas Leikola
On 9/10/07, Pawel Jakub Dawidek [EMAIL PROTECTED] wrote: The problem with RAID5 is that different blocks share the same parity, which is not the case for RAIDZ. When you write a block in RAIDZ, you write the data and the parity, and then you switch the pointer in uberblock. For RAID5, you
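The contrast being drawn, spelled out with standard RAID5 arithmetic (not a quote from the post): a partial-stripe RAID5 update has to read the old data and old parity and rewrite the parity block in place,

    P_new = P_old XOR D_old XOR D_new

and the data and parity writes are not atomic (the write hole), whereas RAIDZ writes the new data plus fresh parity as a full-width stripe elsewhere and only then switches the uberblock pointer.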

Re: [zfs-discuss] zfs on entire disk?

2007-08-11 Thread Tuomas Leikola
On 8/11/07, Russ Petruzzelli [EMAIL PROTECTED] wrote: Is it possible/recommended to create a zpool and zfs setup such that the OS itself (in root /) is in its own zpool? Yes. You're looking for zfs root and it's easiest if your installer does that for you. At least latest nexenta unstable

Re: [zfs-discuss] Force ditto block on different vdev?

2007-08-10 Thread Tuomas Leikola
We call that a mirror :-) Mirror and raidz suffer from the classic blockdevice abstraction problem in that they need disks of equal size. Not that I'm aware of. Mirror and raid-z will simply use the smallest size of your available disks. Exactly. The rest is not usable.

Re: [zfs-discuss] Force ditto block on different vdev?

2007-08-10 Thread Tuomas Leikola
On 8/10/07, Darren Dunham [EMAIL PROTECTED] wrote: For instance, it might be nice to create a mirror with a 100G disk and two 50G disks. Right now someone has to create slices on the big disk manually and feed them to zpool. Letting ZFS handle everything itself might be a win for some cases.
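The manual workaround being described, as a hedged sketch (device names hypothetical): slice the 100G disk in two and mirror each half against one of the 50G disks:

    zpool create tank mirror c1t0d0s0 c2t0d0 mirror c1t0d0s1 c3t0d0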

Re: [zfs-discuss] Force ditto block on different vdev?

2007-08-10 Thread Tuomas Leikola
On 8/10/07, Darren J Moffat [EMAIL PROTECTED] wrote: Tuomas Leikola wrote: We call that a mirror :-) Mirror and raidz suffer from the classic blockdevice abstraction problem in that they need disks of equal size. Not that I'm aware of. Mirror and raid-z will simply use the smallest

Re: [zfs-discuss] Force ditto block on different vdev?

2007-08-10 Thread Tuomas Leikola
On 8/10/07, Moore, Joe [EMAIL PROTECTED] wrote: Wishlist: It would be nice to put the whole redundancy definitions into the zfs filesystem layer (rather than the pool layer): Imagine being able to set copies=5+2 for a filesystem... (requires a 7-VDEV pool, and stripes via RAIDz2, otherwise

Re: [zfs-discuss] Force ditto block on different vdev?

2007-08-10 Thread Tuomas Leikola
On 8/9/07, Richard Elling [EMAIL PROTECTED] wrote: What I'm looking for is a disk full error if ditto cannot be written to different disks. This would guarantee that a mirror is written on a separate disk - and the entire filesystem can be salvaged from a full disk failure. We call that

Re: [zfs-discuss] Force ditto block on different vdev?

2007-08-10 Thread Tuomas Leikola
On 8/9/07, Mario Goebbels [EMAIL PROTECTED] wrote: If you're that bent on having maximum redundancy, I think you should consider implementing real redundancy. I'm also biting the bullet and going mirrors (cheaper than RAID-Z for home, less disks needed to start with). Currently I am, and as

[zfs-discuss] Force ditto block on different vdev?

2007-08-09 Thread Tuomas Leikola
Hi! I'm having a hard time finding out if it's possible to force ditto blocks onto different devices. This mode has many benefits, not the least being that it practically creates a fully dynamic mode of mirroring (replacing raid1 and raid10 variants), especially when combined with the upcoming vdev

Re: [zfs-discuss] Force ditto block on different vdev?

2007-08-09 Thread Tuomas Leikola
Actually, ZFS is already supposed to try to write the ditto copies of a block on different vdevs if multiple are available. *TRY* being the keyword here. What I'm looking for is a disk full error if ditto cannot be written to different disks. This would guarantee that a mirror is written on