Re: [zfs-discuss] Diagnosing Permanent Errors

2010-04-04 Thread Willard Korfhage
Looks like it was RAM. I ran Memtest86+ 4.00 and it found no problems, but when I removed 2 of the 3 sticks of RAM and ran a backup, I had no errors. I'm running more extensive tests, but it looks like that was it. A new motherboard, CPU and ECC RAM are on the way to me now. -- This message posted from op
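
A minimal sketch of how one might re-verify the pool after swapping the RAM, assuming a hypothetical pool named "tank" (substitute your own pool name); these are standard zpool commands:

    # Scrub so every block is re-read and its checksum re-verified
    zpool scrub tank
    # Watch progress, then list any files with permanent errors
    zpool status -v tank
    # Once the cause is fixed, clear the stale error counters
    zpool clear tank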

[zfs-discuss] ZFS getting slower over time

2010-04-04 Thread Marcus Wilhelmsson
I have a problem with my ZFS system: it's getting slower and slower over time. When the OpenSolaris machine has just been rebooted I get about 30-35MB/s in read and write, but after 4-8 hours I'm down to maybe 10MB/s, varying between 4-18MB/s. Now, if I reboot the machine it's all gone
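
One way to see whether something like ARC behavior changes over those hours is to sample the kstats before and after the slowdown; a sketch, assuming the standard zfs:0:arcstats kstat names on OpenSolaris:

    # Sample ARC size and hit/miss counters
    kstat -p zfs:0:arcstats:size zfs:0:arcstats:hits zfs:0:arcstats:misses
    # Watch per-vdev throughput in 5-second intervals while the box is slow
    zpool iostat -v 5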

Re: [zfs-discuss] To slice, or not to slice

2010-04-04 Thread Richard Elling
On Apr 4, 2010, at 8:11 PM, Edward Ned Harvey wrote:
>>> There is some question about performance. Is there any additional
>>> overhead caused by using a slice instead of the whole physical device?
>>
>> No.
>>
>> If the disk is only used for ZFS, then it is ok to enable volatile disk
>> write ca
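
For context: ZFS turns the disk write cache on automatically when given a whole disk, but on a slice you have to enable it yourself. A sketch of the manual route via format(1M), assuming a hypothetical disk c4t0d0; the cache menu is interactive, so the steps are shown as comments:

    # format -e c4t0d0, then at the prompts:
    #   format> cache
    #   cache> write_cache
    #   write_cache> display    (show the current setting)
    #   write_cache> enable     (turn the volatile write cache on)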

[zfs-discuss] writeback vs writethrough [was: Sun Flash Accelerator F20 numbers]

2010-04-04 Thread Richard Elling
On Apr 2, 2010, at 5:03 AM, Edward Ned Harvey wrote:
>>> Seriously, all disks configured WriteThrough (spindle and SSD disks
>>> alike) using the dedicated ZIL SSD device, very noticeably faster than
>>> enabling the WriteBack.
>>
>> What do you get with both SSD ZIL and WriteBack disks e

Re: [zfs-discuss] To slice, or not to slice

2010-04-04 Thread Edward Ned Harvey
I haven't taken that approach, but I guess I'll give it a try.

From: Tim Cook [mailto:t...@cook.ms]
Sent: Sunday, April 04, 2010 11:00 PM
To: Edward Ned Harvey
Cc: Richard Elling; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] To slice, or not to slice

On Sun, Apr 4, 2010

Re: [zfs-discuss] To slice, or not to slice

2010-04-04 Thread Edward Ned Harvey
>> There is some question about performance. Is there any additional
>> overhead caused by using a slice instead of the whole physical device?
>
> No.
>
> If the disk is only used for ZFS, then it is ok to enable volatile disk
> write caching if the disk also supports write cache flush request

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-04 Thread Edward Ned Harvey
> Actually, it's my experience that Sun (and other vendors) do exactly
> that for you when you buy their parts - at least for rotating drives; I
> have no experience with SSDs.
>
> The Sun disk label shipped on all the drives is set up to make the drive
> the standard size for that Sun part number
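
If you want to check whether two "identical" drives really present the same usable size before mirroring them, the label tells you; a sketch with hypothetical device names:

    # Print label geometry and slice sector counts for each drive, then compare
    prtvtoc /dev/rdsk/c4t0d0s2
    prtvtoc /dev/rdsk/c4t1d0s2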

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-04 Thread Edward Ned Harvey
> Hmm, when you did the write-back test was the ZIL SSD included in the
> write-back?
>
> What I was proposing was write-back only on the disks, and ZIL SSD
> with no write-back.

The tests I did were:
  All disks write-through
  All disks write-back
  With/without SSD for ZIL
All the permutations of th
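
For the with/without-slog permutations, the log device itself is added and removed with zpool; a sketch, assuming a hypothetical pool "tank" and SSD c5t0d0 (log device removal needs a pool version from around build 125, if memory serves):

    # Add the SSD as a dedicated ZIL (slog) device
    zpool add tank log c5t0d0
    # ... run the benchmark ...
    # Remove it again for the no-slog run
    zpool remove tank c5t0d0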

Re: [zfs-discuss] To slice, or not to slice

2010-04-04 Thread Tim Cook
On Sun, Apr 4, 2010 at 9:46 PM, Edward Ned Harvey wrote:
>> CR 6844090, zfs should be able to mirror to a smaller disk
>> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6844090
>> b117, June 2009
>
> Awesome. Now if someone would only port that to Solaris, I'd be a happy
> man. ;

Re: [zfs-discuss] To slice, or not to slice

2010-04-04 Thread Edward Ned Harvey
> CR 6844090, zfs should be able to mirror to a smaller disk
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6844090
> b117, June 2009

Awesome. Now if someone would only port that to Solaris, I'd be a happy man. ;-)
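
The operation that CR covers is just a plain attach; a sketch with hypothetical devices, where c4t1d0 is slightly smaller than c4t0d0 (on builds before b117 this fails with a "device is too small" error):

    # Attach a second, slightly smaller disk to form a mirror
    zpool attach tank c4t0d0 c4t1d0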

Re: [zfs-discuss] To slice, or not to slice

2010-04-04 Thread Edward Ned Harvey
> Your experience is exactly why I suggested ZFS start doing some "right
> sizing" if you will. Chop off a bit from the end of any disk so that
> we're guaranteed to be able to replace drives from different
> manufacturers. The excuse being "no reason to, Sun drives are always
> of identical size
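
The do-it-yourself version of that "right sizing" is to build the pool on slices sized a little under the raw capacity; a sketch with hypothetical devices and a hypothetical pool name (note that on slices ZFS will not enable the disk write cache for you, per the earlier subthread):

    # In format's partition menu, size slice 0 a few hundred MB short of
    # full capacity on each disk, then build the pool on the slices
    zpool create tank raidz2 c4t0d0s0 c4t1d0s0 c4t2d0s0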

Re: [zfs-discuss] Problems with zfs and a "STK RAID INT" SAS HBA

2010-04-04 Thread Edward Ned Harvey
> When running the card in copyback write cache mode, I got horrible
> performance (with zfs), much worse than with copyback disabled
> (which I believe should mean it does write-through), when tested
> with filebench.

When I benchmark my disks, I also find that the system is slower with WriteBack
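
A quick way to reproduce that comparison is to run the same filebench workload once per controller cache mode; a sketch using the interactive shell and one of the stock personalities (varmail here, chosen arbitrarily):

    # Run identical workloads with copyback enabled, then disabled, and compare
    filebench
    filebench> load varmail
    filebench> run 60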

Re: [zfs-discuss] mpxio load-balancing...it doesn't work??

2010-04-04 Thread Tim Cook
On Sun, Apr 4, 2010 at 8:55 PM, Brad wrote:
> I had always thought that with mpxio, it load-balances IO requests across
> your storage ports, but this article
> http://christianbilien.wordpress.com/2007/03/23/storage-array-bottlenecks/
> has got me thinking it's not true.
>
> "The available bandwidt

[zfs-discuss] mpxio load-balancing...it doesn't work??

2010-04-04 Thread Brad
I had always thought that with mpxio, it load-balances IO requests across your storage ports, but this article http://christianbilien.wordpress.com/2007/03/23/storage-array-bottlenecks/ has got me thinking it's not true. "The available bandwidth is 2 or 4Gb/s (200 or 400MB/s – FC frames are 10 byt
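
Whether mpxio is actually spreading I/O across both ports can be checked from the host; a sketch, with a placeholder WWN where your LUN's name would go:

    # List multipathed LUNs, then show the state and count of each path
    mpathadm list lu
    mpathadm show lu /dev/rdsk/c0t<WWN>d0s2
    # Watch per-device throughput in MB/s while a load is running
    iostat -xnM 5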

[zfs-discuss] It's alive, and thank you all for the help.

2010-04-04 Thread R.G. Keen
I finally achieved critical mass on enough parts to put my zfs server together. It basically ran the first time; any non-function was down to my own misunderstandings. I wanted to issue a thank you to those of you who suffered through my questions and pointed me in the right direction. Many pieces of

[zfs-discuss] Which zfs options are replicated

2010-04-04 Thread Lutz Schumann
Hello list, I started playing around with Comstar in snv_134. In the snv_116 version of ZFS, a new hidden property for the Comstar metadata was introduced (stmf_sbd_lu). This makes it possible to migrate from the legacy iSCSI target daemon to Comstar without data loss, which is great. Before th
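
For reference, registering an existing zvol with COMSTAR generally goes through sbdadm; a sketch with a hypothetical zvol path (the stmf_sbd_lu property itself is hidden and managed by the framework, not set by hand):

    # Register an existing zvol with the COMSTAR SBD layer
    sbdadm create-lu /dev/zvol/rdsk/tank/vol1
    # List registered LUs and note the GUID
    sbdadm list-lu
    # Expose the LU to initiators (GUID taken from list-lu output)
    stmfadm add-view <GUID>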

Re: [zfs-discuss] Diagnosing Permanent Errors

2010-04-04 Thread Willard Korfhage
Yeah, this morning I concluded I really should be running ECC RAM. I sometimes wonder why people don't run ECC RAM more often. I remember a decade ago, when RAM was much, much less dense, people fretted about alpha particles randomly flipping bits, but that seems to have died down.

Re: [zfs-discuss] Diagnosing Permanent Errors

2010-04-04 Thread Frank Middleton
On 04/ 4/10 10:00 AM, Willard Korfhage wrote:
> What should I make of this? All the disks are bad? That seems unlikely.
> I found another thread http://opensolaris.org/jive/thread.jspa?messageID=399988
> where it finally came down to bad memory, so I'll test that. Any other
> suggestions?

It could b

[zfs-discuss] vPool unavailable but RaidZ1 is online

2010-04-04 Thread Kevin
I am trying to recover a raid set; only three drives are part of the set. I attached a disk and discovered it was bad; it was never part of the raid set. The disk is now gone, and when I try to import the pool I get the error listed below. Is there a chance to recover? TIA! S
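
Depending on the error, there are a couple of non-destructive import escapes worth trying first; a sketch with a hypothetical pool name "tank" (the -F recovery rewind arrived around build 128, I believe):

    # See how the pool and its devices look to the importer
    zpool import
    # Try a forced import by name or numeric id
    zpool import -f tank
    # Last resort: rewind to an earlier, intact transaction group
    zpool import -F tank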

[zfs-discuss] Diagnosing Permanent Errors

2010-04-04 Thread Willard Korfhage
I would like to get some help diagnosing permanent errors on my files. The machine in question has 12 1TB disks connected to an Areca raid card. I installed OpenSolaris build 134 and according to zpool history, created a pool with zpool create bigraid raidz2 c4t0d0 c4t0d1 c4t0d2 c4t0d3 c4t0d4 c
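
When zpool status reports permanent errors, the -v flag maps the damaged blocks back to file names, which helps separate systemic corruption (RAM, controller) from a one-off; a sketch using the poster's pool name:

    # List each file affected by unrecoverable checksum errors
    zpool status -v bigraid
    # Force a full re-read of every block to cross-check the card and disks
    zpool scrub bigraid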

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-04 Thread Ragnar Sundblad
On 4 apr 2010, at 06.01, Richard Elling wrote:

Thank you for your reply! Just wanted to make sure.

> Do not assume that power outages are the only cause of unclean shutdowns.
> -- richard

Thanks, I have seen that mistake several times with other (file)systems, and hope I'll never ever make it m