Re: [zfs-discuss] zvol being charged for double space

2010-01-27 Thread Daniel Carosone
On Wed, Jan 27, 2010 at 09:57:08PM -0800, Bill Sommerfeld wrote: Hi Bill! :-) > On 01/27/10 21:17, Daniel Carosone wrote: >> This is as expected. Not expected is that: >> >> usedbyrefreservation = refreservation >> >> I would expect this to be 0, since all the reserved space has been >> alloca

Re: [zfs-discuss] backing this up

2010-01-27 Thread Gregory Durham
Yep Dan, Thank you very much for the idea, and for helping me with my implementation issues. haha. I can see that raidz2 is not needed in this case. My question now concerns full system recovery. Say all hell breaks loose and all is lost except the tapes. If I use what you said and just add snapshots to

Re: [zfs-discuss] Building big cheap storage system. What hardware to use?

2010-01-27 Thread Jason Fortezzo
On Wed, Jan 27, 2010 at 08:25:48PM -0800, borov wrote: > SAS disks more expensive. Besides, there is no 2Tb SAS 7200 drives on market > yet. Seagate released a 2 TB SAS drive last year. http://www.seagate.com/ww/v/index.jsp?locale=en-US&vgnextoid=c7712f655373f110VgnVCM10f5ee0a0aRCRD -- Jaso

Re: [zfs-discuss] zvol being charged for double space

2010-01-27 Thread Bill Sommerfeld
On 01/27/10 21:17, Daniel Carosone wrote: This is as expected. Not expected is that: usedbyrefreservation = refreservation I would expect this to be 0, since all the reserved space has been allocated. This would be the case if the volume had no snapshots. As a result, used is over twice

Re: [zfs-discuss] Building big cheap storage system. What hardware to use?

2010-01-27 Thread Freddie Cash
We use the following for our storage servers: Chenbro 5U chassis (24 hot-swap drive bays) 1350 watt 4-way redundant PSU Tyan h200M motherboard (S3992) 2x dual-core AMD Opteron 2200-series CPUs 8 GB ECC DDR2-SDRAM 4-port Intel PRO/1000MT NIC (PCIe) 3Ware 9550SXU PCI-X RAID controller (12-port, mult

[zfs-discuss] ZPOOL somehow got same physical drive assigned twice

2010-01-27 Thread TheJay
Guys, Need your help. My DEV131 OSOL build with my 21TB disk system somehow got really screwed: This is what my zpool status looks like: NAME STATE READ WRITE CKSUM rzpool2 DEGRADED 0 0 0 raidz2-0 DEGRADED 0 0 0

[zfs-discuss] zvol being charged for double space

2010-01-27 Thread Daniel Carosone
In a thread elsewhere, trying to analyse why the zfs auto-snapshot cleanup code was cleaning up more aggressively than expected, I discovered some interesting properties of a zvol. http://mail.opensolaris.org/pipermail/zfs-auto-snapshot/2010-January/000232.html The zvol is not thin-provisioned.
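The accounting Daniel describes can be checked directly from the property list; a hypothetical transcript (pool and volume names are invented here, not from the post):

```
# Hypothetical pool/volume names, for illustration only.
zfs get used,referenced,refreservation,usedbyrefreservation,usedbysnapshots tank/vol
# When snapshots pin old blocks, usedbyrefreservation can remain at or
# near refreservation, so "used" grows toward referenced + refreservation.
```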

Re: [zfs-discuss] Building big cheap storage system. What hardware to use?

2010-01-27 Thread borov
> I have Supermicro 936E1 (X28 expander chip) and LSI > 1068 HBA. I never got timeout issue but I'm using > Seagate 15K.7 SAS. SATA might be different as it > handles error and io timeout differently. If you > still want volume, you make take a look at 7200 RPM > SAS version. SAS disks more expensi

Re: [zfs-discuss] zfs streams

2010-01-27 Thread Lori Alt
On 01/25/10 16:08, Daniel Carosone wrote: On Mon, Jan 25, 2010 at 05:42:59PM -0500, Miles Nordin wrote: et> You cannot import a stream into a zpool of earlier revision, et> though the reverse is possible. This is very bad, because it means if your backup server is pool version 22, t

Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboa

2010-01-27 Thread Simon Breden
> On 1/25/2010 6:23 PM, Simon Breden wrote: > > By mixing randomly purchased drives of unknown > quality, people are > > taking unnecessary chances. But often, they refuse > to see that, > > thinking that all drives are the same and they will > all fail one day > > anyway... My use of the word

Re: [zfs-discuss] Strange random errors getting automatically repaired

2010-01-27 Thread Mark Bennett
Hi Giovanni, I have seen these while testing the mpt timeout issue, and on other systems during resilvering of failed disks and while running a scrub. Once so far on this test scrub, and several on yesterdays. I checked the iostat errors, and they weren't that high on that device, compared to

Re: [zfs-discuss] raidz using partitions

2010-01-27 Thread A Darren Dunham
On Wed, Jan 27, 2010 at 10:55:21AM -0800, Albert Frenz wrote: > hi there, > > maybe this is a stupid question, yet i haven't found an answer anywhere ;) > let say i got 3x 1,5tb hdds, can i create equal partitions out of each and > make a raid5 out of it? sure the safety would drop, but that is n

Re: [zfs-discuss] primarycache=off, secondarycache=all

2010-01-27 Thread Daniel Carosone
On Wed, Jan 27, 2010 at 02:47:47PM -0800, Christo Kutrovsky wrote: > In the case of a ZVOL with the following settings: > > primarycache=off, secondarycache=all > > How does the L2ARC get populated if the data never makes it to ARC ? > Is this even a valid configuration? It's valid, I assume, i

Re: [zfs-discuss] zero out block / sectors

2010-01-27 Thread Ragnar Sundblad
On 27 jan 2010, at 10.44, Björn JACKE wrote: > On 2010-01-25 at 08:31 -0600 Mike Gerdts sent off: >> You are missing the point. Compression and dedup will make it so that >> the blocks in the devices are not overwritten with zeroes. The goal >> is to overwrite the blocks so that a back-end stor

Re: [zfs-discuss] Instructions for ignoring ZFS write cache flushing on intelligent arrays

2010-01-27 Thread Cindy Swearingen
Hi Brad, You should see better performance on the dev box running 10/09 with the sd and ssd drivers as is because they should properly handle the SYNC_NV bit in this release. If you have determined that the 11/06 system is affected by this issue, then the best method is to set this parameter i

[zfs-discuss] primarycache=off, secondarycache=all

2010-01-27 Thread Christo Kutrovsky
In the case of a ZVOL with the following settings: primarycache=off, secondarycache=all How does the L2ARC get populated if the data never makes it to ARC ? Is this even a valid configuration? The reason I ask is I have iSCSI volumes for NTFS, I intend to use an SSD for l2arc. If something is
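The configuration in question is set per dataset; a sketch with a hypothetical zvol name (not from the post):

```
# Hypothetical zvol name.
zfs set primarycache=metadata tank/iscsivol  # cache only metadata in ARC
zfs set secondarycache=all tank/iscsivol     # data and metadata eligible for L2ARC
zfs get primarycache,secondarycache tank/iscsivol
```

Since the L2ARC is fed from blocks being evicted out of the ARC, primarycache=off may keep data blocks from ever reaching the L2ARC device; primarycache=metadata is the usual compromise.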

Re: [zfs-discuss] Instructions for ignoring ZFS write cache flushing on intelligent arrays

2010-01-27 Thread Brad
We're running 10/09 on the dev box but 11/06 is prodqa. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small

2010-01-27 Thread Cindy Swearingen
Hi Dick, Based on this message: cannot attach c5d0s0 to c4d0s0: device is too small c5d0s0 is the disk you are trying to attach so it must be smaller than c4d0s0. Is it possible that c5d0s0 is just partitioned so that the s0 is smaller than s0 on c4d0s0? On some disks, the default partitioni

Re: [zfs-discuss] ARC Ghost lists, why have them and how much ram is used to keep track of them? [long]

2010-01-27 Thread Christo Kutrovsky
I have the exact same questions. I am very interested in the answers to those.

[zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small

2010-01-27 Thread dick hoogendijk
cannot attach c5d0s0 to c4d0s0: device is too small So I guess I installed OpenSolaris onto the smallest disk. Now I cannot create a mirrored root, because the device is smaller. What is the best way to correct this except starting all over with two disks of the same size (which I don't have)? Do
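A sketch of how to confirm the size mismatch before deciding (device names are the ones from the post; the pool name is assumed, and the commands assume a Solaris system):

```
# Compare the slice sizes that zpool attach would use.
prtvtoc /dev/rdsk/c4d0s0
prtvtoc /dev/rdsk/c5d0s0
# If s0 on c5d0 was simply labeled smaller, grow it with format's
# partition menu, then retry the attach (pool name assumed):
zpool attach rpool c4d0s0 c5d0s0
```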

Re: [zfs-discuss] Performance of partition based SWAP vs. ZFS zvol SWAP

2010-01-27 Thread Miles Nordin
> "ag" == Andrew Gabriel writes: ag> is your working set size bigger than memory (thrashing), n...no, not...not exactly. :) ag> or is swapping likely to be just a once-off event or ag> infrequently repeated? once-off! or...well...repeated, every time the garbage collector runs

Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboa

2010-01-27 Thread Daniel Carosone
On Wed, Jan 27, 2010 at 02:34:29PM -0600, David Dyer-Bennet wrote: > Google is working heavily with the philosophy that things WILL fail, so > they plan for it, and have enough redundancy to survive it -- and then > save lots of money by not paying for premium components. I like that > appro

Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboa

2010-01-27 Thread Richard Elling
On Jan 27, 2010, at 12:34 PM, David Dyer-Bennet wrote: > > Google is working heavily with the philosophy that things WILL fail, so they > plan for it, and have enough redundancy to survive it -- and then save lots > of money by not paying for premium components. I like that approach. Yes, it d

Re: [zfs-discuss] backing this up

2010-01-27 Thread Daniel Carosone
On Wed, Jan 27, 2010 at 12:01:36PM -0800, Gregory Durham wrote: > Hello All, > I read through the attached threads and found a solution by a poster and > decided to try it. That may have been mine - good to know it helped, or at least started to. > The solution was to use 3 files (in my case I ma

Re: [zfs-discuss] Performance of partition based SWAP vs. ZFS zvol SWAP

2010-01-27 Thread Richard Elling
On Jan 27, 2010, at 12:25 PM, RayLicon wrote: > Ok ... > > Given that ... yes, we all know that swapping is bad (thanks for the > enlightenment). > > To Swap or not to Swap isn't related to this question, and besides, even if > you don't page swap, other mechanisms can still claim swap space,

Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboa

2010-01-27 Thread David Dyer-Bennet
On 1/27/2010 7:29 AM, Simon Breden wrote: And cables are here: http://supermicro.com/products/accessories/index.cfm http://64.174.237.178/products/accessories/index.cfm (DNS failed so I gave IP address version too) Then select 'cables' from the list. From the cables listed, search for 'IPASS to

Re: [zfs-discuss] Performance of partition based SWAP vs. ZFS zvol SWAP

2010-01-27 Thread Andrew Gabriel
LICON, RAY (ATTPB) wrote: Thanks for the reply. In many situations, the hardware design isn't up to me and budgets tend to dictate everything these days. True, nobody wants to swap, but the question is "if" you had to -- what design serves you best. Independent swap slices or putting it all unde

Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboa

2010-01-27 Thread David Dyer-Bennet
On 1/25/2010 6:23 PM, Simon Breden wrote: By mixing randomly purchased drives of unknown quality, people are taking unnecessary chances. But often, they refuse to see that, thinking that all drives are the same and they will all fail one day anyway... I would say, though, that buying differen

Re: [zfs-discuss] Performance of partition based SWAP vs. ZFS zvol SWAP

2010-01-27 Thread RayLicon
Ok ... Given that ... yes, we all know that swapping is bad (thanks for the enlightenment). To Swap or not to Swap isn't related to this question, and besides, even if you don't page swap, other mechanisms can still claim swap space, such as the tmp file system. The question is simple, "I

Re: [zfs-discuss] ARC not using all available RAM?

2010-01-27 Thread Christo Kutrovsky
I am interested in this as well. My machine is with 5 gb ram, and will soon have an 80gb SSD device. My free memory hovers around 750 Mb, and the arc around 3GB. This machine doesn't do anything other than iSCSI/CIFS, I wouldn't mind using some extra 500 Mb for caching. And this becomes especi

Re: [zfs-discuss] Instructions for ignoring ZFS write cache flushing on intelligent arrays

2010-01-27 Thread Cindy Swearingen
Brad, It depends on the Solaris release. What Solaris release are you running? Thanks, Cindy On 01/27/10 11:43, Brad wrote: Cindy, It does not list our SAN (LSI/STK/NetApp)...I'm confused about disabling cache from the wiki entries. Should we disable it by turning off zfs cache syncs via "

[zfs-discuss] Performance of partition based SWAP vs. ZFS zvol SWAP

2010-01-27 Thread LICON, RAY (ATTPB)
Has anyone done research into the performance of SWAP on the traditional partitioned based SWAP device as compared to a SWAP area set up on ZFS with a zvol? I can find no best practices for this issue. In the old days it was considered important to separate the swap devices onto individual disks

Re: [zfs-discuss] backing this up

2010-01-27 Thread Gregory Durham
Hello All, I read through the attached threads and found a solution by a poster and decided to try it. The solution was to use 3 files (in my case I made them sparse), I then created a raidz2 pool across these 3 files and started a zfs send | recv. The performance is horrible: it was 5.62 MB/s. When
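The file-backed setup being described can be sketched roughly as follows (paths, sizes and pool names are hypothetical; note that a 3-wide raidz2 keeps only one member's worth of usable space):

```
# Hypothetical paths, sizes and pool names.
mkfile -n 100g /backup/f1 /backup/f2 /backup/f3    # sparse backing files
zpool create backuppool raidz2 /backup/f1 /backup/f2 /backup/f3
zfs snapshot -r tank@backup1
zfs send -R tank@backup1 | zfs recv -d backuppool
```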

Re: [zfs-discuss] zfs destroy hangs machine if snapshot exists- workaround found

2010-01-27 Thread Tonmaus
> This sounds like yet another instance of > > 6910767 deleting large holey objects hangs other I/Os > > I have a module based on 130 that includes this fix > if you would like to try it. > > -tim Hi Tim, 6910767 seems to be about ZVOLs. The dataset here was not a ZVOL. I had a 1,4 TB ZVOL on

Re: [zfs-discuss] Performance of partition based SWAP vs. ZFS zvol SWAP

2010-01-27 Thread Andrew Gabriel
RayLicon wrote: Has anyone done research into the performance of SWAP on the traditional partitioned based SWAP device as compared to a SWAP area set up on ZFS with a zvol? I can find no best practices for this issue. In the old days it was considered important to separate the swap devices o

Re: [zfs-discuss] raidz using partitions

2010-01-27 Thread Albert Frenz
ok nice to know :) thank you very much for your quick answer

[zfs-discuss] zpool import failure

2010-01-27 Thread Amer Ather
IHAC who is having a problem importing a zpool. When trying to import the zpool, it fails with the error message: # zpool import -f Backup1 cannot import 'Backup1': invalid vdev configuration fma log reports a bad label on vdev_guid 0xd51633a1766882ad. fma errors: Jan 23 2010 09:18:27.374175886 erep
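One way to investigate a "bad label" report is to dump the vdev labels on each device the pool is supposed to contain; a hypothetical transcript (device path invented here):

```
# Hypothetical device path; repeat for every device in the pool.
zdb -l /dev/dsk/c1t0d0s0
# Each device carries four copies of its label; all four should agree
# on pool_guid and the per-vdev guid. Stale or unreadable labels are
# consistent with the "bad label on vdev_guid ..." fma event.
```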

Re: [zfs-discuss] raidz using partitions

2010-01-27 Thread Thomas Burgess
On Wed, Jan 27, 2010 at 1:55 PM, Albert Frenz wrote: > hi there, > > maybe this is a stupid question, yet i haven't found an answer anywhere ;) > let say i got 3x 1,5tb hdds, can i create equal partitions out of each and > make a raid5 out of it? sure the safety would drop, but that is not that >

[zfs-discuss] raidz using partitions

2010-01-27 Thread Albert Frenz
hi there, maybe this is a stupid question, yet i haven't found an answer anywhere ;) let say i got 3x 1,5tb hdds, can i create equal partitions out of each and make a raid5 out of it? sure the safety would drop, but that is not that important to me. with roughly 500gb partitions and the raid5 fo
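Back-of-the-envelope arithmetic for the layout being proposed (disk and partition sizes taken from the post; the single-vdev layout is an assumption):

```shell
# 3 disks, each cut into 3 partitions of roughly 500 GB, all nine
# partitions in one single-parity raidz vdev (assumed layout).
members=9
part_gb=500
usable_gb=$(( (members - 1) * part_gb ))
echo "usable: ${usable_gb} GB"
# Caveat: one failed physical disk removes three members at once,
# which exceeds raidz1's one-device tolerance.
```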

Re: [zfs-discuss] Instructions for ignoring ZFS write cache flushing on intelligent arrays

2010-01-27 Thread Brad
Cindy, It does not list our SAN (LSI/STK/NetApp)...I'm confused about disabling cache from the wiki entries. Should we disable it by turning off zfs cache syncs via "echo zfs_nocacheflush/W0t1 | mdb -kw " or specify it by storage device via the sd.conf method where the array ignores cache flus
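For reference, the system-wide knob quoted above (set live with mdb) corresponds to this /etc/system fragment; it disables cache-flush requests for every pool on the host, so it is only safe when all pools live on arrays with nonvolatile caches:

```
* /etc/system -- system-wide; affects all pools and all devices.
set zfs:zfs_nocacheflush = 1
```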

Re: [zfs-discuss] Building big cheap storage system. What hardware to use?

2010-01-27 Thread Chris Du
I have Supermicro 936E1 (X28 expander chip) and LSI 1068 HBA. I never got timeout issue but I'm using Seagate 15K.7 SAS. SATA might be different as it handles error and io timeout differently. If you can wait, better wait for 6Gb SAS expander based product. BTW. I'd get Supermicro X8DTH-6F moth

Re: [zfs-discuss] zfs destroy hangs machine if snapshot exists- workaround found

2010-01-27 Thread Tim Haley
On 01/27/10 04:39 AM, erik.ableson wrote: On 27 janv. 2010, at 12:10, Georg S. Duck wrote: Hi, I was suffering for weeks from the following problem: a zfs dataset contained an automatic snapshot (monthly) that used 2.8 TB of data. The dataset was deprecated, so I chose to destroy it after

Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboard

2010-01-27 Thread Donald Murray, P.Eng.
Hi David, On Mon, Jan 25, 2010 at 11:16 AM, David Dyer-Bennet wrote: > My current home fileserver (running Open Solaris 111b and ZFS) has an ASUS > M2N-SLI DELUXE motherboard.  This has 6 SATA connections, which are > currently all in use (mirrored pair of 80GB for system zfs pool, two > mirrors

Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboa

2010-01-27 Thread Simon Breden
If you choose the AOC-USAS-L8i controller route, don't worry too much about the exotic looking nature of these SAS/SATA controllers. These controllers drive SAS drives and also SATA drives. As you will be using SATA drives, you'll just get cables that plug into the card. The card has 2 ports. Yo

Re: [zfs-discuss] zero out block / sectors

2010-01-27 Thread Björn JACKE
On 2010-01-27 at 09:50 + Darren J Moffat sent off: > The whole point of the original question wasn't about consumers of > ZFS but where ZFS is the consumer of block storage provided by > something else that expects to see "zeros" on disk. > > This thread is about "thin" provisioning *to* ZFS n

Re: [zfs-discuss] zfs destroy hangs machine if snapshot exists- workaround found

2010-01-27 Thread Georg S. Duck
> Server responds to pings, but that's it. All iSCSI, NFS and ssh connections > are cut. That's consistent with my findings, adding that SMB is cut as well. In one vain attempt to destroy the data...@snapshot I got a "[ID 224711 kern.warning] WARNING: Memory pressure: TCP defensive mode on". If

[zfs-discuss] Strange random errors getting automatically repaired

2010-01-27 Thread gtirloni
Hello, Has anyone ever seen vdevs getting removed and added back to the pool very quickly? That seems to be what's happening here. This started happening a few days ago on dozens of machines at different locations. They are running OpenSolaris b111 and a few b126. Could this be bi

Re: [zfs-discuss] zfs destroy hangs machine if snapshot exists- workaround found

2010-01-27 Thread erik.ableson
On 27 janv. 2010, at 12:10, Georg S. Duck wrote: > Hi, > I was suffering for weeks from the following problem: > a zfs dataset contained an automatic snapshot (monthly) that used 2.8 TB of > data. The dataset was deprecated, so I chose to destroy it after I had > deleted some files; eventually

[zfs-discuss] zfs destroy hangs machine if snapshot exists- workaround found

2010-01-27 Thread Georg S. Duck
Hi, I was suffering for weeks from the following problem: a zfs dataset contained an automatic snapshot (monthly) that used 2.8 TB of data. The dataset was deprecated, so I chose to destroy it after I had deleted some files; eventually it was completely blank besides the snapshot that still lock

Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboa

2010-01-27 Thread Mirko
I use a Sil3132 based card. It's a 2-port PCI-e 1x card supported natively by OpenSolaris 2009.06 and the latest Solaris 10. It's cheap ($25) and supports SATA 2. I use it for my boot disk.

Re: [zfs-discuss] zero out block / sectors

2010-01-27 Thread Darren J Moffat
On 27/01/2010 09:44, Björn JACKE wrote: On 2010-01-25 at 08:31 -0600 Mike Gerdts sent off: You are missing the point. Compression and dedup will make it so that the blocks in the devices are not overwritten with zeroes. The goal is to overwrite the blocks so that a back-end storage device or b

Re: [zfs-discuss] zero out block / sectors

2010-01-27 Thread Björn JACKE
On 2010-01-25 at 08:31 -0600 Mike Gerdts sent off: > You are missing the point. Compression and dedup will make it so that > the blocks in the devices are not overwritten with zeroes. The goal > is to overwrite the blocks so that a back-end storage device or > back-end virtualization platform can
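As the quoted point implies, zero-filling only reaches the back-end device when compression and dedup are off for the dataset. A minimal, hypothetical sketch of the technique itself (1 MiB here; in practice the file would grow until free space is nearly exhausted):

```shell
# Write zeros into a temporary file, sync them to stable storage,
# then delete the file to hand the zeroed blocks back to the filesystem.
tmpfile=$(mktemp /tmp/zerofill.XXXXXX)
dd if=/dev/zero of="$tmpfile" bs=1048576 count=1 2>/dev/null
size=$(wc -c < "$tmpfile")
echo "wrote $size bytes of zeros"
sync
rm -f "$tmpfile"
```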

[zfs-discuss] Building big cheap storage system. What hardware to use?

2010-01-27 Thread borov
Hello. We need big "cheap" storage. Looking at Supermicro systems. Something based on the SC846E1-R900 case http://www.supermicro.com/products/chassis/4U/846/SC846E1-R900.cfm with 24 disk bays. This case comes with a 3 Gbit LSI SASX36 expander. But the problem with LSI based HBA timeouts really confuses me