Re: [zfs-discuss] 3TB HDD in ZFS

2010-12-06 Thread Roy Sigurd Karlsbakk
- Original Message - Hi, Anyone who has experience with 3TB HDDs in ZFS? Can Solaris recognize this new HDD? I haven't tested them, but we're using multi-terabyte iSCSI volumes now, so I don't really see what could be different. The only possible issue I know of is that 3TB

Re: [zfs-discuss] 3TB HDD in ZFS

2010-12-06 Thread Fred Liu
I haven't tested them, but we're using multi-terabyte iSCSI volumes now, so I don't really see what could be different. The only possible issue I know of is that 3TB drives use 4k sectors, which might not be optimal in all environments. Vennlige hilsener / Best regards 3TB HDDs need UEFI, not

[zfs-discuss] snaps lost in space?

2010-12-06 Thread Joost Mulders
Hi, I have some space-allocation output which I can't explain. I hope someone can point me in the right direction. The allocation of my home filesystem looks like this: jo...@onix$ zfs list -o space p0/home NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD p0/home 31.0G 156G
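
A note for anyone trying to reconcile these numbers: USEDSNAP is the space that would be freed if all snapshots of the dataset were destroyed, while each individual snapshot's USED only counts blocks unique to that snapshot, so the per-snapshot figures can legitimately add up to far less than USEDSNAP. A minimal way to see the per-snapshot view, assuming p0/home is the dataset in question:

  zfs list -t snapshot -r -o name,used,referenced p0/home

Blocks shared by two or more snapshots appear in USEDSNAP but in none of the individual USED columns.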

Re: [zfs-discuss] 3TB HDD in ZFS

2010-12-06 Thread taemun
On 6 December 2010 21:43, Fred Liu fred_...@issi.com wrote: 3TB HDDs need UEFI, not the traditional BIOS, and OS support. Fred Fred: http://www.anandtech.com/show/3858/the-worlds-first-3tb-hdd-seagate-goflex-desk-3tb-review/2 Namely: a feature of GPT is 64-bit LBA support. With 64-bit LBAs
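
The arithmetic behind that limit, as a quick sketch: with 512-byte sectors, a 32-bit LBA can address at most

  echo $((2**32 * 512))
  2199023255552        # = 2 TiB

so drives beyond 2TiB need the 64-bit LBAs that GPT provides to be fully addressable.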

Re: [zfs-discuss] 3TB HDD in ZFS

2010-12-06 Thread Sandon Van Ness
On 12/06/2010 05:17 AM, taemun wrote: On 6 December 2010 21:43, Fred Liu fred_...@issi.com wrote: 3TB HDDs need UEFI, not the traditional BIOS, and OS support. Fred Fred:

[zfs-discuss] accidentally added a drive?

2010-12-06 Thread chris vanderhousen
I'm trying to individually upgrade drives in my raidz configuration, but I accidentally added my replacement drive to the top level of the pool instead of to the raidz1 under it. Right now things look like this: NAME STATE READ WRITE CKSUM tank DEGRADED 0 0

Re: [zfs-discuss] accidentally added a drive?

2010-12-06 Thread chris vanderhousen
here it is, properly formatted:

  NAME        STATE     READ WRITE CKSUM
  tank        DEGRADED     0     0     0
    ad4s1d    ONLINE       0     0     0
    raidz1    DEGRADED     0     0     0
      ad6s1d  ONLINE       0     0     0
      ad8s1d  UNAVAIL      0     0     0  cannot open

Re: [zfs-discuss] accidentally added a drive?

2010-12-06 Thread Chris Gerhard
Alas, you are hosed. There is at the moment no way to shrink a pool, which is what you would now need to be able to do. Back up and restore, I am afraid.

Re: [zfs-discuss] accidentally added a drive?

2010-12-06 Thread Tomas Ögren
On 05 December, 2010 - Chris Gerhard sent me these 0,3K bytes: Alas you are hosed. There is at the moment no way to shrink a pool which is what you now need to be able to do. back up and restore I am afraid. .. or add a mirror to that drive, to keep some redundancy. /Tomas -- Tomas
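
A sketch of what that suggestion looks like in practice, using the device names from the earlier status output and a hypothetical spare disk name for the new mirror half:

  zpool attach tank ad4s1d ad10s1d   # ad10s1d is a placeholder for whatever disk is free
  zpool status tank                  # ad4s1d should now resilver into a two-way mirror

This does not undo the accidental top-level vdev; it only restores redundancy for it. Removing it still means destroying and recreating the pool.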

Re: [zfs-discuss] accidentally added a drive?

2010-12-06 Thread Freddie Cash
On Mon, Dec 6, 2010 at 7:49 AM, Tomas Ögren st...@acc.umu.se wrote: On 05 December, 2010 - Chris Gerhard sent me these 0,3K bytes: Alas you are hosed. There is at the moment no way to shrink a pool which is what you now need to be able to do. back up and restore I am afraid. .. or add a

Re: [zfs-discuss] snaps lost in space?

2010-12-06 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Joost Mulders This tells me that *86,7G* is used by *snapshots* of this filesystem. However, when I look at the space allocation of the snapshots, I don't see the 86,7G back! jo...@onix$

Re: [zfs-discuss] snaps lost in space?

2010-12-06 Thread Joost Mulders
Thanks for the pointer. AFAIK there are no clones involved. The output of zdb -d p0 is below. I found no differences between that and the output of the zfs list command. r...@onix# zdb -d p0 | egrep 'p0\/home' | sort Dataset p0/home [ZPL], ID 33, cr_txg 432, 69.7G, 192681 objects Dataset p0/h...@s1

[zfs-discuss] How to safely parse zpool get all output?

2010-12-06 Thread Peter Taps
Folks, Command zpool get all poolName does not provide any option to generate parsable output. The returned output contains 4 fields - name, property, value and source. These fields seem to be separated by spaces. I am wondering if it is safe to assume that there are no spaces in the field

Re: [zfs-discuss] How to safely parse zpool get all output?

2010-12-06 Thread Roy Sigurd Karlsbakk
Command zpool get all poolName does not provide any option to generate parsable output. The returned output contains 4 fields - name, property, value and source. These fields seem to be separated by spaces. I am wondering if it is safe to assume that there are no spaces in the field values.

[zfs-discuss] iops...

2010-12-06 Thread Roy Sigurd Karlsbakk
Hi all The numbers I've heard say the IOPS of a raidzN vdev should be about the IOPS of the slowest drive in the set. While this might sound like a good baseline, I tend to disagree. I've been doing some testing on some raidz2 volumes with various sizes and similar
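
For reference, the commonly cited model behind that rule of thumb, with purely illustrative numbers rather than measurements from this thread: each random read must touch every data disk in a raidz stripe, so a raidz vdev delivers roughly the random-read IOPS of one member disk, and a pool scales with the number of top-level vdevs. Assuming roughly 100 IOPS per 7200rpm disk:

  24 disks as one 24-wide raidz2:   ~1 vdev  x 100 = ~100 random-read IOPS
  24 disks as four 6-wide raidz2s:  ~4 vdevs x 100 = ~400 random-read IOPS

Sequential throughput behaves differently, which is probably where measured results diverge from the model.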

Re: [zfs-discuss] How to safely parse zpool get all output?

2010-12-06 Thread Peter Taps
Hi, Thank you for your help. I actually had the script working. However, I just wanted to make sure that spaces are not permitted within the field value itself. Otherwise, the regular expression would break. Regards, Peter

Re: [zfs-discuss] iops...

2010-12-06 Thread Eric D. Mudama
On Mon, Dec 6 at 23:22, Roy Sigurd Karlsbakk wrote: Hi all The numbers I've heard say the IOPS of a raidzN vdev should be about the IOPS of the slowest drive in the set. While this might sound like a good baseline, I tend to disagree. I've been doing some testing on

Re: [zfs-discuss] How to safely parse zpool get all output?

2010-12-06 Thread Richard Elling
Spaces are permitted in the value field. We (myself and Nexenta) use them extensively. -- richard On Dec 6, 2010, at 1:40 PM, Peter Taps wrote: Folks, Command zpool get all poolName does not provide any option to generate parsable output. The returned output contains 4 fields - name,
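
Given that, here is a sketch of a parse that survives spaces in the VALUE column: take the first two fields and the last field as NAME, PROPERTY and SOURCE, and treat everything in between as the value. This assumes the SOURCE column is always a single word, which holds for zpool properties (default, local or -), unlike zfs get, where it can read "inherited from ...". 'tank' is a placeholder pool name:

  zpool get all tank | awk 'NR > 1 {
      name = $1; prop = $2; src = $NF; val = ""
      for (i = 3; i < NF; i++)              # fields 3..NF-1 form the (possibly multi-word) value
          val = val (i > 3 ? " " : "") $i
      printf "%s|%s|%s|%s\n", name, prop, val, src
  }'

This emits pipe-separated records, so pick a delimiter you know cannot appear in your values.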

[zfs-discuss] ZFS ACL's broken over NFS

2010-12-06 Thread Paul B. Henson
As is altogether far too common an occurrence, we were having a problem where a file was not inheriting the correct ACL, but rather a horribly munged one, resulting in incorrect permissions and security problems. It appeared something was chmod'ing the file after creation, but despite best efforts
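
Two checks usually worth pasting into a thread like this, with tank/export/home standing in for the real dataset: what the inheritance policy is set to, and what the resulting ACL actually looks like on disk:

  zfs get aclinherit tank/export/home
  ls -dV /tank/export/home/newdir      # Solaris ls -V prints the full NFSv4 ACL

If the ACL comes out right locally but wrong when the file is created over NFS, the usual suspect is the client issuing a chmod after create, which rewrites the inherited ACL.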

Re: [zfs-discuss] 3TB HDD in ZFS

2010-12-06 Thread Brandon High
On Sun, Dec 5, 2010 at 9:35 PM, Fred Liu fred_...@issi.com wrote: Anyone who has experience with 3TB HDDs in ZFS? Can Solaris recognize this new HDD? There shouldn't be any problems using a 3TB drive with Solaris, so long as you're using a 64-bit kernel. Recent versions of ZFS should properly

Re: [zfs-discuss] 3TB HDD in ZFS

2010-12-06 Thread taemun
On 7 December 2010 13:25, Brandon High bh...@freaks.com wrote: There shouldn't be any problems using a 3TB drive with Solaris, so long as you're using a 64-bit kernel. Recent versions of zfs should properly recognize the 4k sector size as well. I think you'll find that these 3TB, 4KiB

Re: [zfs-discuss] 3TB HDD in ZFS

2010-12-06 Thread Tim Cook
It's based on a jumper on most new drives. On Dec 6, 2010 8:41 PM, taemun tae...@gmail.com wrote: On 7 December 2010 13:25, Brandon High bh...@freaks.com wrote: There shouldn't be any problems using a 3TB drive with Solaris, so long as you're using a 64-bit kernel. Recent versions of zfs

Re: [zfs-discuss] 3TB HDD in ZFS

2010-12-06 Thread Brandon High
On Mon, Dec 6, 2010 at 6:40 PM, taemun tae...@gmail.com wrote: I think you'll find that these 3TB, 4KiB physical sector drives are still exporting logical sectors of 512B (this is what Anandtech has indicated, anyway). ZFS assumes that the drive's logical sectors are directly mapped to physical
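
To check what a given pool ended up with, the allocation shift is visible in the cached pool configuration; ashift=9 means 512-byte allocation units, ashift=12 means 4KiB. 'tank' is a placeholder pool name:

  zdb -C tank | grep ashift

A 4KiB-physical drive that exports 512-byte logical sectors will normally come up as ashift=9, which is where the read-modify-write penalty comes from.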

Re: [zfs-discuss] 3TB HDD in ZFS

2010-12-06 Thread taemun
On 7 December 2010 13:55, Tim Cook t...@cook.ms wrote: It's based on a jumper on most new drives. Can you back that up with anything? I've never seen anything but requests for a jumper that forces the firmware to export 4KiB sectors. WD EARS at launch provided the ability to force the

Re: [zfs-discuss] Problem with a failed replace.

2010-12-06 Thread Curtis Schiewek
Hi Mark, I've tried running zpool attach media ad24 ad12 (ad12 being the new disk) and I get no response. I tried letting the command run for an extended period of time and nothing happens. Thoughts? On Fri, Dec 3, 2010 at 2:09 PM, Mark J Musante mark.musa...@oracle.com wrote: On Fri, 3 Dec
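
Before retrying, two sanity checks that seem worthwhile here, using the pool name from the thread: make sure the earlier replace isn't still sitting there as a "replacing" vdev or an active resilver, and confirm whether the attach was ever recorded at all:

  zpool status -v media    # look for a lingering "replacing" group or a resilver in progress
  zpool history media      # shows whether the replace/attach commands were actually accepted

If an old replace is stuck, it generally has to be resolved (for example by detaching its failed half) before a new attach will make progress.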