Re: [zfs-discuss] Zpool resize
On 01.04.2011 14:50, Richard Elling wrote:
> On Apr 1, 2011, at 4:23 AM, For@ll wrote:
>> Hi,
>>
>> A LUN is connected to Solaris 10u9 from a NetApp FAS2020a over iSCSI.
>> I changed the LUN size on the NetApp, and format sees the new value,
>> but the zpool still has the old value. I tried zpool export and zpool
>> import, but it didn't resolve my problem.
>>
>> bash-3.00# format
>> Searching for disks...done
>>
>> AVAILABLE DISK SELECTIONS:
>>        0. c0d1 <DEFAULT cyl 6523 alt 2 hd 255 sec 63>
>>           /pci@0,0/pci-ide@1,1/ide@0/cmdk@1,0
>>        1. c2t1d0 <NETAPP-LUN-7340-22.00GB>
>>           /iscsi/d...@iqn.1992-08.com.netapp%3Asn.13510595203E9,0
>> Specify disk (enter its number): ^C
>>
>> bash-3.00# zpool list
>> NAME   SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
>> TEST  9,94G    93K  9,94G   0%  ONLINE  -
>>
>> What can I do so that zpool shows the new value?
>
> zpool set autoexpand=on TEST
> zpool set autoexpand=off TEST
>  -- richard

I tried your suggestion, but no effect.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
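For readers following along: the sequence Richard suggests, plus a check of what the OS itself sees for the device, might look like the transcript below. This is a sketch using the pool and device names from this thread (TEST, c2t1d0); it requires a live Solaris system with the pool imported, so treat it as illustrative rather than directly runnable.

```shell
# Allow the pool to grow automatically when the underlying LUN grows.
zpool set autoexpand=on TEST

# Confirm the property took effect.
zpool get autoexpand TEST

# Check what size the pool now reports.
zpool list TEST

# If the pool still reports the old size, compare against the size the
# OS has recorded in the disk label (whole-disk EFI label assumed):
prtvtoc /dev/rdsk/c2t1d0s0
```

If `prtvtoc` still shows the old capacity, the disk label itself has not picked up the grown LUN, which points at the relabeling step discussed later in the thread.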
Re: [zfs-discuss] Zpool resize
On 04.04.2011 11:56, eXeC001er wrote:
> Try to export and then import your volume.

I have already done that, but the result is the same: no effect.
Re: [zfs-discuss] Zpool resize
On 04.04.2011 12:44, Fajar A. Nugraha wrote:
> On Mon, Apr 4, 2011 at 4:49 PM, For@ll <for...@stalowka.info> wrote:
>>> What can I do so that zpool shows the new value?
>>>
>>> zpool set autoexpand=on TEST
>>> zpool set autoexpand=off TEST
>>>  -- richard
>>
>> I tried your suggestion, but no effect.
>
> Did you modify the partition table?
>
> IIRC, if you pass a DISK to zpool create, it will create a
> partition/slice on it, either with an SMI label (the default for
> rpool) or EFI (the default for other pools).
>
> When the disk size changes (like when you change the LUN size on the
> storage node side), you PROBABLY need to resize the partition/slice
> as well. When I tested with OpenIndiana b148, simply setting zpool
> set autoexpand=on was enough (I tested with Xen, and an OpenIndiana
> reboot was required).
>
> Again, you might need to both set autoexpand=on and resize the
> partition/slice. As a first step, try choosing c2t1d0 in format and
> see what the size of its first slice is.

I ran format and changed the type to auto-configure, and now I see the
new value when I choose partition -> print, but when I exit format and
reboot, the old value remains. How can I write the new settings?
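The relabeling step Fajar describes can be sketched as below. A minimal sketch, assuming a whole-disk EFI label and the pool/device names from this thread; the `zpool online -e` expand option exists in later Solaris 10 updates and in OpenIndiana, but verify availability against your exact release.

```shell
# In the interactive format utility: select c2t1d0, run
# "type" -> "0. Auto configure" to re-read the grown geometry, and then
# run "label" to WRITE the new label to disk. Without the "label" step,
# the auto-configured size is discarded when you exit format, which
# matches the "old value after reboot" symptom described above.
format c2t1d0

# Then ask ZFS to expand onto the newly labeled space:
zpool online -e TEST c2t1d0

# Or let expansion happen automatically:
zpool set autoexpand=on TEST
zpool list TEST
```

The key detail is that auto-configure only changes the in-memory geometry; `label` is what persists it.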
[zfs-discuss] Zpool resize
Hi,

A LUN is connected to Solaris 10u9 from a NetApp FAS2020a over iSCSI.
I changed the LUN size on the NetApp, and format sees the new value,
but the zpool still has the old value. I tried zpool export and zpool
import, but it didn't resolve my problem.

bash-3.00# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0d1 <DEFAULT cyl 6523 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@1,0
       1. c2t1d0 <NETAPP-LUN-7340-22.00GB>
          /iscsi/d...@iqn.1992-08.com.netapp%3Asn.13510595203E9,0
Specify disk (enter its number): ^C

bash-3.00# zpool list
NAME   SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
TEST  9,94G    93K  9,94G   0%  ONLINE  -

What can I do so that zpool shows the new value?

Albert
Re: [zfs-discuss] Zpool resize
On 01.04.2011 14:28, Mailing Lists wrote:
>> Hi,
>>
>> A LUN is connected to Solaris 10u9 from a NetApp FAS2020a over iSCSI.
>> I changed the LUN size on the NetApp, and format sees the new value,
>> but the zpool still has the old value. I tried zpool export and zpool
>> import, but it didn't resolve my problem.
>>
>> bash-3.00# format
>> Searching for disks...done
>>
>> AVAILABLE DISK SELECTIONS:
>>        0. c0d1 <DEFAULT cyl 6523 alt 2 hd 255 sec 63>
>>           /pci@0,0/pci-ide@1,1/ide@0/cmdk@1,0
>>        1. c2t1d0 <NETAPP-LUN-7340-22.00GB>
>>           /iscsi/d...@iqn.1992-08.com.netapp%3Asn.13510595203E9,0
>> Specify disk (enter its number): ^C
>>
>> bash-3.00# zpool list
>> NAME   SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
>> TEST  9,94G    93K  9,94G   0%  ONLINE  -
>>
>> What can I do so that zpool shows the new value?
>>
>> Albert
>
> Tried a scrub?

I tried, no effect.
Re: [zfs-discuss] ZFS snapshot limit?
On 2010-12-01 15:19, Menno Lageman wrote:
> f...@ll wrote:
>> Hi,
>>
>> I must send a ZFS snapshot from one server to another. The snapshot
>> is 130 GB. Does ZFS have any limit on the size it can send?
>
> If you are sending the snapshot to another zpool (i.e. using
> 'zfs send | zfs recv'), then no, there is no limit. If, however, you
> send the snapshot to a file on the other system (i.e.
> 'zfs send > somefile'), then you are limited by what the file system
> you are creating the file on supports.
>
> Menno

Hi,

My situation is the first option: I send the snapshot to another
server using zfs send | zfs recv. The problem is that after the data
transfer completes and the machine reboots, the zpool has errors or is
in the FAULTED state. The first server is physical; the second is a
virtual machine running under XenServer 5.6.

f...@ll
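For completeness, the "first option" described above (zfs send | zfs recv between two machines) is typically run over ssh. A sketch with hypothetical dataset and host names (tank/data@migrate, backup/data, recvhost are illustrative, not from the thread), followed by an integrity check that may help narrow down the FAULTED-after-reboot problem:

```shell
# On the sending host: stream a full snapshot to the receiving pool.
zfs send tank/data@migrate | ssh recvhost zfs receive backup/data

# On the receiving host: verify the pool before and after rebooting,
# to distinguish a bad transfer from storage problems under the VM.
zpool scrub backup
zpool status -v backup
```

If the scrub is clean before the reboot but the pool faults afterwards, the send/recv itself is unlikely to be at fault; suspect the virtual disk layer under XenServer (e.g. write caching or device renaming across reboots).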
[zfs-discuss] ZFS snapshot limit?
Hi,

I must send a ZFS snapshot from one server to another. The snapshot is
130 GB. Does ZFS have any limit on the size it can send?

f...@ll