Re: [zfs-discuss] raidz DEGRADED state

2011-05-10 Thread Krzys
Ah, did not see your follow up. Thanks.

Chris


On Thu, 30 Nov 2006, Cindy Swearingen wrote:

 Sorry, Bart is correct:

 If new_device is not specified, it defaults to old_device.
 This form of replacement is useful after an existing disk
 has failed and has been physically replaced. In this case,
 the new disk may have the same /dev/dsk path as the old
 device, even though it is actually a different disk.
 ZFS recognizes this.

 cs

 Cindy Swearingen wrote:
 One minor comment is to identify the replacement drive, like this:
 
 # zpool replace mypool2 c3t6d0 c3t7d0
 
 Otherwise, zpool will error...
 
 cs
 
 Bart Smaalders wrote:
 
 Krzys wrote:
 
 
  my drive did go bad on me, how do I replace it? I am running Solaris 10 
  U2. (By the way, I thought U3 would be out in November; will it be out 
  soon? Does anyone know?)
 
 
 [11:35:14] server11: /export/home/me  zpool status -x
   pool: mypool2
  state: DEGRADED
  status: One or more devices could not be opened.  Sufficient replicas exist for
          the pool to continue functioning in a degraded state.
 action: Attach the missing device and online it using 'zpool online'.
see: http://www.sun.com/msg/ZFS-8000-D3
  scrub: none requested
 config:

  NAME        STATE     READ WRITE CKSUM
  mypool2     DEGRADED     0     0     0
    raidz     DEGRADED     0     0     0
      c3t0d0  ONLINE       0     0     0
      c3t1d0  ONLINE       0     0     0
      c3t2d0  ONLINE       0     0     0
      c3t3d0  ONLINE       0     0     0
      c3t4d0  ONLINE       0     0     0
      c3t5d0  ONLINE       0     0     0
      c3t6d0  UNAVAIL      0   679     0  cannot open
 
 errors: No known data errors
 
 
 
 Shut down the machine, replace the drive, reboot
 and type:
 
 zpool replace mypool2 c3t6d0
 
 
 On earlier versions of ZFS I found it useful to do this
 at the login prompt; it seemed fairly memory intensive.
 
 - Bart
 
 


[zfs-discuss] zfs panic

2009-01-12 Thread Krzys
Any idea what could cause my system to panic? My system gets rebooted daily at 
various times. Very strange, but it's pointing to ZFS. I have U6 with all the latest 
patches.


Jan 12 05:47:12 chrysek unix: [ID 836849 kern.notice]
Jan 12 05:47:12 chrysek ^Mpanic[cpu1]/thread=30002c8d4e0:
Jan 12 05:47:12 chrysek unix: [ID 799565 kern.notice] BAD TRAP: type=28 
rp=2a10285c790 addr=7b76a0a8 mmu_fsr=0
Jan 12 05:47:12 chrysek unix: [ID 10 kern.notice]
Jan 12 05:47:12 chrysek unix: [ID 839527 kern.notice] zfs:
Jan 12 05:47:12 chrysek unix: [ID 983713 kern.notice] integer divide zero trap:
Jan 12 05:47:12 chrysek unix: [ID 381800 kern.notice] addr=0x7b76a0a8
Jan 12 05:47:12 chrysek unix: [ID 101969 kern.notice] pid=18941, pc=0x7b76a0a8, 
sp=0x2a10285c031, tstate=0x4480001606, context=0x1
Jan 12 05:47:12 chrysek unix: [ID 743441 kern.notice] g1-g7: 7b76a07c, 1, 0, 0, 
241b2a, 16, 30002c8d4e0
Jan 12 05:47:12 chrysek unix: [ID 10 kern.notice]
Jan 12 05:47:12 chrysek genunix: [ID 723222 kern.notice] 02a10285c4b0 
unix:die+9c (28, 2a10285c790, 7b76a0a8, 0, 2a10285c570, 1)
Jan 12 05:47:12 chrysek genunix: [ID 179002 kern.notice]   %l0-3: 
000a 0028 000a 0801
Jan 12 05:47:12 chrysek   %l4-7: 02a10285cd18 02a10285cd3c 
0006 0109a000
Jan 12 05:47:13 chrysek genunix: [ID 723222 kern.notice] 02a10285c590 
unix:trap+644 (2a10285c790, 1, 0, 0, 180c000, 30002c8d4e0)
Jan 12 05:47:13 chrysek genunix: [ID 179002 kern.notice]   %l0-3: 
 06002c5b9130 0028 0600118fa088
Jan 12 05:47:13 chrysek   %l4-7:  00db 
004480001606 00010200
Jan 12 05:47:13 chrysek genunix: [ID 723222 kern.notice] 02a10285c6e0 
unix:ktl0+48 (0, 70021d50, 349981, 180c000, 10394e8, 2a10285c8e8)
Jan 12 05:47:13 chrysek genunix: [ID 179002 kern.notice]   %l0-3: 
0007 1400 004480001606 0101bedc
Jan 12 05:47:13 chrysek   %l4-7: 0600110bd630 0600110be400 
 02a10285c790
Jan 12 05:47:13 chrysek genunix: [ID 723222 kern.notice] 02a10285c830 
zfs:spa_get_random+c (0, 0, d15c4746ef9ddd65, 0, , 8)
Jan 12 05:47:13 chrysek genunix: [ID 179002 kern.notice]   %l0-3: 
01ff 7b772a00 000e 
Jan 12 05:47:13 chrysek   %l4-7: 00020801 ee00 
060031b23680 
Jan 12 05:47:13 chrysek genunix: [ID 723222 kern.notice] 02a10285c8f0 
zfs:vdev_mirror_map_alloc+b8 (60012ec20e0, 30006a9a3c8, 1, 30006a9a370, 0, 
ff)
Jan 12 05:47:13 chrysek genunix: [ID 179002 kern.notice]   %l0-3: 
   
Jan 12 05:47:13 chrysek   %l4-7:   
 0600112cc080
Jan 12 05:47:14 chrysek genunix: [ID 723222 kern.notice] 02a10285c9a0 
zfs:vdev_mirror_io_start+4 (30006a9a370, 0, 0, 30006a9a3c8, 0, 7b772bc4)
Jan 12 05:47:14 chrysek genunix: [ID 179002 kern.notice]   %l0-3: 
 0001  7b7a4688
Jan 12 05:47:14 chrysek   %l4-7: 7b7a4400  
 
Jan 12 05:47:14 chrysek genunix: [ID 723222 kern.notice] 02a10285ca80 
zfs:zio_execute+74 (30006a9a370, 7b783f70, 78, f, 1, 70496c00)
Jan 12 05:47:14 chrysek genunix: [ID 179002 kern.notice]   %l0-3: 
030083edb728 00c44002 00038000 70496d88
Jan 12 05:47:14 chrysek   %l4-7: 00efc006  
0801 8000
Jan 12 05:47:14 chrysek genunix: [ID 723222 kern.notice] 02a10285cb30 
zfs:arc_read+724 (1, 600112cc080, 30075baba00, 200, 0, 300680b9288)
Jan 12 05:47:14 chrysek genunix: [ID 179002 kern.notice]   %l0-3: 
0001 70496060 0006 0801
Jan 12 05:47:14 chrysek   %l4-7: 02a10285cd18  
030083edb728 02a10285cd3c
Jan 12 05:47:14 chrysek genunix: [ID 723222 kern.notice] 02a10285cc40 
zfs:dbuf_prefetch+13c (60035ce1050, 70496c00, 30075baba00, 0, 0, 3007578b0a0)
Jan 12 05:47:14 chrysek genunix: [ID 179002 kern.notice]   %l0-3: 
000a 0028 000a 0801
Jan 12 05:47:14 chrysek   %l4-7: 02a10285cd18 02a10285cd3c 
0006 
Jan 12 05:47:15 chrysek genunix: [ID 723222 kern.notice] 02a10285cd50 
zfs:dmu_zfetch_fetch+2c (60035ce1050, 8b67, 100, 100, cd, 8c34)
Jan 12 05:47:15 chrysek genunix: [ID 179002 kern.notice]   %l0-3: 
7049d098 4000 7049d000 7049d188
Jan 12 05:47:15 chrysek   %l4-7: 06d8 00db 
7049d178 7049d0f8
Jan 12 05:47:15 chrysek genunix: [ID 723222 kern.notice] 02a10285ce00 
zfs:dmu_zfetch_dofetch+b8 (60035ce12a0, 6002f87c260, 8b67, 8a67, 8b68, 0)
Jan 12 05:47:15 chrysek genunix: [ID 179002 kern.notice]   %l0-3: 
0001 

[zfs-discuss] growing vdev or zfs volume

2008-12-16 Thread Krzys

I was wondering, if I have a vdev set up and I present it to another box via 
iscsi, is there any way to grow that vdev?

for example when I do this:
zfs create -V 100G mypool6/v1
zfs set shareiscsi=on mypool6/v1

can I then expand the 100G volume to, let's say, 150G?
I do not care about the file system on the other end, I was just wondering if it 
works like a SAN where I can change the LUN size on the fly to whatever I want 
depending on the needs... Yes, I know I can create mypool6/v2 which is 50G and 
then add it to the pool on the other side, but I do not want to go this route.
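
For what it's worth, a minimal sketch of growing an existing zvol in place; the 
150G figure is just an example:

zfs set volsize=150G mypool6/v1

The initiator on the other box still has to rescan the iSCSI LUN, and whatever 
file system sits on top has to be grown separately.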


How about if I have a SAN-presented disk on my Solaris server and I increase it 
from 100G to 150G, can I get ZFS to somehow see it? I did try that, and when I 
ran format I could see that my disk had changed size, but when I went to view the 
partition I was still seeing it as 100G only, with no extra space available. Under 
UFS I had issues with such operations and usually my UFS partitions were 
destroyed.

On my Windows servers I can change the drive geometry (add more space) and Windows 
will recognize it... I wonder what you think, or whether there is any solution.
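
For reference, later ZFS releases grew an autoexpand pool property and an 
expansion flag on zpool online for exactly this LUN-growth case; a rough sketch 
with a hypothetical pool and device name:

zpool set autoexpand=on mypool
zpool online -e mypool c3t6d0

On the release discussed here that may not be available, in which case relabeling 
the grown device is typically involved.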

The reason I want to do this is to reduce the number of volumes/pools/LUNs that I 
have in my environment.

Ah, yeah, I can also replace a smaller disk with a larger disk; I did that in the 
past by creating a temporary mirror and then removing a disk from that mirror, 
virtually replacing the disk with a larger one... but as I said, on Windows it's 
handled so much better, and I was seriously wondering if ZFS gave us similarly 
easy tools for expanding pools in this way.

Regards,

Chris



Re: [zfs-discuss] replacing disk

2008-11-25 Thread Krzys
Anyway I did not get any help but I was able to figure it out.

[12:58:08] [EMAIL PROTECTED]: /root  zpool status mypooladas
   pool: mypooladas
  state: DEGRADED
status: One or more devices could not be used because the label is missing or
 invalid.  Sufficient replicas exist for the pool to continue
 functioning in a degraded state.
action: Replace the device using 'zpool replace'.
see: http://www.sun.com/msg/ZFS-8000-4J
  scrub: resilver completed after 0h34m with 0 errors on Tue Nov 25 03:59:23 
2008
config:

 NAME  STATE READ WRITE CKSUM
 mypooladasDEGRADED 0 0 0
   raidz2  DEGRADED 0 0 0
 c4t2d0ONLINE   0 0 0
 c4t3d0ONLINE   0 0 0
 c4t4d0ONLINE   0 0 0
 c4t5d0ONLINE   0 0 0
 c4t8d0ONLINE   0 0 0
 c4t9d0ONLINE   0 0 0
 c4t10d0   ONLINE   0 0 0
 c4t11d0   ONLINE   0 0 0
 c4t12d0   ONLINE   0 0 0
 16858115878292111089  FAULTED  0 0 0  was 
/dev/dsk/c4t13d0s0
 c4t14d0   ONLINE   0 0 0
 c4t15d0   ONLINE   0 0 0

errors: No known data errors
[12:58:23] [EMAIL PROTECTED]: /root 


Anyway, the way I fixed my problem was that I exported my pool so it did not 
exist, then I took that disk (which had to be imported manually) and just 
created a test pool out of it with the -f option on just that one disk. Then I 
destroyed that test pool, imported my original pool, and I was able to replace my 
bad disk with the old disk from that particular pool... It is kind of a 
workaround, but it sucks that there is no easier way of getting there. format -e 
and changing the label on that disk did not help; I even recreated the partition 
table and made a huge file, and I was trying to dd to that disk hoping it would 
overwrite any ZFS info, but I was unable to do any of that... so my workaround 
trick did work, and I have one extra disk to go, I just need to buy it as I am 
short one disk at this moment.
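
For reference, the workaround described above boils down to roughly this 
sequence, using the pool and device names from the output above (the scratch 
pool name testpool is just an example):

zpool export mypooladas
zpool create -f testpool c4t13d0
zpool destroy testpool
zpool import mypooladas
zpool replace mypooladas c4t13d0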

On Mon, 24 Nov 2008, Krzys wrote:


 somehow I have issue replacing my disk.

 [20:09:29] [EMAIL PROTECTED]: /root  zpool status mypooladas
   pool: mypooladas
  state: DEGRADED
 status: One or more devices could not be opened.  Sufficient replicas exist 
 for
 the pool to continue functioning in a degraded state.
 action: Attach the missing device and online it using 'zpool online'.
see: http://www.sun.com/msg/ZFS-8000-2Q
  scrub: resilver completed after 0h0m with 0 errors on Mon Nov 24 20:06:48 
 2008
 config:

 NAME  STATE READ WRITE CKSUM
 mypooladasDEGRADED 0 0 0
   raidz2  DEGRADED 0 0 0
 c4t2d0ONLINE   0 0 0
 c4t3d0ONLINE   0 0 0
 c4t4d0ONLINE   0 0 0
 c4t5d0ONLINE   0 0 0
 c4t8d0UNAVAIL  0 0 0  cannot open
 c4t9d0ONLINE   0 0 0
 c4t10d0   ONLINE   0 0 0
 c4t11d0   ONLINE   0 0 0
 c4t12d0   ONLINE   0 0 0
 16858115878292111089  FAULTED  0 0 0  was
 /dev/dsk/c4t13d0s0
 c4t14d0   ONLINE   0 0 0
 c4t15d0   ONLINE   0 0 0

 errors: No known data errors
 [20:09:38] [EMAIL PROTECTED]: /root 

 I am trying to replace c4t13d0 disk.

 [20:09:38] [EMAIL PROTECTED]: /root  zpool replace -f mypooladas c4t13d0
 invalid vdev specification
 the following errors must be manually repaired:
 /dev/dsk/c4t13d0s0 is part of active ZFS pool mypooladas. Please see 
 zpool(1M).
 [20:10:13] [EMAIL PROTECTED]: /root  zpool online mypooladas c4t13d0
 zpool replace -f mypooladas c4t13d0
 warning: device 'c4t13d0' onlined, but remains in faulted state
 use 'zpool replace' to replace devices that are no longer present
 [20:11:14] [EMAIL PROTECTED]: /root  zpool replace -f mypooladas c4t13d0
 invalid vdev specification
 the following errors must be manually repaired:
 /dev/dsk/c4t13d0s0 is part of active ZFS pool mypooladas. Please see 
 zpool(1M).
 [20:11:45] [EMAIL PROTECTED]: /root  zpool replace -f mypooladas c4t8d0 
 c4t13d0
 invalid vdev specification
 the following errors must be manually repaired:
 /dev/dsk/c4t13d0s0 is part of active ZFS pool mypooladas. Please see 
 zpool(1M).
 [20:13:24] [EMAIL

[zfs-discuss] replacing disk

2008-11-24 Thread Krzys

Somehow I have an issue replacing my disk.

[20:09:29] [EMAIL PROTECTED]: /root  zpool status mypooladas
   pool: mypooladas
  state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
 the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see: http://www.sun.com/msg/ZFS-8000-2Q
  scrub: resilver completed after 0h0m with 0 errors on Mon Nov 24 20:06:48 2008
config:

 NAME  STATE READ WRITE CKSUM
 mypooladasDEGRADED 0 0 0
   raidz2  DEGRADED 0 0 0
 c4t2d0ONLINE   0 0 0
 c4t3d0ONLINE   0 0 0
 c4t4d0ONLINE   0 0 0
 c4t5d0ONLINE   0 0 0
 c4t8d0UNAVAIL  0 0 0  cannot open
 c4t9d0ONLINE   0 0 0
 c4t10d0   ONLINE   0 0 0
 c4t11d0   ONLINE   0 0 0
 c4t12d0   ONLINE   0 0 0
 16858115878292111089  FAULTED  0 0 0  was 
/dev/dsk/c4t13d0s0
 c4t14d0   ONLINE   0 0 0
 c4t15d0   ONLINE   0 0 0

errors: No known data errors
[20:09:38] [EMAIL PROTECTED]: /root 

I am trying to replace c4t13d0 disk.

[20:09:38] [EMAIL PROTECTED]: /root  zpool replace -f mypooladas c4t13d0
invalid vdev specification
the following errors must be manually repaired:
/dev/dsk/c4t13d0s0 is part of active ZFS pool mypooladas. Please see zpool(1M).
[20:10:13] [EMAIL PROTECTED]: /root  zpool online mypooladas c4t13d0
zpool replace -f mypooladas c4t13d0
warning: device 'c4t13d0' onlined, but remains in faulted state
use 'zpool replace' to replace devices that are no longer present
[20:11:14] [EMAIL PROTECTED]: /root  zpool replace -f mypooladas c4t13d0
invalid vdev specification
the following errors must be manually repaired:
/dev/dsk/c4t13d0s0 is part of active ZFS pool mypooladas. Please see zpool(1M).
[20:11:45] [EMAIL PROTECTED]: /root  zpool replace -f mypooladas c4t8d0 c4t13d0
invalid vdev specification
the following errors must be manually repaired:
/dev/dsk/c4t13d0s0 is part of active ZFS pool mypooladas. Please see zpool(1M).
[20:13:24] [EMAIL PROTECTED]: /root 


What am I doing wrong?

I originally had this disk as a ZFS disk in that pool, but somehow my connection 
got lost and it appeared as faulty, and I was unable to reconnect it. Anyway, I did 
try to format this disk and I was able to set up a UFS file system with data on it, 
and then I tried to re-add it back to my zpool and I am still unable to do it... 
is there any way to force it? What can I do?


Re: [zfs-discuss] recovering data from a detached mirrored vdev

2008-11-07 Thread Krzys
I was wondering if this ever made it into zfs as a fix for bad labels?
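
For what it's worth, later ZFS releases did gain a 'zpool split' command for the 
clean-split case Jeff describes below. A rough sketch, with hypothetical pool 
names, of splitting one half of a mirrored pool off as a new importable pool:

zpool split mypool mynewpool
zpool import mynewpool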

On Wed, 7 May 2008, Jeff Bonwick wrote:

 Yes, I think that would be useful.  Something like 'zpool revive'
 or 'zpool undead'.  It would not be completely general-purpose --
 in a pool with multiple mirror devices, it could only work if
 all replicas were detached in the same txg -- but for the simple
 case of a single top-level mirror vdev, or a clean 'zpool split',
 it's actually pretty straightforward.

 Jeff

 On Tue, May 06, 2008 at 11:16:25AM +0100, Darren J Moffat wrote:
 Great tool, any chance we can have it integrated into zpool(1M) so that
 it can find and fixup on import detached vdevs as new pools ?

 I'd think it would be reasonable to extend the meaning of
 'zpool import -D' to list detached vdevs as well as destroyed pools.

 --
 Darren J Moffat


Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-06 Thread Krzys
Seems like the core.vold.* files are not created until I try to boot from zfsBE; 
just creating zfsBE gets only a core.cpio created.



[10:29:48] @adas: /var/crash  mdb core.cpio.5545
Loading modules: [ libc.so.1 libavl.so.1 ld.so.1 ]
 ::status
debugging core file of cpio (32-bit) from adas
file: /usr/bin/cpio
initial argv: /usr/bin/cpio -pPcdum /.alt.tmp.b-Prb.mnt
threading model: multi-threaded
status: process terminated by SIGBUS (Bus Error)
 $C
ffbfe5b0 libc.so.1`_malloc_unlocked+0x164(30, 0, 39c28, ff, 2e2f2e2f, 0)
ffbfe610 libc.so.1`malloc+0x4c(30, 1, e8070, 0, ff33e3c0, ff3485b8)
ffbfe670 libsec.so.1`cacl_get+0x138(ffbfe7c4, 2, 0, 35bc0, 0, 35f98)
ffbfe768 libsec.so.1`acl_get+0x14(37fe2, 2, 35bc0, 354c0, 1000, 1)
ffbfe7d0 0x183b4(1, 35800, 359e8, 346b0, 34874, 34870)
ffbfec30 main+0x28c(34708, 1, 35bc0, 166fc, 35800, 34400)
ffbfec90 _start+0x108(0, 0, 0, 0, 0, 0)
 $r
%g0 = 0x %l0 = 0x
%g1 = 0xff25638c libc.so.1`malloc+0x44 %l1 = 0x00039c28
%g2 = 0x00037fe0 %l2 = 0x2e2f2e2f
%g3 = 0x8000 %l3 = 0x03c8
%g4 = 0x %l4 = 0x2e2f2e2f
%g5 = 0x %l5 = 0x
%g6 = 0x %l6 = 0xdc00
%g7 = 0xff382a00 %l7 = 0xff347344 libc.so.1`Lfree
%o0 = 0x %i0 = 0x0030
%o1 = 0x %i1 = 0x
%o2 = 0x000e70c4 %i2 = 0x00039c28
%o3 = 0x %i3 = 0x00ff
%o4 = 0xff33e3c0 %i4 = 0x2e2f2e2f
%o5 = 0xff347344 libc.so.1`Lfree %i5 = 0x
%o6 = 0xffbfe5b0 %i6 = 0xffbfe610
%o7 = 0xff2564a4 libc.so.1`_malloc_unlocked+0xf4 %i7 = 0xff256394
libc.so.1`malloc+0x4c

  %psr = 0xfe001002 impl=0xf ver=0xe icc=nzvc
ec=0 ef=4096 pil=0 s=0 ps=0 et=0 cwp=0x2
%y = 0x
   %pc = 0xff256514 libc.so.1`_malloc_unlocked+0x164
  %npc = 0xff2564d8 libc.so.1`_malloc_unlocked+0x128
   %sp = 0xffbfe5b0
   %fp = 0xffbfe610

  %wim = 0x
  %tbr = 0x








On Thu, 6 Nov 2008, Enda O'Connor wrote:

 Hi
 try and get the stack trace from the core
 ie mdb core.vold.24978
 ::status
 $C
 $r

 also run the same 3 mdb commands on the cpio core dump.

 also if you could extract some data from the truss log, ie a few hundred 
 lines before the first SIGBUS


 Enda

 On 11/06/08 01:25, Krzys wrote:
 THis is so bizare, I am unable to pass this problem. I though I had not 
 enough space on my hard drive (new one) so I replaced it with 72gb drive, 
 but still getting that bus error. Originally when I restarted my server it 
 did not want to boot, do I had to power it off and then back on and it then 
 booted up. But constantly I am getting this Bus Error - core dumped
 
 anyway in my /var/crash I see hundreds of core.void files and 3 core.cpio 
 files. I would imagine core.cpio are the ones that are direct result of 
 what I am probably eperiencing.
 
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24854
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24867
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24880
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24893
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24906
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24919
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24932
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24950
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24978
 drwxr-xr-x   3 root root   81408 Nov  5 20:06 .
 -rw---   1 root root 31351099 Nov  5 20:06 core.cpio.6208
 
 
 
 On Wed, 5 Nov 2008, Enda O'Connor wrote:
 
 Hi
 Looks ok, some mounts left over from pervious fail.
 In regards to swap and dump on zpool you can set them
 zfs set volsize=1G rootpool/dump
 zfs set volsize=1G rootpool/swap
 
 for instance, of course above are only an example of how to do it.
 or make the zvol doe rootpool/dump etc before lucreate, in which case it 
 will take the swap and dump size you have preset.
 
 But I think we need to see the coredump/truss at this point to get an idea 
 of where things went wrong.
 Enda
 
 On 11/05/08 15:38, Krzys wrote:
 I did upgrade my U5 to U6 from DVD, went trough the upgrade process.
 my file system is setup as follow:
 [10:11:54] [EMAIL PROTECTED]: /root  df -h | egrep -v 
 platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr
 Filesystem size   used  avail capacity  Mounted on
 /dev/dsk/c1t0d0s0   16G   7.2G   8.4G47%/
 swap   8.3G   1.5M   8.3G 1%/etc/svc/volatile
 /dev/dsk/c1t0d0s6   16G   8.7G   6.9G56%/usr
 /dev/dsk/c1t0d0s1   16G   2.5G13G17%/var
 swap   8.5G   229M   8.3G 3%/tmp
 swap   8.3G40K   8.3G 1%/var/run
 /dev/dsk/c1t0d0s7   78G   1.2G76G 2

Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-06 Thread Krzys
-bit
Copyright 1983-2008 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Hardware watchdog enabled
Hostname: adas
Configuring devices.
/dev/rdsk/c1t0d0s7 is clean
Reading ZFS config: done.
Mounting ZFS filesystems: (3/3)
Nov  6 13:22:23 squid[380]: Squid Parent: child process 383 started

adas console login: root
Password:
Nov  6 13:22:38 adas login: ROOT LOGIN /dev/console
Last login: Thu Nov  6 10:44:17 from kasiczynka.ny.p
Sun Microsystems Inc.   SunOS 5.10  Generic January 2005
You have mail.
# bash
[13:22:40] @adas: /root  df -h
Filesystem             size   used  avail capacity  Mounted on
rootpool/ROOT/zfsBE     67G    12G    43G    23%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   7.8G   360K   7.8G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1
                        55G    12G    43G    23%    /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
                        55G    12G    43G    23%    /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   7.8G    72K   7.8G     1%    /tmp
swap                   7.8G    56K   7.8G     1%    /var/run
/dev/dsk/c1t0d0s7       78G   1.2G    76G     2%    /export/home
rootpool                67G    21K    43G     1%    /rootpool
rootpool/ROOT           67G    18K    43G     1%    /rootpool/ROOT
[13:22:42] @adas: /root  starting NetWorker daemons:
  nsrexecd






On Thu, 6 Nov 2008, Enda O'Connor wrote:

 Hi
 Weird, almost like some kind of memory corruption.

 Could I see the upgrade logs, that got you to u6
 ie
 /var/sadm/system/logs/upgrade_log
 for the u6 env.
 What kind of upgrade did you do, liveupgrade, text based etc?

 Enda

 On 11/06/08 15:41, Krzys wrote:
 Seems like core.vold.* are not being created until I try to boot from 
 zfsBE, just creating zfsBE gets onlu core.cpio created.
 
 
 
 [10:29:48] @adas: /var/crash  mdb core.cpio.5545
 Loading modules: [ libc.so.1 libavl.so.1 ld.so.1 ]
 ::status
 debugging core file of cpio (32-bit) from adas
 file: /usr/bin/cpio
 initial argv: /usr/bin/cpio -pPcdum /.alt.tmp.b-Prb.mnt
 threading model: multi-threaded
 status: process terminated by SIGBUS (Bus Error)
 $C
 ffbfe5b0 libc.so.1`_malloc_unlocked+0x164(30, 0, 39c28, ff, 2e2f2e2f, 0)
 ffbfe610 libc.so.1`malloc+0x4c(30, 1, e8070, 0, ff33e3c0, ff3485b8)
 ffbfe670 libsec.so.1`cacl_get+0x138(ffbfe7c4, 2, 0, 35bc0, 0, 35f98)
 ffbfe768 libsec.so.1`acl_get+0x14(37fe2, 2, 35bc0, 354c0, 1000, 1)
 ffbfe7d0 0x183b4(1, 35800, 359e8, 346b0, 34874, 34870)
 ffbfec30 main+0x28c(34708, 1, 35bc0, 166fc, 35800, 34400)
 ffbfec90 _start+0x108(0, 0, 0, 0, 0, 0)
 $r
 %g0 = 0x %l0 = 0x
 %g1 = 0xff25638c libc.so.1`malloc+0x44 %l1 = 0x00039c28
 %g2 = 0x00037fe0 %l2 = 0x2e2f2e2f
 %g3 = 0x8000 %l3 = 0x03c8
 %g4 = 0x %l4 = 0x2e2f2e2f
 %g5 = 0x %l5 = 0x
 %g6 = 0x %l6 = 0xdc00
 %g7 = 0xff382a00 %l7 = 0xff347344 libc.so.1`Lfree
 %o0 = 0x %i0 = 0x0030
 %o1 = 0x %i1 = 0x
 %o2 = 0x000e70c4 %i2 = 0x00039c28
 %o3 = 0x %i3 = 0x00ff
 %o4 = 0xff33e3c0 %i4 = 0x2e2f2e2f
 %o5 = 0xff347344 libc.so.1`Lfree %i5 = 0x
 %o6 = 0xffbfe5b0 %i6 = 0xffbfe610
 %o7 = 0xff2564a4 libc.so.1`_malloc_unlocked+0xf4 %i7 = 0xff256394
 libc.so.1`malloc+0x4c

   %psr = 0xfe001002 impl=0xf ver=0xe icc=nzvc
 ec=0 ef=4096 pil=0 s=0 ps=0 et=0 cwp=0x2
 %y = 0x
%pc = 0xff256514 libc.so.1`_malloc_unlocked+0x164
   %npc = 0xff2564d8 libc.so.1`_malloc_unlocked+0x128
%sp = 0xffbfe5b0
%fp = 0xffbfe610

   %wim = 0x
   %tbr = 0x
 
 
 
 
 
 
 
 On Thu, 6 Nov 2008, Enda O'Connor wrote:
 
 Hi
 try and get the stack trace from the core
 ie mdb core.vold.24978
 ::status
 $C
 $r
 
 also run the same 3 mdb commands on the cpio core dump.
 
 also if you could extract some data from the truss log, ie a few hundred 
 lines before the first SIGBUS
 
 
 Enda
 
 On 11/06/08 01:25, Krzys wrote:
 THis is so bizare, I am unable to pass this problem. I though I had not 
 enough space on my hard drive (new one) so I replaced it with 72gb drive, 
 but still getting that bus error. Originally when I restarted my server 
 it did not want to boot, do I had to power it off and then back on and it 
 then booted up. But constantly I am

[zfs-discuss] copies set to greater than 1

2008-11-06 Thread Krzys
When the copies property is set to a value greater than 1, how does it work? Will 
it store the second copy of the data on a different disk, or does it store it on 
the same disk? Also, when this setting is changed at some point on a file system, 
will it make copies of existing data or just of new data that's being written from 
now on?
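
A minimal sketch of the setting itself (the dataset name is just an example):

zfs set copies=2 mypool/fs

As far as I know, the extra copies are spread across different disks when the 
pool has more than one, but that is best effort rather than a guarantee, and the 
setting only affects data written after it is changed.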


[zfs-discuss] root zpool question

2008-11-06 Thread Krzys

Currently I have the following:

# zpool status
   pool: rootpool
  state: ONLINE
  scrub: none requested
config:

 NAMESTATE READ WRITE CKSUM
 rootpoolONLINE   0 0 0
   c1t1d0s0  ONLINE   0 0 0

errors: No known data errors
#

I would like to put the c1t0d0s0 disk in and set up mirroring for my root disk. I am 
just afraid that if I add my disk, then instead of creating a mirror I will add 
it to the pool and have it concat/stripe rather than mirror. How can I add a disk to 
this pool and have mirroring instead of striping?
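
A minimal sketch, using the device names above; 'zpool attach' rather than 
'zpool add' is what turns a single-disk pool into a mirror:

zpool attach rootpool c1t1d0s0 c1t0d0s0

On SPARC you would also still need to install a boot block on the new disk, 
roughly:

installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0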

Regards,

Chris



Re: [zfs-discuss] root zpool question

2008-11-06 Thread Krzys
Never mind, I did figure it out. I played around with my settings and a spare 
disk, and it seems like it's working.

# zpool create -f testroot c1t0d0s0
Nov  6 20:02:10 adas ufs: NOTICE: mount: not a UFS magic number (0x0)
# zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
rootpool 29.4G  37.6G  21.5K  /rootpool
rootpool/ROOT17.4G  37.6G18K  /rootpool/ROOT
rootpool/ROOT/zfsBE  17.4G  37.6G  17.4G  /
rootpool/dump4.02G  37.6G  4.02G  -
rootpool/swap8.00G  44.4G  1.13G  -
testroot 89.5K  9.78G 1K  /testroot
# zpool attach testroot c1t0d0s0 c1t0d0s1
# zpool list
NAME   SIZE   USED  AVAILCAP  HEALTH  ALTROOT
rootpool68G  22.6G  45.4G33%  ONLINE  -
testroot  9.94G   112K  9.94G 0%  ONLINE  -
# zpool status
   pool: rootpool
  state: ONLINE
  scrub: none requested
config:

 NAMESTATE READ WRITE CKSUM
 rootpoolONLINE   0 0 0
   c1t1d0s0  ONLINE   0 0 0

errors: No known data errors

   pool: testroot
  state: ONLINE
  scrub: resilver completed after 0h0m with 0 errors on Thu Nov  6 20:03:01 2008
config:

 NAME  STATE READ WRITE CKSUM
 testroot  ONLINE   0 0 0
   mirror  ONLINE   0 0 0
 c1t0d0s0  ONLINE   0 0 0
 c1t0d0s1  ONLINE   0 0 0

errors: No known data errors
#





On Thu, 6 Nov 2008, Krzys wrote:


 Currently I have the following:

 # zpool status
   pool: rootpool
  state: ONLINE
  scrub: none requested
 config:

 NAMESTATE READ WRITE CKSUM
 rootpoolONLINE   0 0 0
   c1t1d0s0  ONLINE   0 0 0

 errors: No known data errors
 #

 I would like to put c1t0d0s0 disk in and setup mirroring for my root disk. I 
 am
 just addraid that if I do add my disk that instead of creating mirror I will 
 add
 it to a pool to have it concat/stripe rather than mirror. How can I add disk 
 to
 this pool and have mirroring instead of striping it?

 Regards,

 Chris



[zfs-discuss] boot -L

2008-11-06 Thread Krzys

What am I doing wrong? I have a SPARC V210 and I am having difficulty with boot 
-L; I was under the impression that boot -L would give me options as to which zfs 
mirror I could boot my root disk from.

Anyway, even aside from that, I am seeing some strange behavior... After 
trying boot -L I am unable to boot my system unless I do reset-all; is that 
normal? I have Solaris 10 U6 that I just upgraded my box to, and I wanted to try 
all the cool things about zfs root disk mirroring and so on, but so far it has been 
quite a strange experience with this whole thing...
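
For reference, a rough sketch of how this is meant to work on SPARC ZFS root, as 
I understand it: boot -L only lists the boot environments in the root pool, not 
the mirror sides, and you then boot a listed BE explicitly with boot -Z, e.g.:

{1} ok boot -L
{1} ok boot -Z rootpool/ROOT/zfsBE

Booting off a particular mirror half is done by picking the disk itself, e.g. 
boot disk0 or boot disk1, provided the boot block has been installed on both.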

[22:21:25] @adas: /root  init 0
[22:21:51] @adas: /root  stopping NetWorker daemons:
  nsr_shutdown -q
svc.startd: The system is coming down.  Please wait.
svc.startd: 90 system services are now being stopped.
svc.startd: The system is down.
syncing file systems... done
Program terminated
{0} ok boot -L

SC Alert: Host System has Reset
Probing system devices
Probing memory
Probing I/O buses

Sun Fire V210, No Keyboard
Copyright 2007 Sun Microsystems, Inc.  All rights reserved.
OpenBoot 4.22.33, 4096 MB memory installed, Serial #64938415.
Ethernet address 0:3:ba:de:e1:af, Host ID: 83dee1af.



Rebooting with command: boot -L
Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a  
File and args: -L

Can't open bootlst

Evaluating:
The file just loaded does not appear to be executable.
{1} ok boot disk0
Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0  
File and args:
ERROR: /[EMAIL PROTECTED],60: Last Trap: Fast Data Access MMU Miss

{1} ok boot disk1
Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0  
File and args:
ERROR: /[EMAIL PROTECTED],60: Last Trap: Fast Data Access MMU Miss

{1} ok boot
ERROR: /[EMAIL PROTECTED],60: Last Trap: Fast Data Access MMU Miss

{1} ok reset-all
Probing system devices
Probing memory
Probing I/O buses

Sun Fire V210, No Keyboard
Copyright 2007 Sun Microsystems, Inc.  All rights reserved.
OpenBoot 4.22.33, 4096 MB memory installed, Serial #64938415.
Ethernet address 0:3:ba:de:e1:af, Host ID: 83dee1af.



Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a  
File and args:
SunOS Release 5.10 Version Generic_137137-09 64-bit
Copyright 1983-2008 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Hardware watchdog enabled
Hostname: adas
Reading ZFS config: done.
Mounting ZFS filesystems: (3/3)

adas console login: Nov  6 22:27:13 squid[361]: Squid Parent: child process 363 
started
Nov  6 22:27:18 adas ufs: NOTICE: mount: not a UFS magic number (0x0)
starting NetWorker daemons:
  nsrexecd

console login:


Does anyone have any idea why that is happening? What am I doing wrong?

Thanks for help.

Regards,

Chris



Re: [zfs-discuss] boot -L

2008-11-06 Thread Krzys
Great, thank you.

Chris


On Fri, 7 Nov 2008, Nathan Kroenert wrote:

 A quick google shows that it's not so much about the mirror, but the BE...

 http://opensolaris.org/os/community/zfs/boot/zfsbootFAQ/

 Might help?

 Nathan.

 On  7/11/08 02:39 PM, Krzys wrote:
 What am I doing wrong? I have sparc V210 and I am having difficulty with 
 boot -L, I was under the impression that boot -L will give me options to 
 which zfs mirror I could boot my root disk?
 
 Anyway but even not that, I am seeing some strange behavior anyway... After 
 trying boot -L I am unabl eto boot my system unless I do reset-all, is that 
 normal? I have Solaris 10 U6 that I just upgraded my box to and I wanted to 
 try all the cool things about zfs root disk mirroring and so on, but so far 
 its quite strange experience with this whole thing...
 
 [22:21:25] @adas: /root  init 0
 [22:21:51] @adas: /root  stopping NetWorker daemons:
   nsr_shutdown -q
 svc.startd: The system is coming down.  Please wait.
 svc.startd: 90 system services are now being stopped.
 svc.startd: The system is down.
 syncing file systems... done
 Program terminated
 {0} ok boot -L
 
 SC Alert: Host System has Reset
 Probing system devices
 Probing memory
 Probing I/O buses
 
 Sun Fire V210, No Keyboard
 Copyright 2007 Sun Microsystems, Inc.  All rights reserved.
 OpenBoot 4.22.33, 4096 MB memory installed, Serial #64938415.
 Ethernet address 0:3:ba:de:e1:af, Host ID: 83dee1af.
 
 
 
 Rebooting with command: boot -L
 Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
 PROTECTED],0:a  File and args: -L
 
 Can't open bootlst
 
 Evaluating:
 The file just loaded does not appear to be executable.
 {1} ok boot disk0
 Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 
  File and args:
 ERROR: /[EMAIL PROTECTED],60: Last Trap: Fast Data Access MMU Miss
 
 {1} ok boot disk1
 Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 
  File and args:
 ERROR: /[EMAIL PROTECTED],60: Last Trap: Fast Data Access MMU Miss
 
 {1} ok boot
 ERROR: /[EMAIL PROTECTED],60: Last Trap: Fast Data Access MMU Miss
 
 {1} ok reset-all
 Probing system devices
 Probing memory
 Probing I/O buses
 
 Sun Fire V210, No Keyboard
 Copyright 2007 Sun Microsystems, Inc.  All rights reserved.
 OpenBoot 4.22.33, 4096 MB memory installed, Serial #64938415.
 Ethernet address 0:3:ba:de:e1:af, Host ID: 83dee1af.
 
 
 
 Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
 PROTECTED],0:a  File and args:
 SunOS Release 5.10 Version Generic_137137-09 64-bit
 Copyright 1983-2008 Sun Microsystems, Inc.  All rights reserved.
 Use is subject to license terms.
 Hardware watchdog enabled
 Hostname: adas
 Reading ZFS config: done.
 Mounting ZFS filesystems: (3/3)
 
 adas console login: Nov  6 22:27:13 squid[361]: Squid Parent: child process 
 363 started
 Nov  6 22:27:18 adas ufs: NOTICE: mount: not a UFS magic number (0x0)
 starting NetWorker daemons:
   nsrexecd
 
 console login:
 
 
 Does anyone have any idea why is that happening? what am I doing wrong?
 
 Thanks for help.
 
 Regards,
 
 Chris
 

 -- 


 //
 // Nathan Kroenert[EMAIL PROTECTED]   //
 // Senior Systems EngineerPhone:  +61 3 9869 6255 //
 // Global Systems Engineering Fax:+61 3 9869 6288 //
 // Level 7, 476 St. Kilda Road//
 // Melbourne 3004   Victoria  Australia   //
 //




Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Krzys
Yes, I did notice that error too, but when I did lustatus it showed as if it was 
ok, so I guess I assumed it was safe to start from it; but even booting up 
from the original disk caused problems and I was unable to boot my system...


Anyway, I powered off my system for a few minutes and then started it up, and it 
did boot without any problems to the original disk; I just had to do a hard reset 
on the box for some reason.


On Wed, 5 Nov 2008, Tomas Ögren wrote:


On 05 November, 2008 - Krzys sent me these 18K bytes:



I am not sure what I did wrong but I did follow up all the steps to get my
system moved from ufs to zfs and not I am unable to boot it... can anyone
suggest what I could do to fix it?

here are all my steps:

[00:26:38] @adas: /root  zpool create rootpool c1t1d0s0
[00:26:57] @adas: /root  lucreate -c ufsBE -n zfsBE -p rootpool
Analyzing system configuration.
Comparing source boot environment ufsBE file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device /dev/dsk/c1t1d0s0 is not a root device for any boot environment;
cannot get BE ID.
Creating configuration for boot environment zfsBE.
Source boot environment is ufsBE.
Creating boot environment zfsBE.
Creating file systems on boot environment zfsBE.
Creating zfs file system for / in zone global on rootpool/ROOT/zfsBE.
Populating file systems on boot environment zfsBE.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point /.
Copying.
Bus Error - core dumped


This should have caught both your attention and lucreate's attention..

If the copying process core dumps, then I guess most bets are off..


Creating shared file system mount points.
Creating compare databases for boot environment zfsBE.
Creating compare database for file system /var.
Creating compare database for file system /usr.
Creating compare database for file system /rootpool/ROOT.
Creating compare database for file system /.
Updating compare databases on boot environment zfsBE.
Making boot environment zfsBE bootable.
Population of boot environment zfsBE successful.
Creation of boot environment zfsBE successful.


/Tomas
--
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se




Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Krzys
Sorry, it's Solaris 10 U6, not Nevada. I just upgraded to U6 and was hoping I 
could take advantage of the zfs boot mirroring.

On Wed, 5 Nov 2008, Enda O'Connor wrote:

 On 11/05/08 13:02, Krzys wrote:
 I am not sure what I did wrong but I did follow up all the steps to get my 
 system moved from ufs to zfs and not I am unable to boot it... can anyone 
 suggest what I could do to fix it?
 
 here are all my steps:
 
 [00:26:38] @adas: /root  zpool create rootpool c1t1d0s0
 [00:26:57] @adas: /root  lucreate -c ufsBE -n zfsBE -p rootpool
 Analyzing system configuration.
 Comparing source boot environment ufsBE file systems with the file
 system(s) you specified for the new boot environment. Determining which
 file systems should be in the new boot environment.
 Updating boot environment description database on all BEs.
 Updating system configuration files.
 The device /dev/dsk/c1t1d0s0 is not a root device for any boot 
 environment; cannot get BE ID.
 Creating configuration for boot environment zfsBE.
 Source boot environment is ufsBE.
 Creating boot environment zfsBE.
 Creating file systems on boot environment zfsBE.
 Creating zfs file system for / in zone global on 
 rootpool/ROOT/zfsBE.
 Populating file systems on boot environment zfsBE.
 Checking selection integrity.
 Integrity check OK.
 Populating contents of mount point /.
 Copying.
 Bus Error - core dumped
 hmm above might be relevant I'd guess.

 What release are you on , ie is this Solaris 10, or is this Nevada build?

 Enda
 Creating shared file system mount points.
 Creating compare databases for boot environment zfsBE.
 Creating compare database for file system /var.
 Creating compare database for file system /usr.
 Creating compare database for file system /rootpool/ROOT.
 Creating compare database for file system /.
 Updating compare databases on boot environment zfsBE.
 Making boot environment zfsBE bootable.
 Population of boot environment zfsBE successful.
 Creation of boot environment zfsBE successful.
 [01:19:36] @adas: /root 
 Nov  5 02:44:16 adas root:  = 
 [EMAIL PROTECTED] 
 =com.sun.cc.platform.clientsignature.CNSSignException: Error reading 
 private key
 Nov  5 02:44:16 adas 7470:error:0906D06C:PEM routines:PEM_read_bio:no start 
 line:/on10/build-nd/F10U6B7A/usr/src/common/openssl/crypto/pem/pem_lib.c:637:Expecting:
  
 ANY PRIVATE KEY
 Nov  5 02:44:16 adasat 
 com.sun.cc.platform.clientsignature.CNSClientSignature.throwError(Unknown 
 Source)
 Nov  5 02:44:16 adasat 
 com.sun.cc.platform.clientsignature.CNSClientSignature.init(Unknown 
 Source)
 Nov  5 02:44:16 adasat 
 com.sun.cc.platform.clientsignature.CNSClientSignature.genSigString(Unknown 
 Source)
 Nov  5 02:44:16 adas root:  = 
 [EMAIL PROTECTED] = at 
 com.sun.patchpro.util.Downloader.connectToURL(Downloader.java:430)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.util.CachingDownloader.establishConnection(CachingDownloader.java:618)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.util.CachingDownloader.setSourceURL(CachingDownloader.java:282)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.util.CachingDownloader.setupCache(CachingDownloader.java:208)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.util.CachingDownloader.init(CachingDownloader.java:187)
 Nov  5 02:44:16 adas root:  = 
 [EMAIL PROTECTED] = at 
 com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadFile(UnifiedServerPatchServiceProvider.java:1242)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadDatabaseFile(UnifiedServerPatchServiceProvider.java:928)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadPatchDB(UnifiedServerPatchServiceProvider.java:468)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.server.PatchServerProxy.downloadPatchDB(PatchServerProxy.java:156)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.database.MemoryPatchDBBuilder.downloadPatchDBWithPOST(MemoryPatchDBBuilder.java:163)
 Nov  5 02:44:16 adas root:  = 
 [EMAIL PROTECTED] = at 
 com.sun.patchpro.database.MemoryPatchDBBuilder.downloadPatchDB(MemoryPatchDBBuilder.java:752)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.database.MemoryPatchDBBuilder.buildDB(MemoryPatchDBBuilder.java:108)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.database.MemoryPatchDBBuilder.buildDB(MemoryPatchDBBuilder.java:181)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.database.GroupPatchDBBuilder.buildDB(GroupPatchDBBuilder.java:108)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.model.PatchProModel.downloadPatchDB(PatchProModel.java:1849)
 Nov  5 02:44:16 adas root:  = 
 [EMAIL PROTECTED] =nullat 
 com.sun.patchpro.model.PatchProStateMachine$5.run(PatchProStateMachine.java:277)
 Nov  5 02:44:16 adasat com.sun.patchpro.util.State.run(State.java:266)
 Nov  5 02:44:16 adasat java.lang.Thread.run(Thread.java:595)
 Nov  5 02:44:17 adas root:  = 
 [EMAIL PROTECTED] 
 =com.sun.cc.platform.clientsignature.CNSSignException: Error reading

Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Krzys


On Wed, 5 Nov 2008, Enda O'Connor wrote:

 On 11/05/08 13:02, Krzys wrote:
 I am not sure what I did wrong but I did follow up all the steps to get my 
 system moved from ufs to zfs and not I am unable to boot it... can anyone 
 suggest what I could do to fix it?
 
 here are all my steps:
 
 [00:26:38] @adas: /root  zpool create rootpool c1t1d0s0
 [00:26:57] @adas: /root  lucreate -c ufsBE -n zfsBE -p rootpool
 Analyzing system configuration.
 Comparing source boot environment ufsBE file systems with the file
 system(s) you specified for the new boot environment. Determining which
 file systems should be in the new boot environment.
 Updating boot environment description database on all BEs.
 Updating system configuration files.
 The device /dev/dsk/c1t1d0s0 is not a root device for any boot 
 environment; cannot get BE ID.
 Creating configuration for boot environment zfsBE.
 Source boot environment is ufsBE.
 Creating boot environment zfsBE.
 Creating file systems on boot environment zfsBE.
 Creating zfs file system for / in zone global on 
 rootpool/ROOT/zfsBE.
 Populating file systems on boot environment zfsBE.
 Checking selection integrity.
 Integrity check OK.
 Populating contents of mount point /.
 Copying.
 Bus Error - core dumped
 hmm above might be relevant I'd guess.

 What release are you on , ie is this Solaris 10, or is this Nevada build?

 Enda
 Creating shared file system mount points.
 Creating compare databases for boot environment zfsBE.
 Creating compare database for file system /var.
 Creating compare database for file system /usr.
 Creating compare database for file system /rootpool/ROOT.
 Creating compare database for file system /.
 Updating compare databases on boot environment zfsBE.
 Making boot environment zfsBE bootable.

Anyway, I did restart the whole process, and I got that Bus Error again

[07:59:01] [EMAIL PROTECTED]: /root  zpool create rootpool c1t1d0s0
[07:59:22] [EMAIL PROTECTED]: /root  zfs set compression=on rootpool/ROOT
cannot open 'rootpool/ROOT': dataset does not exist
[07:59:27] [EMAIL PROTECTED]: /root  zfs set compression=on rootpool
[07:59:31] [EMAIL PROTECTED]: /root  lucreate -c ufsBE -n zfsBE -p rootpool
Analyzing system configuration.
Comparing source boot environment ufsBE file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device /dev/dsk/c1t1d0s0 is not a root device for any boot environment; 
cannot get BE ID.
Creating configuration for boot environment zfsBE.
Source boot environment is ufsBE.
Creating boot environment zfsBE.
Creating file systems on boot environment zfsBE.
Creating zfs file system for / in zone global on rootpool/ROOT/zfsBE.
Populating file systems on boot environment zfsBE.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point /.
Copying.
Bus Error - core dumped
Creating shared file system mount points.
Creating compare databases for boot environment zfsBE.
Creating compare database for file system /var.
Creating compare database for file system /usr.





Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Krzys
Great, I will follow this, but I was wondering whether maybe I did not set up my 
disk correctly? From what I understand, a root zpool cannot be set up on a whole 
disk as other pools are, so I did partition my disk so that all the space is in 
the s0 slice. Maybe that's not correct?

[10:03:45] [EMAIL PROTECTED]: /root  format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
0. c1t0d0 SEAGATE-ST3146807LC-0007 cyl 49780 alt 2 hd 8 sec 720
   /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
1. c1t1d0 SUN36G cyl 24620 alt 2 hd 27 sec 107
   /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
Specify disk (enter its number): 1
selecting c1t1d0
[disk formatted]
/dev/dsk/c1t1d0s0 is part of active ZFS pool rootpool. Please see zpool(1M).
/dev/dsk/c1t1d0s2 is part of active ZFS pool rootpool. Please see zpool(1M).


FORMAT MENU:
 disk   - select a disk
 type   - select (define) a disk type
 partition  - select (define) a partition table
 current- describe the current disk
 format - format and analyze the disk
 repair - repair a defective sector
 label  - write label to the disk
 analyze- surface analysis
 defect - defect list management
 backup - search for backup labels
 verify - read and display labels
 save   - save new disk/partition definitions
 inquiry- show vendor, product and revision
 volname- set 8-character volume name
 !cmd - execute cmd, then return
 quit
format verify

Primary label contents:

Volume name = 
ascii name  = SUN36G cyl 24620 alt 2 hd 27 sec 107
pcyl= 24622
ncyl= 24620
acyl=2
nhead   =   27
nsect   =  107
Part      Tag    Flag     Cylinders         Size            Blocks
   0       root    wm       0 - 24619       33.92GB    (24620/0/0) 71127180
   1 unassigned    wu       0                   0      (0/0/0)            0
   2     backup    wm       0 - 24619       33.92GB    (24620/0/0) 71127180
   3 unassigned    wu       0                   0      (0/0/0)            0
   4 unassigned    wu       0                   0      (0/0/0)            0
   5 unassigned    wu       0                   0      (0/0/0)            0
   6 unassigned    wu       0                   0      (0/0/0)            0
   7 unassigned    wu       0                   0      (0/0/0)            0

format


On Wed, 5 Nov 2008, Enda O'Connor wrote:

 Hi
 did you get a core dump?
 would be nice to see the core file to get an idea of what dumped core,
 might configure coreadm if not already done
 run coreadm first, if the output looks like

 # coreadm
 global core file pattern: /var/crash/core.%f.%p
 global core file content: default
   init core file pattern: core
   init core file content: default
global core dumps: enabled
   per-process core dumps: enabled
  global setid core dumps: enabled
 per-process setid core dumps: disabled
 global core dump logging: enabled

 then all should be good, and cores should appear in /var/crash

 otherwise the following should configure coreadm:
 coreadm -g /var/crash/core.%f.%p
 coreadm -G all
 coreadm -e global
  coreadm -e process


 coreadm -u to load the new settings without rebooting.

 also might need to set the size of the core dump via
 ulimit -c unlimited
 check ulimit -a first.

 then rerun test and check /var/crash for core dump.

 If that fails a truss via say truss -fae -o /tmp/truss.out lucreate -c ufsBE 
 -n zfsBE -p rootpool

 might give an indication, look for SIGBUS in the truss log

 NOTE, that you might want to reset the coreadm and ulimit for coredumps after 
 this, in order to not risk filling the system with coredumps in the case of 
 some utility coredumping in a loop say.
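
  If it helps, a rough way to back those settings out afterwards is to disable 
  global core dumps again and drop the core size limit back down:

  coreadm -d global
  ulimit -c 0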


 Enda
 On 11/05/08 13:46, Krzys wrote:
 
 On Wed, 5 Nov 2008, Enda O'Connor wrote:
 
 On 11/05/08 13:02, Krzys wrote:
 I am not sure what I did wrong but I did follow up all the steps to get 
 my system moved from ufs to zfs and not I am unable to boot it... can 
 anyone suggest what I could do to fix it?
 
 here are all my steps:
 
 [00:26:38] @adas: /root  zpool create rootpool c1t1d0s0
 [00:26:57] @adas: /root  lucreate -c ufsBE -n zfsBE -p rootpool
 Analyzing system configuration.
 Comparing source boot environment ufsBE file systems with the file
 system(s) you specified for the new boot environment. Determining which
 file systems should be in the new boot environment.
 Updating boot environment description database on all BEs.
 Updating system configuration files.
 The device /dev/dsk/c1t1d0s0 is not a root device for any boot 
 environment; cannot get BE ID.
 Creating configuration for boot environment zfsBE.
 Source boot environment is ufsBE.
 Creating boot environment zfsBE.
 Creating file systems on boot environment zfsBE.
 Creating zfs file system for / in zone global

Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Krzys
I did upgrade my U5 to U6 from DVD, went trough the upgrade process.
my file system is setup as follow:
[10:11:54] [EMAIL PROTECTED]: /root  df -h | egrep -v 
platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c1t0d0s0       16G   7.2G   8.4G    47%    /
swap                   8.3G   1.5M   8.3G     1%    /etc/svc/volatile
/dev/dsk/c1t0d0s6       16G   8.7G   6.9G    56%    /usr
/dev/dsk/c1t0d0s1       16G   2.5G    13G    17%    /var
swap                   8.5G   229M   8.3G     3%    /tmp
swap                   8.3G    40K   8.3G     1%    /var/run
/dev/dsk/c1t0d0s7       78G   1.2G    76G     2%    /export/home
rootpool                33G    19K    21G     1%    /rootpool
rootpool/ROOT           33G    18K    21G     1%    /rootpool/ROOT
rootpool/ROOT/zfsBE     33G    31M    21G     1%    /.alt.tmp.b-UUb.mnt
/export/home            78G   1.2G    76G     2%    /.alt.tmp.b-UUb.mnt/export/home
/rootpool               21G    19K    21G     1%    /.alt.tmp.b-UUb.mnt/rootpool
/rootpool/ROOT          21G    18K    21G     1%    /.alt.tmp.b-UUb.mnt/rootpool/ROOT
swap                   8.3G     0K   8.3G     0%    /.alt.tmp.b-UUb.mnt/var/run
swap                   8.3G     0K   8.3G     0%    /.alt.tmp.b-UUb.mnt/tmp
[10:12:00] [EMAIL PROTECTED]: /root 


So I have /, /usr, /var and /export/home on that primary disk. The original disk 
is 140GB; this new one is only 36GB, but disk utilization on that primary disk is 
much lower, so it should easily fit on it.

/ 7.2GB
/usr 8.7GB
/var 2.5GB
/export/home 1.2GB
total space 19.6GB
I did notice that lucreate allocated 8GB to SWAP and 4GB to DUMP
total space needed 31.6GB
It seems like the total available disk space on my disk should be 33.92GB,
so it's quite close, as the two numbers nearly meet. So to make sure, I will swap 
the disk for a 72GB one and try again. I do not believe that I need to match my 
main disk size of 146GB, as I am not using that much disk space on it. But let me 
try this, and it might be why I am getting this problem...



On Wed, 5 Nov 2008, Enda O'Connor wrote:

 Hi Krzys
 Also some info on the actual system
 ie what was it upgraded to u6 from and how.
  and an idea of how the filesystems are laid out, ie is usr separate from / 
 and so on ( maybe a df -k ). Don't appear to have any zones installed, just 
 to confirm.
 Enda

 On 11/05/08 14:07, Enda O'Connor wrote:
 Hi
 did you get a core dump?
 would be nice to see the core file to get an idea of what dumped core,
 might configure coreadm if not already done
 run coreadm first, if the output looks like
 
 # coreadm
  global core file pattern: /var/crash/core.%f.%p
  global core file content: default
init core file pattern: core
init core file content: default
 global core dumps: enabled
per-process core dumps: enabled
   global setid core dumps: enabled
  per-process setid core dumps: disabled
  global core dump logging: enabled
 
 then all should be good, and cores should appear in /var/crash
 
 otherwise the following should configure coreadm:
 coreadm -g /var/crash/core.%f.%p
 coreadm -G all
 coreadm -e global
  coreadm -e process
 
 
 coreadm -u to load the new settings without rebooting.
 
 also might need to set the size of the core dump via
 ulimit -c unlimited
 check ulimit -a first.
 
 then rerun test and check /var/crash for core dump.
 
 If that fails a truss via say truss -fae -o /tmp/truss.out lucreate -c 
 ufsBE -n zfsBE -p rootpool
 
 might give an indication, look for SIGBUS in the truss log
 
 NOTE, that you might want to reset the coreadm and ulimit for coredumps 
 after this, in order to not risk filling the system with coredumps in the 
 case of some utility coredumping in a loop say.
 
 
 Enda
 On 11/05/08 13:46, Krzys wrote:
 
 On Wed, 5 Nov 2008, Enda O'Connor wrote:
 
 On 11/05/08 13:02, Krzys wrote:
 I am not sure what I did wrong but I did follow up all the steps to get 
 my system moved from ufs to zfs and not I am unable to boot it... can 
 anyone suggest what I could do to fix it?
 
 here are all my steps:
 
 [00:26:38] @adas: /root  zpool create rootpool c1t1d0s0
 [00:26:57] @adas: /root  lucreate -c ufsBE -n zfsBE -p rootpool
 Analyzing system configuration.
 Comparing source boot environment ufsBE file systems with the file
 system(s) you specified for the new boot environment. Determining which
 file systems should be in the new boot environment.
 Updating boot environment description database on all BEs.
 Updating system configuration files.
 The device /dev/dsk/c1t1d0s0 is not a root device for any boot 
 environment; cannot get BE ID.
 Creating configuration for boot environment zfsBE.
 Source boot environment is ufsBE.
 Creating boot environment zfsBE.
 Creating file systems on boot environment zfsBE.
 Creating zfs file system for / in zone global on 
 rootpool/ROOT/zfsBE.
 Populating file systems on boot environment zfsBE.
 Checking selection

Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Krzys
This is so bizarre, I am unable to get past this problem. I thought I had not 
enough space on my hard drive (the new one) so I replaced it with a 72GB drive, 
but I am still getting that bus error. Originally, when I restarted my server it 
did not want to boot, so I had to power it off and then back on, and it then 
booted up. But I constantly get this Bus Error - core dumped.

Anyway, in my /var/crash I see hundreds of core.vold files and 3 core.cpio files. 
I would imagine core.cpio are the ones that are a direct result of what I am 
probably experiencing.

-rw---   1 root root 4126301 Nov  5 19:22 core.vold.24854
-rw---   1 root root 4126301 Nov  5 19:22 core.vold.24867
-rw---   1 root root 4126301 Nov  5 19:22 core.vold.24880
-rw---   1 root root 4126301 Nov  5 19:22 core.vold.24893
-rw---   1 root root 4126301 Nov  5 19:22 core.vold.24906
-rw---   1 root root 4126301 Nov  5 19:22 core.vold.24919
-rw---   1 root root 4126301 Nov  5 19:22 core.vold.24932
-rw---   1 root root 4126301 Nov  5 19:22 core.vold.24950
-rw---   1 root root 4126301 Nov  5 19:22 core.vold.24978
drwxr-xr-x   3 root root   81408 Nov  5 20:06 .
-rw---   1 root root 31351099 Nov  5 20:06 core.cpio.6208



On Wed, 5 Nov 2008, Enda O'Connor wrote:

 Hi
 Looks ok, some mounts left over from the previous fail.
 In regards to swap and dump on the zpool, you can set them:
 zfs set volsize=1G rootpool/dump
 zfs set volsize=1G rootpool/swap

 for instance; of course the above are only an example of how to do it.
 Or make the zvols for rootpool/dump etc. before lucreate, in which case it will 
 take the swap and dump size you have preset.
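
 A rough sketch of that second option, presetting the zvols before lucreate 
 (the sizes here are just examples):

 zfs create -V 2G rootpool/swap
 zfs create -V 2G rootpool/dump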

 But I think we need to see the coredump/truss at this point to get an idea of 
 where things went wrong.
 Enda

 On 11/05/08 15:38, Krzys wrote:
 I did upgrade my U5 to U6 from DVD, went trough the upgrade process.
 my file system is setup as follow:
 [10:11:54] [EMAIL PROTECTED]: /root  df -h | egrep -v 
 platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr
 Filesystem size   used  avail capacity  Mounted on
 /dev/dsk/c1t0d0s0   16G   7.2G   8.4G47%/
 swap   8.3G   1.5M   8.3G 1%/etc/svc/volatile
 /dev/dsk/c1t0d0s6   16G   8.7G   6.9G56%/usr
 /dev/dsk/c1t0d0s1   16G   2.5G13G17%/var
 swap   8.5G   229M   8.3G 3%/tmp
 swap   8.3G40K   8.3G 1%/var/run
 /dev/dsk/c1t0d0s7   78G   1.2G76G 2%/export/home
 rootpool33G19K21G 1%/rootpool
 rootpool/ROOT   33G18K21G 1%/rootpool/ROOT
 rootpool/ROOT/zfsBE 33G31M21G 1%/.alt.tmp.b-UUb.mnt
 /export/home78G   1.2G76G 2% 
 /.alt.tmp.b-UUb.mnt/export/home
 /rootpool   21G19K21G 1% 
 /.alt.tmp.b-UUb.mnt/rootpool
 /rootpool/ROOT  21G18K21G 1% 
 /.alt.tmp.b-UUb.mnt/rootpool/ROOT
 swap   8.3G 0K   8.3G 0% 
 /.alt.tmp.b-UUb.mnt/var/run
 swap   8.3G 0K   8.3G 0%/.alt.tmp.b-UUb.mnt/tmp
 [10:12:00] [EMAIL PROTECTED]: /root 
 
 
 so I have /, /usr, /var and /export/home on that primary disk. The original 
 disk is 140gb, this new one is only 36gb, but the primary disk is much 
 less utilized, so the data should easily fit on it.
 
 / 7.2GB
 /usr 8.7GB
 /var 2.5GB
 /export/home 1.2GB
 total space 19.6GB
 I did notice that lucreate did allocate 8GB to SWAP and 4GB to DUMP
 total space needed 31.6GB
 seems like the total available disk space on my disk should be 33.92GB
 so it's quite close, as the two numbers nearly meet. So to make sure, I will 
 swap in a 72gb disk and try again. I do not believe that I need to 
 match my main disk size of 146gb, as I am not using that much disk space on 
 it. But let me try this; it might be why I am getting this problem...
 
 
 
 On Wed, 5 Nov 2008, Enda O'Connor wrote:
 
 Hi Krzys
 Also some info on the actual system
 ie what was it upgraded to u6 from and how.
 and an idea of how the filesystems are laid out, i.e. is usr separate from / 
 and so on (maybe a df -k). You don't appear to have any zones installed, 
 just to confirm.
 Enda
 
 On 11/05/08 14:07, Enda O'Connor wrote:
 Hi
 did you get a core dump?
 would be nice to see the core file to get an idea of what dumped core,
 might configure coreadm if not already done
 run coreadm first, if the output looks like
 
 # coreadm
  global core file pattern: /var/crash/core.%f.%p
  global core file content: default
init core file pattern: core
init core file content: default
 global core dumps: enabled
per-process core dumps: enabled
   global setid core dumps: enabled
  per-process setid core dumps: disabled
  global core dump logging: enabled
 
 then all should be good, and cores should appear in /var/crash
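 If it does not look like that, a sketch of the usual way to turn it on (the 
 pattern below just mirrors the example output above):

 coreadm -g /var/crash/core.%f.%p -e global    # global cores into /var/crash, named program.pid
 coreadm -e process                            # keep per-process core dumps enabled as well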

Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Krzys
What makes me wonder is why I am not even able to see anything under boot -L, 
and why it is just not seeing this disk as a boot device. So strange.

On Wed, 5 Nov 2008, Krzys wrote:

 This is so bizarre, I am unable to get past this problem. I thought I had not enough
 space on my hard drive (the new one) so I replaced it with a 72gb drive, but I am still
 getting that bus error. Originally when I restarted my server it did not want to
 boot, so I had to power it off and then back on and it then booted up. But
 I am constantly getting this Bus Error - core dumped.

 Anyway, in my /var/crash I see hundreds of core.vold files and 3 core.cpio files.
 I would imagine the core.cpio ones are the direct result of what I am
 probably experiencing.

 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24854
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24867
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24880
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24893
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24906
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24919
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24932
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24950
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24978
 drwxr-xr-x   3 root root   81408 Nov  5 20:06 .
 -rw---   1 root root 31351099 Nov  5 20:06 core.cpio.6208



 On Wed, 5 Nov 2008, Enda O'Connor wrote:

 Hi
 Looks ok, just some mounts left over from the previous failure.
 In regards to swap and dump on the zpool, you can set their sizes:
 zfs set volsize=1G rootpool/dump
 zfs set volsize=1G rootpool/swap

 for instance; of course the above is only an example of how to do it.
 Or make the zvols for rootpool/dump etc. before lucreate, in which case it will
 take the swap and dump sizes you have preset.

 But I think we need to see the coredump/truss at this point to get an idea of
 where things went wrong.
 Enda

 On 11/05/08 15:38, Krzys wrote:
 I did upgrade my U5 to U6 from DVD, went through the upgrade process.
 my file system is setup as follow:
 [10:11:54] [EMAIL PROTECTED]: /root  df -h | egrep -v
 platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr
 Filesystem size   used  avail capacity  Mounted on
 /dev/dsk/c1t0d0s0   16G   7.2G   8.4G47%/
 swap   8.3G   1.5M   8.3G 1%/etc/svc/volatile
 /dev/dsk/c1t0d0s6   16G   8.7G   6.9G56%/usr
 /dev/dsk/c1t0d0s1   16G   2.5G13G17%/var
 swap   8.5G   229M   8.3G 3%/tmp
 swap   8.3G40K   8.3G 1%/var/run
 /dev/dsk/c1t0d0s7   78G   1.2G76G 2%/export/home
 rootpool33G19K21G 1%/rootpool
 rootpool/ROOT   33G18K21G 1%/rootpool/ROOT
 rootpool/ROOT/zfsBE 33G31M21G 1%/.alt.tmp.b-UUb.mnt
 /export/home78G   1.2G76G 2%
 /.alt.tmp.b-UUb.mnt/export/home
 /rootpool   21G19K21G 1%
 /.alt.tmp.b-UUb.mnt/rootpool
 /rootpool/ROOT  21G18K21G 1%
 /.alt.tmp.b-UUb.mnt/rootpool/ROOT
 swap   8.3G 0K   8.3G 0%
 /.alt.tmp.b-UUb.mnt/var/run
 swap   8.3G 0K   8.3G 0%/.alt.tmp.b-UUb.mnt/tmp
 [10:12:00] [EMAIL PROTECTED]: /root 


 so I have /, /usr, /var and /export/home on that primary disk. The original
 disk is 140gb, this new one is only 36gb, but the primary disk is much
 less utilized, so the data should easily fit on it.

 / 7.2GB
 /usr 8.7GB
 /var 2.5GB
 /export/home 1.2GB
 total space 19.6GB
 I did notice that lucreate did allocate 8GB to SWAP and 4GB to DUMP
 total space needed 31.6GB
 seems like the total available disk space on my disk should be 33.92GB
 so it's quite close, as the two numbers nearly meet. So to make sure, I will
 swap in a 72gb disk and try again. I do not believe that I need to
 match my main disk size of 146gb, as I am not using that much disk space on
 it. But let me try this; it might be why I am getting this problem...



 On Wed, 5 Nov 2008, Enda O'Connor wrote:

 Hi Krzys
 Also some info on the actual system
 ie what was it upgraded to u6 from and how.
 and an idea of how the filesystems are laid out, i.e. is usr separate from /
 and so on (maybe a df -k). You don't appear to have any zones installed,
 just to confirm.
 Enda

 On 11/05/08 14:07, Enda O'Connor wrote:
 Hi
 did you get a core dump?
 would be nice to see the core file to get an idea of what dumped core,
 might configure coreadm if not already done
 run coreadm first, if the output looks like

 # coreadm
  global core file pattern: /var/crash/core.%f.%p
  global core file content: default
init core file pattern: core
init core file content: default
 global core dumps: enabled
per-process core dumps: enabled
   global setid

[zfs-discuss] ufs to zfs root file system migration

2008-11-04 Thread Krzys

I did upgrade my Solaris and wanted to move from ufs to zfs; I did read about it 
a little but I am not sure about all the steps...

Anyway, I do understand that I cannot use a whole disk for the root zpool, so I cannot 
use c1t1d0 but have to use c1t1d0s0 instead, is that correct?

Also, all the documents that I've found say how to use LU to do this task, but I was 
wondering how I can do my migration when I have a few partitions.

Those are my partitions:
/dev/dsk/c1t0d0s0    16524410 11581246  4777920  71%  /
/dev/dsk/c1t0d0s6    16524410  9073610  7285556  56%  /usr
/dev/dsk/c1t0d0s1    16524410  1997555 14361611  13%  /var
/dev/dsk/c1t0d0s7    81287957  1230221 79244857   2%  /export/home

When I create the root zpool, do I need to create a pool for each of those partitions?
Do I need to format my disk and give slice s0 all the space?

In the LU environment, how would I specify that / /usr /var and possibly 
/export/home should go to that one pool? How about the swap and dump volumes? I did not 
see examples or info on how that could be accomplished. I would appreciate some 
hints, or maybe there is already a document out there somewhere; I was just unable 
to locate it...
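
A rough sketch of the usual Live Upgrade flow for this (the slice name is only 
an example, and lucreate sizes the swap and dump zvols itself unless they 
already exist in the pool):

zpool create rootpool c1t1d0s0            # the root pool must sit on an SMI-labelled slice, not a whole disk
lucreate -c ufsBE -n zfsBE -p rootpool    # copies the critical file systems (/, /usr, /var) into one ZFS BE
luactivate zfsBE                          # mark the new BE as the one to boot
init 6                                    # reboot into it

Non-critical file systems such as /export/home are normally shared between the 
BEs rather than copied.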

Greatly appreciate your help in pointing me to the right direction.

Regards,

Chris


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] compression on for zpool boot disk?

2008-11-04 Thread Krzys
compression is not supported for rootpool?

# zpool create rootpool c1t1d0s0
# zfs set compression=gzip-9 rootpool
# lucreate -c ufsBE -n zfsBE -p rootpool
Analyzing system configuration.
ERROR: ZFS pool rootpool does not support boot environments
#

Why? Are there any plans to make compression available on that disk? How about 
encryption, will that be available on a zfs boot disk at some point too?
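
If the gzip setting is what lucreate is objecting to (gzip-compressed root file 
systems were not bootable at this point; only the default lzjb level was), a 
sketch of a way around it:

zfs set compression=on rootpool            # lzjb, which the boot code can read
lucreate -c ufsBE -n zfsBE -p rootpool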

Thank you.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-15 Thread Krzys
I was hoping that in U5 at least ZFS version 5 would be included, but it was not; 
do you think that will be in U6?

On Fri, 16 May 2008, Robin Guo wrote:

 Hi, Paul

  Most of the features and bug fixes up to around Nevada build 87 (or 88?) will be
 backported into s10u6.
 From the outside (not internally) it will look about the same as
 openSolaris 05/08,
 but certainly some other features, such as CIFS, have no plans to be backported to
 s10u6 yet, so ZFS
 itself will be fully ready but without integration in those areas. That depends on
 how they co-operate.

  At least, s10u6 will contain the L2ARC cache, ZFS as a root filesystem, etc.

 Paul B. Henson wrote:
 We've been working on a prototype of a ZFS file server for a while now,
 based on Solaris 10. Now that official support is available for
 openSolaris, we are looking into that as a possible option as well.
 openSolaris definitely has a greater feature set, but is still a bit rough
 around the edges for production use.

 I've heard that a considerable amount of ZFS improvements are slated to
 show up in S10U6. I was wondering if anybody could give an unofficial list
 of what will probably be deployed in S10U6, and how that will compare
 feature wise to openSolaris 05/08. Some rough guess at an ETA would also be
 nice :).

 Thanks...




 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] question regarding gzip compression in S10

2008-05-12 Thread Krzys
I just upgraded to Sol 10 U5 and I was hoping that gzip compression would be 
there, but when I run zpool upgrade it only shows version 4:

[10:05:36] [EMAIL PROTECTED]: /export/home  zpool upgrade
This system is currently running ZFS version 4.

Do you know when version 5 will be included in Solaris 10? Are there any plans 
for it, or will it be in Sol 11 only?
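
A quick way to see every on-disk version the running release understands, and 
what each one adds:

zpool upgrade -v    # lists the pool versions supported by this kernel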

Regards,

Chris

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] lost zpool when server restarted.

2008-05-04 Thread Krzys
Because this system was in production I had to recover fairly quickly, so I was 
unable to play with it much more; we had to destroy it, recreate a new pool and 
then recover the data from tapes.

It's a mystery as to why it rebooted in the middle of the night; we could not 
figure that out, or why the pool had this problem... so unfortunately I will not be 
able to follow what you, Victor and Jeff were suggesting.


Before we destroyed that pool I did get the output of fmdump on that system to see 
what failed etc. As you can see it happened at around 3:54 am on Sunday morning; 
there was no one on the system from an admin perspective to break anything. The only 
thing I can think of would be the backups running, which could generate 
more traffic, but then I have had that system set up this way for over a year, and no 
changes were made to it from a storage perspective.

Yes, I did see this URL: 
http://www.opensolaris.org/jive/thread.jspa?messageID=220125
but unfortunately I was unable to apply it in my situation as I had no idea what 
values to apply... :(

Anyway, here is the fmdump output:

bash-3.00# fmdump -eV

TIME   CLASS

Apr 27 2008 03:54:05.605369200 ereport.fs.zfs.vdev.open_failed

nvlist version: 0

class = ereport.fs.zfs.vdev.open_failed

ena = 0x18594234ea1

detector = (embedded nvlist)

nvlist version: 0

version = 0x0

scheme = zfs

pool = 0x39918ce32491d000

vdev = 0xc40696f31f78fd48

(end detector)



pool = mypool

pool_guid = 0x39918ce32491d000

pool_context = 1

vdev_guid = 0xc40696f31f78fd48

vdev_type = disk

vdev_path = /dev/dsk/emcpower0a

parent_guid = 0x39918ce32491d000

parent_type = root

prev_state = 0x1

__ttl = 0x1

__tod = 0x4814311d 0x24153370



Apr 27 2008 03:54:05.605369725 ereport.fs.zfs.vdev.open_failed

nvlist version: 0

class = ereport.fs.zfs.vdev.open_failed

ena = 0x18594234ea1

detector = (embedded nvlist)

nvlist version: 0

version = 0x0

scheme = zfs

pool = 0x39918ce32491d000

vdev = 0xd56fa2d7686dae8c

(end detector)



pool = mypool

pool_guid = 0x39918ce32491d000

pool_context = 1

vdev_guid = 0xd56fa2d7686dae8c

vdev_type = disk

vdev_path = /dev/dsk/emcpower2a

parent_guid = 0x39918ce32491d000

parent_type = root

prev_state = 0x1

__ttl = 0x1

__tod = 0x4814311d 0x2415357d



Apr 27 2008 03:54:05.605369225 ereport.fs.zfs.zpool

nvlist version: 0

class = ereport.fs.zfs.zpool

ena = 0x18594234ea1

detector = (embedded nvlist)

nvlist version: 0

version = 0x0

scheme = zfs

pool = 0x39918ce32491d000

(end detector)



pool = mypool

pool_guid = 0x39918ce32491d000

pool_context = 1

__ttl = 0x1

__tod = 0x4814311d 0x24153389



Apr 27 2008 03:56:28.180698100 ereport.fs.zfs.vdev.open_failed

nvlist version: 0

class = ereport.fs.zfs.vdev.open_failed

ena = 0x398b69181e00401

detector = (embedded nvlist)

nvlist version: 0

version = 0x0

scheme = zfs

pool = 0x39918ce32491d000

vdev = 0xc40696f31f78fd48

(end detector)



pool = mypool

pool_guid = 0x39918ce32491d000

pool_context = 1

vdev_guid = 0xc40696f31f78fd48

vdev_type = disk

vdev_path = /dev/dsk/emcpower0a

parent_guid = 0x39918ce32491d000

parent_type = root

prev_state = 0x1

__ttl = 0x1

__tod = 0x481431ac 0xac53bf4



Apr 27 2008 03:56:28.180698375 ereport.fs.zfs.vdev.open_failed

nvlist version: 0

class = ereport.fs.zfs.vdev.open_failed

ena = 0x398b69181e00401

detector = (embedded nvlist)

nvlist version: 0

version = 0x0

scheme = zfs

pool = 0x39918ce32491d000

vdev = 0xd56fa2d7686dae8c

(end detector)



pool = mypool

pool_guid = 0x39918ce32491d000

pool_context = 1

vdev_guid = 0xd56fa2d7686dae8c

vdev_type = disk

vdev_path = /dev/dsk/emcpower2a

parent_guid = 0x39918ce32491d000

parent_type = root

prev_state = 0x1

__ttl = 0x1

__tod = 0x481431ac 0xac53d07



Apr 27 2008 03:56:28.180698500 ereport.fs.zfs.zpool

nvlist version: 0

class = ereport.fs.zfs.zpool

ena = 0x398b69181e00401

detector = (embedded nvlist)

nvlist version: 0

version = 0x0

scheme = zfs

pool = 0x39918ce32491d000

(end detector)




[zfs-discuss] lost zpool when server restarted.

2008-04-29 Thread Krzys



I have a problem on one of my systems with zfs. I used to have a zpool created 
with 3 luns on SAN. I did not have to put any raid or anything on it since it 
was already using raid on the SAN. Anyway, the server rebooted and I cannot see my 
pools. 
When I try to import the pool it fails. I am using an EMC CLARiiON as the SAN, with 
powerpath.
# zpool list
no pools available
# zpool import -f
  pool: mypool
  id: 4148251638983938048
state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
  devices and try again.
  see: http://www.sun.com/msg/ZFS-8000-3C
config:
  mypool UNAVAIL insufficient replicas
  emcpower0a UNAVAIL cannot open
  emcpower2a UNAVAIL cannot open
  emcpower3a ONLINE

I think I am able to see all the luns and I should be able to access them on my 
sun box.
# powermt display dev=all
Pseudo name=emcpower0a
CLARiiON ID=APM00070202835 [NRHAPP02]
Logical device ID=6006016045201A001264FB20990FDC11 [LUN 13]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B
==
 Host --- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==
3074 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c2t5006016041E035A4d0s0 SP A4 active 
alive 0 0
3074 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c2t5006016941E035A4d0s0 SP B5 active 
alive 0 0
3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED],2/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c3t5006016141E035A4d0s0 SP A5 
active alive 0 0
3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED],2/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c3t5006016841E035A4d0s0 SP B4 
active alive 0 0


Pseudo name=emcpower1a
CLARiiON ID=APM00070202835 [NRHAPP02]
Logical device ID=6006016045201A004C1388343C10DC11 [LUN 14]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B
==
 Host --- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==
3074 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c2t5006016041E035A4d1s0 SP A4 active 
alive 0 0
3074 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c2t5006016941E035A4d1s0 SP B5 active 
alive 0 0
3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED],2/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c3t5006016141E035A4d1s0 SP A5 
active alive 0 0
3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED],2/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c3t5006016841E035A4d1s0 SP B4 
active alive 0 0


Pseudo name=emcpower3a
CLARiiON ID=APM00070202835 [NRHAPP02]
Logical device ID=6006016045201A00A82C68514E86DC11 [LUN 7]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B
==
 Host --- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==
3074 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c2t5006016041E035A4d3s0 SP A4 active 
alive 0 0
3074 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c2t5006016941E035A4d3s0 SP B5 active 
alive 0 0
3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED],2/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c3t5006016141E035A4d3s0 SP A5 
active alive 0 0
3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED],2/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c3t5006016841E035A4d3s0 SP B4 
active alive 0 0


Pseudo name=emcpower2a
CLARiiON ID=APM00070202835 [NRHAPP02]
Logical device ID=600601604B141B00C2F6DB2AC349DC11 [LUN 24]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B
==
 Host --- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==
3074 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c2t5006016041E035A4d2s0 SP A4 active 
alive 0 0
3074 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c2t5006016941E035A4d2s0 SP B5 active 
alive 0 0
3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED],2/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c3t5006016141E035A4d2s0 SP A5 
active alive 0 0
3072 [EMAIL 

[zfs-discuss] zfs mounting

2007-10-30 Thread Krzys

It would be nice to be able to mount a zfs file system by its mountpoint as well, 
and not just by the pool/dataset name... For example I have the following:

mypool5 257G   199G  24.5K  /mypool5
mypool5/d5  257G   199G   257G  /d/d5

The only way to mount it is by zfs mount mypool5 and zfs mount mypool5/d5, but 
it would be nice to be able to mount mypool5/d5 by issuing zfs mount /d/d5.

Just a suggestion to make zfs even easier to use... but then why stop there, why 
not be able to mount it using just the mount command?
mount /d/d5
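
One way to get plain mount/umount behaviour today is the legacy mountpoint 
setting; a sketch using the dataset above:

zfs set mountpoint=legacy mypool5/d5    # hand mount control back to the traditional tools
mount -F zfs mypool5/d5 /d/d5           # now it mounts like any other filesystem (vfstab works too)
umount /d/d5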

Just my thought, as I needed to mount this usb drive after it had been 
disconnected and it took me a few minutes to figure out... Sorry if this was 
covered in the past; I did not take the time to search the archives...

Regards,

Chris

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool question

2007-10-29 Thread Krzys

Hello folks, I am running Solaris 10 U3 and I have a small problem that I don't 
know how to fix...

I had a pool of two drives:

bash-3.00# zpool status
   pool: mypool
  state: ONLINE
  scrub: none requested
config:

 NAME  STATE READ WRITE CKSUM
 mypoolONLINE   0 0 0
   emcpower0a  ONLINE   0 0 0
   emcpower1a  ONLINE   0 0 0

errors: No known data errors

I added another drive

so now I have pool of 3 drives

bash-3.00# zpool status
   pool: mypool
  state: ONLINE
  scrub: none requested
config:

 NAME  STATE READ WRITE CKSUM
 mypoolONLINE   0 0 0
   emcpower0a  ONLINE   0 0 0
   emcpower1a  ONLINE   0 0 0
   emcpower2a  ONLINE   0 0 0

errors: No known data errors

Everything is great, but I've made a mistake: I would like to remove 
emcpower2a from my pool and I cannot do that...

Well, the mistake that I made is that I did not format my device correctly, so 
instead of adding 125gig I added 128meg.

here is my partition on that disk:
partition print
Current partition table (original):
Total disk cylinders available: 63998 + 2 (reserved cylinders)

Part  TagFlag Cylinders SizeBlocks
   0   rootwm   0 -63  128.00MB(64/0/0)   262144
   1   swapwu  64 -   127  128.00MB(64/0/0)   262144
   2 backupwu   0 - 63997  125.00GB(63998/0/0) 262135808
   3 unassignedwm   00 (0/0/0) 0
   4 unassignedwm   00 (0/0/0) 0
   5 unassignedwm   00 (0/0/0) 0
   6usrwm 128 - 63997  124.75GB(63870/0/0) 261611520
   7 unassignedwm   00 (0/0/0) 0

partition

What I would like to do is remove my emcpower2a device, format it, and then add 
the 125gig slice instead of the 128meg one. Is it possible to do this in Solaris 10 
U3? If not, what are my options?
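
In this release a top-level device cannot be removed from a pool, but it can be 
replaced by one of equal or greater size, which amounts to the same fix; a 
sketch, where the second LUN name is only a placeholder for a correctly sized 
device:

zpool replace mypool emcpower2a emcpower3a   # swap the 128meg slice for the properly sized LUN
zpool status mypool                          # wait for the resilver to finish before relabelling the old one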

Regards,

Chris

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs/zpools iscsi

2007-10-12 Thread Krzys
Hello all, sorry if somebody has already asked this. I was playing today with 
iSCSI and I was able to create a zpool and then, via iSCSI, see it on two 
other hosts. I was curious whether I could use zfs to have it shared between those two 
hosts, but apparently I was unable to do so, for obvious reasons. On my Linux 
Oracle RAC I was using OCFS, which works just as I need it; does anyone know if 
such a thing could be achieved with zfs, maybe if not now then in the future? Is 
there anything that I could do at this moment to have my two other 
Solaris clients see the zpool that I am presenting via iscsi to them both? Are 
there any solutions out there of this kind?

Thanks so much for your help.

Regards,

Chris

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool upgrade to more storage

2007-08-11 Thread Krzys

Hello everyone, I am slowly running out of space in my zpool... so I wanted to 
replace my zpool with a different zpool.

My current zpool is:
 zpool list
NAMESIZEUSED   AVAILCAP  HEALTH ALTROOT
mypool  278G263G   14.7G94%  ONLINE -

 zpool status mypool
   pool: mypool
  state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
 continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scrub: resilver in progress, 11.37% done, 10h0m to go
config:

 NAMESTATE READ WRITE CKSUM
 mypool  ONLINE   0 0 0
   mirrorONLINE   0 0 0
 c1t2d0  ONLINE   0 0 0
 c1t3d0  ONLINE   0 0 0

errors: No known data errors

(yes I know its resilvering one of the disks...)


Anyway, that is a simple mirror zpool. I would like to create another pool, let's 
say mypool2, with a few more disks and use raidz2 instead... What would be my 
options to do this transfer? I cannot attach disks to this existing pool; I don't 
think that's an option because it is a mirror and not raidz2... Can I create a 
raidz2 vdev and just add it to mypool using the zpool add option? And once it is added, 
is there any way to remove the original mirror from it? Now the tricky part: I 
have lots of snapshots on mypool and I would like to keep them... Another 
option I think I have is to just create mypool2 the way I want it, which is 
raidz2, and then use zfs send and receive to move the data around, then destroy the 
original mirror when I am done replacing it with this one...

What do you think? What would you recommend? With the second option I probably 
would need to take the system offline to do it, and I don't even know if the first 
option would work, where I would just add the newly created raidz2 to mypool and then 
remove the original mirror from it...
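
For what it's worth, the add-then-remove idea will not work here: once a raidz2 
top-level vdev is added to mypool it cannot be removed again, and neither can the 
original mirror. So send/receive is the realistic path; a rough sketch, with 
device, dataset and snapshot names as placeholders only:

zpool create mypool2 raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0   # build the new layout
zfs send mypool/data@snap1 | zfs receive mypool2/data            # full copy of the oldest snapshot to keep
zfs send -i snap1 mypool/data@snap2 | zfs receive mypool2/data   # then each later snapshot incrementally

Repeat the incremental for each snapshot, do a final one with the data quiesced, 
then switch over and destroy the old pool.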

Regards,

Chris



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs migration

2007-06-01 Thread Krzys
Yes, but my goal is to replace the existing disk, which is an internal 72gb disk, with a SAN 
storage disk which is 100GB in size... As long as I am able to detach the 
old one then it's going to be great... otherwise I will be stuck with one 
internal disk and one SAN disk, which I would rather not have.


Regards,

Chris


On Fri, 1 Jun 2007, Will Murnane wrote:


On 5/31/07, Krzys [EMAIL PROTECTED] wrote:

so I run the replace command and I get an error:
bash-3.00# zpool replace mypool c1t2d0 emcpower0a
cannot replace c1t2d0 with emcpower0a: device is too small

Try zpool attach mypool emcpower0a; see
http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qrt?a=view .

Will




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs migration

2007-06-01 Thread Krzys

Nevertheless, I get the following error:
bash-3.00# zpool attach mypool emcpower0a
missing new_device specification
usage:
attach [-f] pool device new_device

bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu May 31 23:01:09 2007
config:

NAMESTATE READ WRITE CKSUM
mypool  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
c1t3d0  ONLINE   0 0 0

errors: No known data errors
bash-3.00# zpool attach mypool c1t2d0 emcpower0a
cannot attach emcpower0a to c1t2d0: device is too small
bash-3.00#

Is there any way to add that emc san lun to zfs at all? It seems like emcpower0a 
cannot be added in any way...


But check this out: I did try to add it as a new pool and here is what I got:
bash-3.00# zpool create mypool2 emcpower0a


bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu May 31 23:01:09 2007
config:

NAMESTATE READ WRITE CKSUM
mypool  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
c1t3d0  ONLINE   0 0 0

errors: No known data errors

  pool: mypool2
 state: ONLINE
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
mypool2   ONLINE   0 0 0
  emcpower0a  ONLINE   0 0 0

errors: No known data errors
bash-3.00# zpool list
NAMESIZEUSED   AVAILCAP  HEALTH ALTROOT
mypool   68G   53.1G   14.9G78%  ONLINE -
mypool2 123M   83.5K123M 0%  ONLINE -
bash-3.00#







On Fri, 1 Jun 2007, Will Murnane wrote:


On 5/31/07, Krzys [EMAIL PROTECTED] wrote:

so I run the replace command and I get an error:
bash-3.00# zpool replace mypool c1t2d0 emcpower0a
cannot replace c1t2d0 with emcpower0a: device is too small

Try zpool attach mypool emcpower0a; see
http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qrt?a=view .

Will




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs migration

2007-06-01 Thread Krzys

Ok, I think I figured out what the problem is.
Well, what zpool does for that emc powerpath device is take partition 0 of the disk and 
attach it to my pool, so when I added emcpower0a I got the 
following:

bash-3.00# zpool list
NAMESIZEUSED   AVAILCAP  HEALTH ALTROOT
mypool   68G   53.1G   14.9G78%  ONLINE -
mypool2 123M   83.5K123M 0%  ONLINE -

because my emcpower0a structure looked like this:
format verify

Primary label contents:

Volume name = 
ascii name  = DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16
pcyl= 51200
ncyl= 51198
acyl=2
nhead   =  256
nsect   =   16
Part  TagFlag Cylinders SizeBlocks
  0   rootwm   0 -63  128.00MB(64/0/0)   262144
  1   swapwu  64 -   127  128.00MB(64/0/0)   262144
  2 backupwu   0 - 51197  100.00GB(51198/0/0) 209707008
  3 unassignedwm   00 (0/0/0) 0
  4 unassignedwm   00 (0/0/0) 0
  5 unassignedwm   00 (0/0/0) 0
  6usrwm 128 - 51197   99.75GB(51070/0/0) 209182720
  7 unassignedwm   00 (0/0/0) 0


So what I did was change my layout to look like this:
Part  TagFlag Cylinders SizeBlocks
  0   rootwm   0 - 51197  100.00GB(51198/0/0) 209707008
  1   swapwu   00 (0/0/0) 0
  2 backupwu   0 - 51197  100.00GB(51198/0/0) 209707008
  3 unassignedwm   00 (0/0/0) 0
  4 unassignedwm   00 (0/0/0) 0
  5 unassignedwm   00 (0/0/0) 0
  6usrwm   00 (0/0/0) 0
  7 unassignedwm   00 (0/0/0) 0


I created a new pool and I have the following:
bash-3.00# zpool list
NAMESIZEUSED   AVAILCAP  HEALTH ALTROOT
mypool   68G   53.1G   14.9G78%  ONLINE -
mypool299.5G 80K   99.5G 0%  ONLINE -

So now I will try to replace it... I guess zpool does treat some devices differently, 
in particular the ones that are under emc powerpath control: it uses 
the first slice of the disk to create the pool and not the whole device...
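
A quick sanity check before retrying the replace is to print the label of the 
pseudo device and confirm slice 0 now spans the LUN (the raw-device path here is 
an assumption based on the names above):

prtvtoc /dev/rdsk/emcpower0a    # the "a" at the end is slice 0, which is what zpool is handed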


Anyway, thanks to everyone for the help; now that replace should work... I am going 
to try it now.


Chris



On Fri, 1 Jun 2007, Will Murnane wrote:


On 5/31/07, Krzys [EMAIL PROTECTED] wrote:

so I run the replace command and I get an error:
bash-3.00# zpool replace mypool c1t2d0 emcpower0a
cannot replace c1t2d0 with emcpower0a: device is too small

Try zpool attach mypool emcpower0a; see
http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qrt?a=view .

Will




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs migration

2007-06-01 Thread Krzys
Yeah, it does something funky that I did not expect: zpool seems to be taking 
slice 0 of that emc lun rather than taking the whole device...



So when I did create that lun, I formatted the disk and it looked like this:
format verify

Primary label contents:

Volume name = 
ascii name  = DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16
pcyl= 51200
ncyl= 51198
acyl=2
nhead   =  256
nsect   =   16
Part  TagFlag Cylinders SizeBlocks
  0   rootwm   0 -63  128.00MB(64/0/0)   262144
  1   swapwu  64 -   127  128.00MB(64/0/0)   262144
  2 backupwu   0 - 51197  100.00GB(51198/0/0) 209707008
  3 unassignedwm   00 (0/0/0) 0
  4 unassignedwm   00 (0/0/0) 0
  5 unassignedwm   00 (0/0/0) 0
  6usrwm 128 - 51197   99.75GB(51070/0/0) 209182720
  7 unassignedwm   00 (0/0/0) 0

That is the reason why, when I was trying to replace the other disk, zpool took 
slice 0 of that disk, which was 128mb, and treated it as the pool rather than taking 
the whole disk or slice 2 or whatever it does with normal devices... I have that 
system connected to an EMC CLARiiON and I am using powerpath software from emc to do 
multipathing and such... ehh.. I will try to replace that old internal 
disk with this device and let's see how that will work.


thanks so much for help.

Chris


On Fri, 1 Jun 2007, Will Murnane wrote:


On 6/1/07, Krzys [EMAIL PROTECTED] wrote:

bash-3.00# zpool list
NAMESIZEUSED   AVAILCAP  HEALTH ALTROOT
mypool   68G   53.1G   14.9G78%  ONLINE -
mypool2 123M   83.5K123M 0%  ONLINE -

Are you sure you've allocated as large a LUN as you thought initially?
Perhaps ZFS is doing something funky with it; does putting UFS on it
show a large filesystem or a small one?

Will




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs migration

2007-06-01 Thread Krzys

Ok, now it seems to be doing what I wanted:
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu May 31 23:01:09 2007
config:

NAMESTATE READ WRITE CKSUM
mypool  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
c1t3d0  ONLINE   0 0 0

errors: No known data errors
bash-3.00# zpool detach mypool c1t3d0
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu May 31 23:01:09 2007
config:

NAMESTATE READ WRITE CKSUM
mypool  ONLINE   0 0 0
  c1t2d0ONLINE   0 0 0

errors: No known data errors
bash-3.00# zpool replace mypool c1t2d0 emcpower0a
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 0.00% done, 17h50m to go
config:

NAMESTATE READ WRITE CKSUM
mypool  ONLINE   0 0 0
  replacing ONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
emcpower0a  ONLINE   0 0 0

errors: No known data errors
bash-3.00#


Thank you to everyone who helped me with this...

Chris






On Fri, 1 Jun 2007, Will Murnane wrote:


On 6/1/07, Krzys [EMAIL PROTECTED] wrote:

bash-3.00# zpool list
NAMESIZEUSED   AVAILCAP  HEALTH ALTROOT
mypool   68G   53.1G   14.9G78%  ONLINE -
mypool2 123M   83.5K123M 0%  ONLINE -

Are you sure you've allocated as large a LUN as you thought initially?
Perhaps ZFS is doing something funky with it; does putting UFS on it
show a large filesystem or a small one?

Will




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs migration

2007-05-31 Thread Krzys
Sorry to bother you, but something is not clear to me regarding this process. 
Ok, let's say I have two internal disks (73gb each) and I am mirroring them... Now I 
want to replace those two mirrored disks with one LUN that is on the SAN and is 
around 100gb. I do meet the requirement of having more than 73gb of storage, 
but do I need only something like 73gb at minimum, or do I actually need two luns 
of 73gb or more since I have it mirrored?


My goal is simply to move the data off the two mirrored disks onto one single SAN 
device... Any idea if what I am planning to do is doable? Or do I need to use 
zfs send and receive, just update everything, and switch when I am done?


Or do I just add this SAN disk to the existing pool and then remove the mirror 
somehow? I would just have to make sure that all the data is off that disk... is 
there any option to evacuate the data off that mirror?



here is what I exactly have:
bash-3.00# zpool list
NAMESIZEUSED   AVAILCAP  HEALTH ALTROOT
mypool   68G   52.9G   15.1G77%  ONLINE -
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
mypool  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
c1t3d0  ONLINE   0 0 0

errors: No known data errors
bash-3.00#


On Tue, 29 May 2007, Cyril Plisko wrote:


On 5/29/07, Krzys [EMAIL PROTECTED] wrote:

Hello folks, I have a question. Currently I have zfs pool (mirror) on two
internal disks... I wanted to connect that server to SAN, then add more 
storage
to this pool (double the space) then start using it. Then what I wanted to 
do is
just take out the internal disks out of that pool and use SAN only. Is 
there any
way to do that with zfs pools? Is there any way to move data from those 
internal

disks to external disks?


You can zpool replace your disks with other disks, provided that you have the
same number of new disks and they are of the same or greater size.


--
Regards,
  Cyril




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs migration

2007-05-31 Thread Krzys
Hmm, I am having some problems. I did follow what you suggested and here is what 
I did:


bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
mypool  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
c1t3d0  ONLINE   0 0 0

errors: No known data errors
bash-3.00# zpool detach mypool c1t3d0
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
mypool  ONLINE   0 0 0
  c1t2d0ONLINE   0 0 0

errors: No known data errors


So now I have only one disk in my pool... Now, the c1t2d0 disk is a 72gb SAS 
drive. I am trying to replace it with a 100GB SAN LUN (emcpower0a).




bash-3.00# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c1t0d0 SUN72G cyl 14087 alt 2 hd 24 sec 424
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL 
PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   1. c1t1d0 SUN72G cyl 14087 alt 2 hd 24 sec 424
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL 
PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   2. c1t2d0 SEAGATE-ST973401LSUN72G-0556-68.37GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL 
PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   3. c1t3d0 FUJITSU-MAY2073RCSUN72G-0501-68.37GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL 
PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   4. c2t5006016041E035A4d0 DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16
  /[EMAIL PROTECTED],70/[EMAIL PROTECTED]/SUNW,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0
   5. c2t5006016941E035A4d0 DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16
  /[EMAIL PROTECTED],70/[EMAIL PROTECTED]/SUNW,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0
   6. c3t5006016841E035A4d0 DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16
  /[EMAIL PROTECTED],70/[EMAIL PROTECTED],2/SUNW,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0
   7. c3t5006016141E035A4d0 DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16
  /[EMAIL PROTECTED],70/[EMAIL PROTECTED],2/SUNW,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0
   8. emcpower0a DGC-RAID5-0324 cyl 51198 alt 2 hd 256 sec 16
  /pseudo/[EMAIL PROTECTED]
Specify disk (enter its number): ^D


So I run the replace command and I get an error:
bash-3.00# zpool replace mypool c1t2d0 emcpower0a
cannot replace c1t2d0 with emcpower0a: device is too small

Any idea what I am doing wrong? Why does it think that emcpower0a is too small?

Regards,

Chris




On Thu, 31 May 2007, Richard Elling wrote:


Krzys wrote:
Sorry to bother you but something is not clear to me regarding this 
process.. Ok, lets sat I have two internal disks (73gb each) and I am 
mirror them... now I want to replace those two mirrored disks into one LUN 
that is on SAN and it is around 100gb. Now I do meet one requirement of 
having more than 73gb of storage but do I need only something like 73gb at 
minimum or do I actually need two luns of 73gb or more since I have it 
mirrored?


You can attach any number of devices to a mirror.

You can detach all but one of the devices from a mirror.  Obviously, when
the number is one, you don't currently have a mirror.

The resulting logical size will be equivalent to the smallest device.

My goal is simple to move data of two mirrored disks into one single SAN 
device... Any ideas if what I am planning to do is duable? or do I need to 
use zfs send and receive and just update everything and switch when I am 
done?


or do I just add this SAN disk to the existing pool and then remove mirror 
somehow? I would just have to make sure that all data is off that disk... 
is there any option to evacuate data off that mirror?


The ZFS terminology is attach and detach.  A replace is an attach
followed by a detach.

It is a good idea to verify that the sync has completed before detaching.
zpool status will show the current status.
-- richard
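
Put together for this case, the sequence looks roughly like this (a sketch; the 
LUN name is only an example, and it assumes slice 0 of the LUN spans the whole 
device, otherwise the attach fails with the same "device is too small" error 
seen later in this thread):

zpool attach mypool c1t2d0 emcpower0a   # add the SAN LUN as another side of the mirror
zpool status mypool                     # wait here until the resilver shows as completed
zpool detach mypool c1t3d0              # then drop both internal disks,
zpool detach mypool c1t2d0              # leaving the pool on the SAN device alone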




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs migration

2007-05-29 Thread Krzys
Hello folks, I have a question. Currently I have zfs pool (mirror) on two 
internal disks... I wanted to connect that server to SAN, then add more storage 
to this pool (double the space) then start using it. Then what I wanted to do is 
just take out the internal disks out of that pool and use SAN only. Is there any 
way to do that with zfs pools? Is there any way to move data from those internal 
disks to external disks?


I mean there are ways around it, I know I can make new pool, create snap on old 
and then send it over then when I am done just bring zone down make incremental 
sync and then switch that zone to use new pool, but I wanted to do it while I 
have everything up.. so my goal was to add another disk (SAN) disk to my 
existing two disks mirrored pool, then move data while I have everything running 
from one internal disks to SAN and then just take those internal disks out...


Any comments or suggestions greatly appreciated.

Regards,

Chris

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs migration

2007-05-29 Thread Krzys

Perfect, I will try to play with that...

Regards,

Chris


On Tue, 29 May 2007, Cyril Plisko wrote:


On 5/29/07, Krzys [EMAIL PROTECTED] wrote:

Hello folks, I have a question. Currently I have zfs pool (mirror) on two
internal disks... I wanted to connect that server to SAN, then add more 
storage
to this pool (double the space) then start using it. Then what I wanted to 
do is
just take out the internal disks out of that pool and use SAN only. Is 
there any
way to do that with zfs pools? Is there any way to move data from those 
internal

disks to external disks?


You can zpool replace your disks with other disks, provided that you have the
same number of new disks and they are of the same or greater size.


--
Regards,
  Cyril




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Making 'zfs destroy' safer

2007-05-18 Thread Krzys


Hey, that's nothing: I had one zfs file system, then I cloned it, so I
thought that I had two separate file systems. Then I was making snaps
of both of them. Later on I decided I did not need the original file
system with its snaps, so I recursively removed it. All of a sudden
I got a message that the clone file system is mounted and cannot be
removed; my heart stopped for a second, as that clone was a file
system I was using. I suspect that I had not promoted the cloned file
system to be completely stand-alone, and I had no idea that
was the case... but it did scare me how easily I could lose a file
system by removing the wrong thing.
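
For anyone hitting the same thing: promoting the clone first makes it independent 
of the origin snapshot, after which the original file system and its snaps can be 
destroyed safely. A sketch with placeholder names:

zfs promote mypool/myclone        # the clone takes over the snapshots it was built from
zfs destroy -r mypool/original    # now the old file system (and its remaining snaps) can go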

Chris

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive question

2007-04-16 Thread Krzys
[18:19:00] [EMAIL PROTECTED]: /root  zfs send -i mypool/[EMAIL PROTECTED] mypool/[EMAIL PROTECTED] | 
zfs receive -F mypool2/[EMAIL PROTECTED]

invalid option 'F'
usage:
receive [-vn] filesystem|volume|snapshot
receive [-vn] -d filesystem

For the property list, run: zfs set|get

It does not seem to work, unless I am doing it incorrectly.

Chris

On Tue, 17 Apr 2007, Nicholas Lee wrote:


On 4/17/07, Krzys [EMAIL PROTECTED] wrote:



and when I did try to run that last command I got the following error:
[16:26:00] [EMAIL PROTECTED]: /root  zfs send -i mypool/[EMAIL PROTECTED]
mypool/[EMAIL PROTECTED] |
zfs receive mypool2/[EMAIL PROTECTED]
cannot receive: destination has been modified since most recent snapshot

is there any way to do such replication by zfs send/receive and avoid such an
error message? Is there any way to force the file system not to be mounted? Is
there any way to make it maybe a read-only partition and then, when it's needed,
maybe make it live or whatever?



Check the -F option to zfs receive. This automatically rolls back the
target.
Nicholas
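
On a release without receive -F, much the same effect can be had by hand: roll the 
target back to its newest snapshot before applying the incremental, and keep the 
target read-only so nothing modifies it in between. A sketch with placeholder 
dataset and snapshot names:

zfs set readonly=on mypool2/d                               # stop stray writes to the copy
zfs rollback mypool2/d@snap1                                # discard anything since the last received snapshot
zfs send -i snap1 mypool/d@snap2 | zfs receive mypool2/d    # the incremental should now apply cleanly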


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive question

2007-04-16 Thread Krzys
Ah, ok, not a problem. Do you know, Cindy, when the next Solaris update is going to be 
released by Sun? Yes, I am running U3 at this moment.


Regards,

Chris

On Mon, 16 Apr 2007, [EMAIL PROTECTED] wrote:


Chris,

Looks like you're not running a Solaris release that contains
the zfs receive -F option. This option is in current Solaris community
release, build 48.

http://docs.sun.com/app/docs/doc/817-2271/6mhupg6f1?a=view#gdsup

Otherwise, you'll have to wait until an upcoming Solaris 10 release.

Cindy

Krzys wrote:
[18:19:00] [EMAIL PROTECTED]: /root  zfs send -i mypool/[EMAIL PROTECTED] 
mypool/[EMAIL PROTECTED] | zfs receive -F mypool2/[EMAIL PROTECTED]

invalid option 'F'
usage:
 receive [-vn] filesystem|volume|snapshot
 receive [-vn] -d filesystem

For the property list, run: zfs set|get

It does not seem to work, unless I am doing it incorrectly.

Chris

On Tue, 17 Apr 2007, Nicholas Lee wrote:


On 4/17/07, Krzys [EMAIL PROTECTED] wrote:




and when I did try to run that last command I got the following error:
[16:26:00] [EMAIL PROTECTED]: /root  zfs send -i mypool/[EMAIL PROTECTED]
mypool/[EMAIL PROTECTED] |
zfs receive mypool2/[EMAIL PROTECTED]
cannot receive: destination has been modified since most recent snapshot

is there any way to do such replication by zfs send/receive and avoid such an
error message? Is there any way to force the file system not to be mounted? Is
there any way to make it maybe a read-only partition and then, when it's needed,
maybe make it live or whatever?




Check the -F option to zfs receive. This automatically rolls back the
target.
Nicholas


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] crashed remote system trying to do zfs send / receive

2007-04-16 Thread Krzys

Ah, perfect then... Thank you so much for letting me know...

Regards,

Chris


On Tue, 17 Apr 2007, Robert Milkowski wrote:


Hello Krzys,

Sunday, April 15, 2007, 4:53:43 AM, you wrote:

K Strange thing, I did try to do zfs send/receive using zfs.

K On the from host I did the following:


K bash-3.00# zfs send mypool/zones/[EMAIL PROTECTED] | ssh 10.0.2.79 zfs 
receive
K mypool/zones/[EMAIL PROTECTED]
K Password:
K ^CKilled by signal 2.

K 1 or 2 minutes later I did break this command and I wanted to time it so I 
did
K change command and reissued it.

K bash-3.00# time zfs send mypool/zones/[EMAIL PROTECTED] | ssh 10.0.2.79 zfs
K receive mypool/zones/[EMAIL PROTECTED]
K Password:
K ^CKilled by signal 2.


K real0m7.346s
K user0m0.220s
K sys 0m0.036s
K bash-3.00#

K Right after this I got on remote server kernel panic and here is the output:

K [22:35:30] @zglobix1: /root 
K panic[cpu1]/thread=30001334380: dangling dbufs (dn=6000a13eba0,
K dbuf=60007d927e8)


K 02a1004f9030 zfs:dnode_evict_dbufs+19c (6000a13eba0, 1, 7b64e800,
K 6000a9e34b0, 1, 2a1004f90e8)
K%l0-3: 06000a13edb0   06000a13edb8
K%l4-7: 02a1004f90e8 060007be3910 0003 0001
K 02a1004f9230 zfs:dmu_objset_evict_dbufs+d8 (21, 0, 0, 7b648400, 
6000a9e32c0,
K 6000a9e32c0)
K%l0-3: 060002738f89 000f 060007fef8f0 06000a9e34a0
K%l4-7: 06000a13eba0 06000a9e3398 0001 7b6485e7
K 02a1004f92e0 zfs:dmu_objset_evict+b4 (60007138900, 6000a9e32c0, 180e580,
K 7b60a800, 7b64e400, 7b64e400)
K%l0-3:  0180c000 0005 
K%l4-7: 060007b9f600 7b60a800 7b64e400 
K 02a1004f93a0 zfs:dsl_dataset_evict+34 (60007138900, 7b60af7c, 18364c0,
K 60001ac90c0, 6000a9e32c0, 60007b9f600)
K%l0-3:   02a10001fcc0 02a10001fcc0
K%l4-7: 030b1b80  01ac8d1a 018a8400
K 02a1004f9450 zfs:dbuf_evict_user+48 (60007138908, 60007b9f600, 
60008666cd0,
K 0, 0, 60008666be8)
K%l0-3:  060007138900 0013 
K%l4-7: 03000107c000 018ade70 0bc0 7b612fa4
K 02a1004f9500 zfs:zfsctl_ops_root+b184ac4 (60008666c40, 60008666be8,
K 70478000, 3, 3, 0)
K%l0-3: 060001ac90c0 000f 0600071389a0 
K%l4-7:   0001 70478018
K 02a1004f95b0 zfs:dmu_recvbackup+8e8 (60006f32d00, 60006f32fd8, 
60006f32e30,
K 1, 60006ad5fa8, 0)
K%l0-3: 060006f32d15 060007138900 7b607c00 7b648000
K%l4-7: 0040 0354 0001 0138
K 02a1004f9780 zfs:zfs_ioc_recvbackup+38 (60006f32000, 0, 0, 0, 9, 0)
K%l0-3: 0004  006d 
K%l4-7:  060006f3200c  0073
K 02a1004f9830 zfs:zfsdev_ioctl+160 (70478c00, 5d, ffbfebc8, 1f, 7c, 1000)
K%l0-3: 060006f32000   007c
K%l4-7: 7b63b688 70479248 02e8 70478f60
K 02a1004f98e0 genunix:fop_ioctl+20 (60006c8fd00, 5a1f, ffbfebc8, 13,
K 600067d34b0, 12066d4)
K%l0-3: 0600064da200 0600064da200 0003 060006cc6fd8
K%l4-7: ff342036 ff345c7c  018a9400
K 02a1004f9990 genunix:ioctl+184 (3, 60006c03688, ffbfebc8, 6500, ff00,
K 5a1f)
K%l0-3:   0004 c1ac
K%l4-7: 0001   

K syncing file systems... 2 1 done
K dumping to /dev/dsk/c1t1d0s1, offset 3436642304, content: kernel
K   94% done
K SC Alert: Failed to send email alert to the primary mailserver.

K SC Alert: Failed to send email alert for recent event.
K 100% done: 71483 pages dumped, compression ratio 5.34, dump succeeded
K rebooting...

K SC Alert: Host System has Reset
K Probing system devices
K Probing memory
K Probing I/O buses

K Sun Fire V240, No Keyboard
K Copyright 1998-2004 Sun Microsystems, Inc.  All rights reserved.
K OpenBoot 4.16.2, 16384 MB memory installed, Serial #63395381.
K Ethernet address 0:3:ba:c7:56:35, Host ID: 83c75635.



K Initializing  2048MB of memory at addr10 \
K SC Alert: Failed to send email alert for recent event.
K Rebooting with command: boot
K Boot device: disk1  File and args:
K SunOS Release 5.10 Version Generic_118833-36 64-bit
K Copyright 1983-2006 Sun Microsystems, Inc.  All rights reserved.
K Use is subject to license terms.
K /
K SC Alert: Failed to send email alert for recent event.
K Hardware watchdog enabled
K Hostname: zglobix1
K checking ufs filesystems
K /dev/rdsk/c1t1d0s7: is logging.
K Failed

[zfs-discuss] zfs snaps and removing some files

2007-04-14 Thread Krzys

Hello folks, I have a strange and unusual request...

I have two 300gig drives mirrored:
[11:33:22] [EMAIL PROTECTED]: /d/d2  zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
mypool  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
c1t3d0  ONLINE   0 0 0

errors: No known data errors


It gives me a total of:
[11:32:55] [EMAIL PROTECTED]: /d/d2  zpool list

NAMESIZEUSED   AVAILCAP  HEALTH ALTROOT
mypool  278G271G   6.75G97%  ONLINE -


I am using around 150 gig of the 278 gig that I have, and the disk is 99% full:
[11:33:58] [EMAIL PROTECTED]: /d/d2  df -k .
Filesystem   1k-blocks  Used Available Use% Mounted on
mypool/d 152348400 149829144   2519256  99% /d/d2


I have been taking snaps since last year. I removed month 11 already 
because I did not have any space left.


[11:32:52] [EMAIL PROTECTED]: /d/d2  zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
mypool 271G  2.40G  24.5K  /mypool
mypool/d   271G  2.40G   143G  /d/d2
mypool/[EMAIL PROTECTED] 3.72G  -   123G  -
mypool/[EMAIL PROTECTED] 22.3G  -   156G  -
mypool/[EMAIL PROTECTED] 23.3G  -   161G  -
mypool/[EMAIL PROTECTED] 16.1G  -   172G  -
mypool/[EMAIL PROTECTED] 13.8G  -   168G  -
mypool/[EMAIL PROTECTED] 15.7G  -   168G  -
mypool/[EMAIL PROTECTED]185M  -   143G  -

Anyway, among the snaps I have there are certain files that are a few gigs in 
size. I did go into the snaps and tried to remove them, but I got this 
message:
[11:42:50] [EMAIL PROTECTED]: /d/d2/.zfs/snapshot/month_01  rm 
studio11-sol-sparc.tar

rm: remove write-protected file `studio11-sol-sparc.tar'? y
rm: cannot unlink `studio11-sol-sparc.tar': Read-only file system
[11:43:01] [EMAIL PROTECTED]: /d/d2/.zfs/snapshot/month_01 
[11:43:03] [EMAIL PROTECTED]: /d/d2/.zfs/snapshot/month_01  ls -la 
studio11-sol-sparc.tar

-rw-rw-r--1 root root 1123425280 Jan 25  2006 studio11-sol-sparc.tar
[11:43:16] [EMAIL PROTECTED]: /d/d2/.zfs/snapshot/month_01 

Is there a way to mount the file system read/write and remove those 
big files that I don't need there?


I would love to keep those snaps for months... I know, the easy suggestion would be 
to add more disk space and never worry about it, but in my situation I 
have two internal 300gb disks and I cannot add more internal drives to 
that system, so I'd rather use what I have...
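
Snapshots are read-only by design, so the file cannot be deleted from inside one; 
the space only comes back once every snapshot still referencing it is destroyed. 
A sketch with the names from above (a clone gives a writable view if the data is 
still wanted):

zfs destroy mypool/d@month_01               # frees whatever only that snapshot was holding
zfs clone mypool/d@month_02 mypool/d_rw     # writable copy of an old snapshot, shares unchanged blocks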


Thanks for any help.

Chris


___
zfs-discuss mailing list
[EMAIL PROTECTED]
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] lost zfs mirror, need some help to recover

2007-03-28 Thread Krzys


Hello folks, I have a small problem, originally I had this setup:
[16:39:40] @zglobix1: /root  zpool status -x
  pool: mypool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: resilver completed with 0 errors on Wed Mar 28 16:39:23 2007
config:

NAMESTATE READ WRITE CKSUM
mypool  DEGRADED 0 0 0
  mirrorDEGRADED 0 0 0
c1t2d0  ONLINE   0 0 0
c1t3d0  UNAVAIL  0 0 0  cannot open

errors: No known data errors


So I was trying to get that device back online and working, and I messed it up...

I did run:
zpool detach mypool c1t3d0

and now I lost my mirroring

[16:40:14] @zglobix1: /root  zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed with 0 errors on Wed Mar 28 16:39:23 2007
config:

NAMESTATE READ WRITE CKSUM
mypool  ONLINE   0 0 0
  c1t2d0ONLINE   0 0 0

errors: No known data errors

is there any way to get my mirror back on that pool?

I have a V240 with Solaris 10 U3 and it is limited in how many disks I can have in 
that system, so I was unable to do a replace or anything. I did try to reboot the 
system and tried to get the mirroring back in place, but for 
some reason I was unable to do it.


Anyway I would appreciate your help.

Chris

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] lost zfs mirror, need some help to recover

2007-03-28 Thread Krzys
Awesome, that worked great for me... I did not know I had to put c1t2d0 in 
there... but hey, it works and that is all that matters. Thank you so very much.


Chris


[19:58:24] @zglobix1: /root  zpool attach -f mypool c1t2d0 c1t3d0
[19:58:33] @zglobix1: /root  zpool list
NAMESIZEUSED   AVAILCAP  HEALTH ALTROOT
mypool  278G   48.5G230G17%  ONLINE -
[19:58:37] @zglobix1: /root  zpool status
  pool: mypool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 0.03% done, 6h59m to go
config:

NAMESTATE READ WRITE CKSUM
mypool  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
c1t3d0  ONLINE   0 0 0

errors: No known data errors
[19:58:47] @zglobix1: /root  zpool scrub mypool
cannot scrub mypool: currently resilvering
[19:58:59] @zglobix1: /root 
[19:59:03] @zglobix1: /root 
[19:59:04] @zglobix1: /root  zpool status -x
  pool: mypool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 0.11% done, 8h39m to go
config:

NAMESTATE READ WRITE CKSUM
mypool  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
c1t3d0  ONLINE   0 0 0

errors: No known data errors
[19:59:08] @zglobix1: /root 




On Wed, 28 Mar 2007, Robert Milkowski wrote:


Hello Krzys,

Wednesday, March 28, 2007, 10:58:40 PM, you wrote:

K Hello folks, I have a small problem, originally I had this setup:
K [16:39:40] @zglobix1: /root  zpool status -x
Kpool: mypool
K   state: DEGRADED
K status: One or more devices could not be opened.  Sufficient replicas exist 
for
K  the pool to continue functioning in a degraded state.
K action: Attach the missing device and online it using 'zpool online'.
K see: http://www.sun.com/msg/ZFS-8000-D3
K   scrub: resilver completed with 0 errors on Wed Mar 28 16:39:23 2007
K config:

K  NAMESTATE READ WRITE CKSUM
K  mypool  DEGRADED 0 0 0
KmirrorDEGRADED 0 0 0
K  c1t2d0  ONLINE   0 0 0
K  c1t3d0  UNAVAIL  0 0 0  cannot open

K errors: No known data errors


K so I was trying to get that device back online working and I did mess it 
up...

K I did run:
K zpool detach mypool c1t3d0

K and now I lost my mirroring

K [16:40:14] @zglobix1: /root  zpool status
Kpool: mypool
K   state: ONLINE
K   scrub: resilver completed with 0 errors on Wed Mar 28 16:39:23 2007
K config:

K  NAMESTATE READ WRITE CKSUM
K  mypool  ONLINE   0 0 0
Kc1t2d0ONLINE   0 0 0

K errors: No known data errors

K is there any way to get my mirror back on that pool?

K I have V240 with Solaris 10 U3 and its limited on how many disks I can have 
in
K that system so I was unable to do replace or anything, I did try to reboot
K system and did try tog et it working to have mirroring back in place but for
K some reason was unable to do it.

K Anyway I would appreciate your help.

First check if c1t3d0 is ok (format c1t3d0) if it is then issue:

 zpool attach mypool c1t2d0 c1t3d0

And you'll get you mirror back :)


--
Best regards,
Robertmailto:[EMAIL PROTECTED]
  http://milek.blogspot.com


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs crashing

2007-01-31 Thread Krzys


I have Solaris 10 U2 with all the latest patches that has started to crash recently
on a regular basis... so I started to dig in to see what is causing it, and here is
what I found out:


panic[cpu1]/thread=2a1009a7cc0: really out of space

02a1009a6d70 zfs:zio_write_allocate_gang_members+33c (60017964140, 400, 200, 
3, 7b62f380, 7b64d400)

  %l0-3: 060017964158  0003 
  %l4-7: 03000ff13200 03000ff13000  0001
02a1009a6e70 zfs:zio_write_allocate_gang_members+2dc (600179643c0, 800, 200, 
400, 7b62f380, 7b64d400)

  %l0-3: 0600179643d8  0003 0400
  %l4-7: 03000ff13000 03001666d500  0001
02a1009a6f70 zfs:zio_write_allocate_gang_members+2dc (600167e1740, 800, 400, 
800, 7b62f380, 7b64d400)

  %l0-3: 0600167e1758  0001 1a00
  %l4-7: 03001666d500 03001674b800  0001
02a1009a7070 zfs:zio_write_allocate_gang_members+2dc (600167e19c0, 4e00, 
1400, 1a00, 7b62f380, 7b64d400)

  %l0-3: 0600167e19d8  0003 1a00
  %l4-7: 03001674b800 03001557db00  0001
02a1009a7170 zfs:zio_write_allocate_gang_members+2dc (6000b690d80, 4e00, 
3c00, 4e00, 7b62f380, 7b64d400)

  %l0-3: 06000b690d98  0001 ee00
  %l4-7: 03001557db00 030023477000  0001
02a1009a7270 zfs:zio_write_compress+1ec (6000b690d80, 23e20b, 23e000, 
ff00ff, 3, 30023477000)

  %l0-3:  0076 0077 ee00
  %l4-7:  00ff fc00 00ff
02a1009a7340 zfs:arc_write+e4 (6000b690d80, 6ebee00, 6, 3, 1, d02188b)
  %l0-3:  7b605eec 03002702a328 030020863c08
  %l4-7: 02a1009a7548 0004 0004 030006b703a0
02a1009a7450 zfs:zfsctl_ops_root+b223eac (3002702a328, 600085451c0, 12f956, 
3, 6, d02188b)

  %l0-3: 060001dccb00  06000b514658 03002702a448
  %l4-7: 030023477000 0013 0d60 
02a1009a7570 zfs:dnode_sync+35c (0, 0, 600085451c0, 600162ed7d8, 0, 3)
  %l0-3: 03002702a328 06000b5146b0 06000b514770 06000b514710
  %l4-7: 0060 06000b5146b3 000c 03002703d7a0
02a1009a7630 zfs:dmu_objset_sync_dnodes+6c (60001dccb00, 60001dccc40, 
600162ed7d8, 6000b514658, 0, 0)

  %l0-3: 703e0438 703e 703e 0001
  %l4-7:  703dc000  0600085451c0
02a1009a76e0 zfs:dmu_objset_sync+54 (60001dccb00, 600162ed7d8, 3, 3, 
3002331e8f8, d02188b)

  %l0-3: 00bb 000f  00410d60
  %l4-7: 060001dccc40 0060 060001dccbe0 060001dccc60
02a1009a77f0 zfs:dsl_dataset_sync+c (60008aefa40, 600162ed7d8, 60008aefad0, 
30ad278, 30ad278, 60008aefa40)

  %l0-3: 0001 0007 030ad2f8 0003
  %l4-7: 060008aefac8   
02a1009a78a0 zfs:dsl_pool_sync+104 (30ad1c0, d02188b, 60008aefa40, 
30005fc8148, 60001dcc880, 60001dcc8a8)

  %l0-3:  06ebf1b8 0006 0600162ed7d8
  %l4-7: 030ad328 030ad2f8 030ad268 
02a1009a7950 zfs:spa_sync+e4 (6ebee00, d02188b, 60001dcc8a8, 
600162ed7d8, 6ebef78, 2a1009a7cbc)

  %l0-3: 0d021889 06ebef40 06ebef08 
  %l4-7: 06e4a080 030ad1c0 01e0 
02a1009a7a00 zfs:txg_sync_thread+134 (30ad1c0, d02188b, 0, 2a1009a7ab0, 
30ad2d0, 30ad2d2)

  %l0-3: 030ad2e0 030ad290  030ad298
  %l4-7: 030ad2d6 030ad2d4 030ad288 0d021848

syncing file systems... 970 913 893 893 893 893 893 893 893 893 893 893 893 893 
893

SC Alert: Failed to send email alert for recent event.
 893 893 893 893 893 893 893 893 done (not all i/o completed)
dumping to /dev/dsk/c1t0d0s1, offset 3436969984, content: kernel
 59% done
SC Alert: SC Request to XIR Host due to Watchdog
ERROR: Externally Initiated Reset has occurred.

panic[cpu1]/thread=2a1009a7cc0: sync initiated
dump aborted: please record the above information!
rebooting...

SC Alert: Host System has Reset

XIR/Watchdog Reset
Executing Power On Self Test
0



Any help? I do ZFS file system snaps and I am pretty low on disk space at the
moment, but I still have about 2 GB free:


mypool/d 182163718 180277654   1886065  99% /d/d2
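
Given that the filesystem shown above is at 99%, a hedged sketch of how to see which
snapshots are holding space and reclaim some (the snapshot name below is purely
illustrative):

# zfs list -r -t snapshot mypool
# zfs destroy mypool/d@some-old-snap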

Thanks for help.

Chris

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org

Re: [zfs-discuss] zfs crashing

2007-01-31 Thread Krzys

I guess I need to upgrade this system then... thanks for info...

Chris



On Thu, 1 Feb 2007, James C. McPherson wrote:


Krzys wrote:


I have Solaris 10 U2 with all the latest patches that started to crash 
recently on regular basis... so I started to dig and see what is causing it 
and here is what I found out:


panic[cpu1]/thread=2a1009a7cc0: really out of space

02a1009a6d70 zfs:zio_write_allocate_gang_members+33c (60017964140, 

...

Any help? I do zfs file system snaps and I am pretty low on disk space at 
this moment but I still have like 2 gig free



this is bug 6452923 which is fixed in Solaris 10 update 3.



James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] weird thing with zfs

2006-12-06 Thread Krzys

Thanks so much... anyway, the resilvering worked its way through and I got everything resolved:
zpool status -v
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
mypool  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
c1t3d0  ONLINE   0 0 0

errors: No known data errors

  pool: mypool2
 state: ONLINE
 scrub: resilver completed with 0 errors on Tue Dec  5 13:48:31 2006
config:

NAMESTATE READ WRITE CKSUM
mypool2 ONLINE   0 0 0
  raidz ONLINE   0 0 0
c3t0d0  ONLINE   0 0 0
c3t1d0  ONLINE   0 0 0
c3t2d0  ONLINE   0 0 0
c3t3d0  ONLINE   0 0 0
c3t4d0  ONLINE   0 0 0
c3t5d0  ONLINE   0 0 0
c3t6d0  ONLINE   0 0 0

errors: No known data errors

I did not change any cables or anything, just rebooted... I will look into
replacing the cables (those are the short SCSI cables). Anyway, this is so weird, and
the original disk that I replaced seems to be good as well... it must be a
connectivity problem... but what's weird is that I had it running for months without
problems...
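
A hedged sanity check at this point, given the cabling suspicion, would be to scrub
the pool and re-check its status once the scrub completes (same pool name as above):

# zpool scrub mypool2
# zpool status -v mypool2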


Regards and thanks to all for help.

Chris



On Tue, 5 Dec 2006, Richard Elling wrote:


BTW, there is a way to check what the SCSI negotiations resolved to.
I wrote about it once in a BluePrint
http://www.sun.com/blueprints/0500/sysperfnc.pdf
See page 11
-- richard

Richard Elling wrote:

This looks more like a cabling or connector problem.  When that happens
you should see parity errors and transfer rate negotiations.
 -- richard

Krzys wrote:

Ok, so here is an update

I did restart my system: I powered it off and powered it on. Here is a screen
capture of my boot. I certainly do have some hard drive issues and will
need to take a look at them... But I got my disk back, visible to the
system, and ZFS is doing the resilvering again.


Rebooting with command: boot
Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a  
File and args:
SunOS Release 5.10 Version Generic_118833-24 64-bit
Copyright 1983-2006 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Hardware watchdog enabled
Hostname: chrysek
WARNING: /[EMAIL PROTECTED],60/[EMAIL PROTECTED] (glm2):
SCSI bus DATA IN phase parity error
WARNING: /[EMAIL PROTECTED],60/[EMAIL PROTECTED] (glm2):
Target 6 reducing sync. transfer rate
WARNING: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (sd5):
Error for Command: read(10)Error Level: Retryable
Requested Block: 286732066 Error Block: 286732066
Vendor: SEAGATESerial Number: 3HY14PVS
Sense Key: Aborted Command
ASC: 0x48 (initiator detected error message received), ASCQ: 0x0, 
FRU: 0x2

WARNING: /[EMAIL PROTECTED],60/[EMAIL PROTECTED] (glm2):
SCSI bus DATA IN phase parity error
WARNING: /[EMAIL PROTECTED],60/[EMAIL PROTECTED] (glm2):
Target 3 reducing sync. transfer rate
WARNING: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (sd23):
Error for Command: read(10)Error Level: Retryable
Requested Block: 283623842 Error Block: 283623842
Vendor: SEAGATESerial Number: 3HY8HS7L
Sense Key: Aborted Command
ASC: 0x48 (initiator detected error message received), ASCQ: 0x0, 
FRU: 0x2

WARNING: /[EMAIL PROTECTED],60/[EMAIL PROTECTED] (glm2):
SCSI bus DATA IN phase parity error
WARNING: /[EMAIL PROTECTED],60/[EMAIL PROTECTED] (glm2):
Target 5 reducing sync. transfer rate
WARNING: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (sd25):
Error for Command: read(10)Error Level: Retryable
Requested Block: 283623458 Error Block: 283623458
Vendor: SEAGATESerial Number: 3HY0LF18
Sense Key: Aborted Command
ASC: 0x48 (initiator detected error message received), ASCQ: 0x0, 
FRU: 0x2

/kernel/drv/sparcv9/zpool symbol avl_add multiply defined
/kernel/drv/sparcv9/zpool symbol assfail3 multiply defined
WARNING: kstat_create('unix', 0, 'dmu_buf_impl_t'): namespace collision
mypool2/d3 uncorrectable error
checking ufs filesystems
/dev/rdsk/c1t0d0s7: is logging.

chrysek console login: VERITAS SCSA Generic Revision: 3.5c
Dec  5 13:01:38 chrysek root: CAPTURE_UPTIME ERROR: /var/opt/SUNWsrsrp 
missing
Dec  5 13:01:38 chrysek root: CAPTURE_UPTIME ERROR: /var/opt/SUNWsrsrp 
missing

Dec  5 13:01:46 chrysek VERITAS: No proxy found.
Dec  5 13:01:52 chrysek vmd[546]: ready for connections
Dec  5 13:01:53 chrysek VERITAS: No proxy found.
Dec  5 13:01:54

[zfs-discuss] weird thing with zfs

2006-12-05 Thread Krzys


OK, two weeks ago I noticed one of the disks in my zpool had problems.
I was getting Corrupt label; wrong magic number messages, and then format no longer
saw that disk (the last disk). I had that setup running for a few months and all of
a sudden the last disk failed. So I ordered another disk and had it replaced about a
week ago; I issued the replace command after the disk replacement and it had been
resilvering forever. Then I got hints from this group that snaps could be causing it,
so yesterday I disabled snaps, and this morning I noticed that the same disk I
replaced is gone... Does it seem weird that this disk would fail? It is a new disk...
I have Solaris 10 U2, 4 internal drives and then 7 external drives which are in
single enclosures connected to each other via a SCSI chain... So it seems like the
last disk is failing. Those UniPacks from Sun have self-termination so there is no
terminator at the end... Any ideas what I should do? Do I need to order another drive
and replace that one too? Or will it happen again? What do you think could be the
problem? Ah, when I look at that enclosure I do see a green light on it, so it seems
like it did not fail...


format
Searching for disks...
efi_alloc_and_init failed.
done


AVAILABLE DISK SELECTIONS:
   0. c1t0d0 SEAGATE-ST337LC-D703 cyl 45265 alt 2 hd 16 sec 809
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   1. c1t1d0 SEAGATE-ST337LC-D703 cyl 45265 alt 2 hd 16 sec 809
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   2. c1t2d0 SEAGATE-ST337LC-D703-279.40GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   3. c1t3d0 SEAGATE-ST337LC-D703-279.40GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   4. c3t0d0 SEAGATE-ST3146807LC-0007-136.73GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   5. c3t1d0 SEAGATE-ST3146807LC-0007-136.73GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   6. c3t2d0 SEAGATE-ST3146807LC-0007-136.73GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   7. c3t3d0 SEAGATE-ST3146807LC-0007-136.73GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   8. c3t4d0 SEAGATE-ST3146807LC-0007-136.73GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   9. c3t5d0 SEAGATE-ST3146807LC-0007-136.73GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
  10. c3t6d0 drive type unknown
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0



zpool status -v
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
mypool  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
c1t3d0  ONLINE   0 0 0

errors: No known data errors

  pool: mypool2
 state: DEGRADED
 scrub: resilver completed with 0 errors on Mon Dec  4 22:34:57 2006
config:

NAME  STATE READ WRITE CKSUM
mypool2   DEGRADED 0 0 0
  raidz   DEGRADED 0 0 0
c3t0d0ONLINE   0 0 0
c3t1d0ONLINE   0 0 0
c3t2d0ONLINE   0 0 0
c3t3d0ONLINE   0 0 0
c3t4d0ONLINE   0 0 0
c3t5d0ONLINE   0 0 0
replacing UNAVAIL  0   775 0  insufficient replicas
  c3t6d0s0/o  UNAVAIL  0 0 0  cannot open
  c3t6d0  UNAVAIL  0   940 0  cannot open

errors: No known data errors

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] weird thing with zfs

2006-12-05 Thread Krzys
Thanks. Ah, another weird thing is that when I run format on that drive I get
a core dump :(


format
Searching for disks...
efi_alloc_and_init failed.
done


AVAILABLE DISK SELECTIONS:
   0. c1t0d0 SEAGATE-ST337LC-D703 cyl 45265 alt 2 hd 16 sec 809
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   1. c1t1d0 SEAGATE-ST337LC-D703 cyl 45265 alt 2 hd 16 sec 809
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   2. c1t2d0 SEAGATE-ST337LC-D703-279.40GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   3. c1t3d0 SEAGATE-ST337LC-D703-279.40GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   4. c3t0d0 SEAGATE-ST3146807LC-0007-136.73GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   5. c3t1d0 SEAGATE-ST3146807LC-0007-136.73GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   6. c3t2d0 SEAGATE-ST3146807LC-0007-136.73GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   7. c3t3d0 SEAGATE-ST3146807LC-0007-136.73GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   8. c3t4d0 SEAGATE-ST3146807LC-0007-136.73GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   9. c3t5d0 SEAGATE-ST3146807LC-0007-136.73GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
  10. c3t6d0 drive type unknown
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
Specify disk (enter its number): 10

Segmentation Fault (core dumped)

:( Can't even get to the format menu on that drive...

Chris



On Tue, 5 Dec 2006, Nicholas Senedzuk wrote:


The only time that I have seen format return drive type unknown is when
the drive has failed. You may just have another bad drive and want to try
replacing it again. If that does not work, you may have another problem,
such as a bad backplane or a bad SCSI cable, assuming the drive is an
external drive. Hope that helps.
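
One hedged way to gather more evidence before deciding between a failed disk and a
bad cable or backplane is to look at the per-device error counters; iostat -En
reports soft/hard/transport error counts along with vendor and serial number (the
device name here is the one from this thread):

# iostat -En c3t6d0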




On 12/5/06, Krzys [EMAIL PROTECTED] wrote:



ok, two weeks ago I did notice one of my disk in zpool got problems.
I was getting Corrupt label; wrong magic number messages, then when I
looked
in format it did not see that disk... (last disk) I had that setup running
for
few months now and all of the sudden last disk failed. So I ordered
another
disk, had it replaced like a week ago, I did issue replace command after
disk
replacement, it was resilvering disks since forever, then I got hints from
this
group that snaps could be causing it so yesterday I did disable snaps and
this
morning I di dnotice the same disk that I replaced is gone... Does it seem
weird
that this disk would fail? Its new disk... I have Solaris 10 U2, 4
internal
drives and then 7 external drives which are in single enclousures
connected via
scsi chain to each other... So it seems like last disk is failing. Those
nipacks
from sun have self termination so there is no terminator at the end... Any
ideas
what should I do? Do I need to order another drive and replace that one
too? Or
will it happen again? What do you think could be the problem? Ah, when I
look at
that enclosure I do see green light on it so it seems like it did not
fail...

format
Searching for disks...
efi_alloc_and_init failed.
done


AVAILABLE DISK SELECTIONS:
0. c1t0d0 SEAGATE-ST337LC-D703 cyl 45265 alt 2 hd 16 sec 809
   /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
1. c1t1d0 SEAGATE-ST337LC-D703 cyl 45265 alt 2 hd 16 sec 809
   /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
2. c1t2d0 SEAGATE-ST337LC-D703-279.40GB
   /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
3. c1t3d0 SEAGATE-ST337LC-D703-279.40GB
   /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
4. c3t0d0 SEAGATE-ST3146807LC-0007-136.73GB
   /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
5. c3t1d0 SEAGATE-ST3146807LC-0007-136.73GB
   /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
6. c3t2d0 SEAGATE-ST3146807LC-0007-136.73GB
   /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
7. c3t3d0 SEAGATE-ST3146807LC-0007-136.73GB
   /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
8. c3t4d0 SEAGATE-ST3146807LC-0007-136.73GB
   /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
9. c3t5d0 SEAGATE-ST3146807LC-0007-136.73GB
   /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   10. c3t6d0 drive type unknown
   /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0



zpool status -v
   pool: mypool
  state: ONLINE
  scrub: none requested
config:

 NAMESTATE READ WRITE CKSUM
 mypool  ONLINE

Re: [zfs-discuss] weird thing with zfs

2006-12-05 Thread Krzys

[12:00:40] [EMAIL PROTECTED]: /d/d3/nb1  pstack core
core 'core' of 29506:   format -e
-  lwp# 1 / thread# 1  
 000239b8 c_disk   (51800, 52000, 4bde4, 525f4, 54e78, 0) + 4e0
 00020fb4 main (2, 0, ffbff8e8, 0, 52000, 29000) + 46c
 000141a8 _start   (0, 0, 0, 0, 0, 0) + 108
-  lwp# 2 / thread# 2  
 ff241818 _door_return (0, 0, 0, 0, fef92400, ff26cbc0) + 10
 ff0c0c30 door_create_func (0, feefc000, 0, 0, ff0c0c10, 0) + 20
 ff2400b0 _lwp_start (0, 0, 0, 0, 0, 0)
-  lwp# 3 / thread# 3  
 ff240154 __lwp_park (75e78, 75e88, 0, 0, 0, 0) + 14
 ff23a1e4 cond_wait_queue (75e78, 75e88, 0, 0, 0, 0) + 28
 ff23a764 cond_wait (75e78, 75e88, 1, 0, 0, ff26cbc0) + 10
 ff142a60 subscriber_event_handler (551d8, fedfc000, 0, 0, ff142a2c, 0) + 34
 ff2400b0 _lwp_start (0, 0, 0, 0, 0, 0)



On Tue, 5 Dec 2006, Torrey McMahon wrote:


Krzys wrote:
Thanks, ah another wird thing is that when I run format on that frive I 
get a coredump :(


Run pstack /path/to/core and send the output.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] weird thing with zfs

2006-12-05 Thread Krzys

Does not work :(

dd if=/dev/zero of=/dev/rdsk/c3t6d0s0 bs=1024k count=1024
dd: opening `/dev/rdsk/c3t6d0s0': I/O error

That is so strange... it seems like I lost another disk... I will try to reboot 
and see what I get, but I guess I need to order another disk then and give it a 
try...


Chris





On Tue, 5 Dec 2006, Al Hopper wrote:


On Tue, 5 Dec 2006, Krzys wrote:


Thanks, ah another wird thing is that when I run format on that frive I get
a coredump :(

... snip 

Try zeroing out the disk label with something like:

dd if=/dev/zero of=/dev/rdsk/c?t?d?p0  bs=1024k count=1024

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
  Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
OpenSolaris Governing Board (OGB) Member - Feb 2006


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] weird thing with zfs

2006-12-05 Thread Krzys
config:

NAME  STATE READ WRITE CKSUM
mypool2   DEGRADED 0 0 0
  raidz   DEGRADED 0 0 0
c3t0d0ONLINE   0 0 0
c3t1d0ONLINE   0 0 0
c3t2d0ONLINE   0 0 0
c3t3d0ONLINE   0 0 0
c3t4d0ONLINE   0 0 0
c3t5d0ONLINE   0 0 0
replacing DEGRADED 0 012
  c3t6d0s0/o  UNAVAIL  0 0 0  cannot open
  c3t6d0  ONLINE   0 0 0

errors: No known data errors

I do see that drive... and it is doing the resilvering.

format works too, and I don't get a core dump:

format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c1t0d0 SEAGATE-ST337LC-D703 cyl 45265 alt 2 hd 16 sec 809
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   1. c1t1d0 SEAGATE-ST337LC-D703 cyl 45265 alt 2 hd 16 sec 809
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   2. c1t2d0 SEAGATE-ST337LC-D703-279.40GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   3. c1t3d0 SEAGATE-ST337LC-D703-279.40GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   4. c3t0d0 SEAGATE-ST3146807LC-0007-136.73GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   5. c3t1d0 SEAGATE-ST3146807LC-0007-136.73GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   6. c3t2d0 SEAGATE-ST3146807LC-0007-136.73GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   7. c3t3d0 SEAGATE-ST3146807LC-0007-136.73GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   8. c3t4d0 SEAGATE-ST3146807LC-0007-136.73GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   9. c3t5d0 SEAGATE-ST3146807LC-0007-136.73GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
  10. c3t6d0 SEAGATE-ST3146807LC-0007-136.73GB
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
Specify disk (enter its number): 10
selecting c3t6d0
[disk formatted]
/dev/dsk/c3t6d0s0 is part of active ZFS pool mypool2. Please see zpool(1M).


FORMAT MENU:
disk   - select a disk
type   - select (define) a disk type
partition  - select (define) a partition table
current- describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label  - write label to the disk
analyze- surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
inquiry- show vendor, product and revision
volname- set 8-character volume name
!cmd - execute cmd, then return
quit
format verify

Volume name = 
ascii name  = SEAGATE-ST3146807LC-0007-136.73GB
bytes/sector=  512
sectors = 286749487
accessible sectors = 286749454
Part  TagFlag First Sector Size Last Sector
  0usrwm34  136.72GB  286733070
  1 unassignedwm 0   0   0
  2 unassignedwm 0   0   0
  3 unassignedwm 0   0   0
  4 unassignedwm 0   0   0
  5 unassignedwm 0   0   0
  6 unassignedwm 0   0   0
  8   reservedwm 2867330718.00MB  286749454

format q







On Tue, 5 Dec 2006, Krzys wrote:



ok, two weeks ago I did notice one of my disk in zpool got problems.
I was getting Corrupt label; wrong magic number messages, then when I 
looked in format it did not see that disk... (last disk) I had that setup 
running for few months now and all of the sudden last disk failed. So I 
ordered another disk, had it replaced like a week ago, I did issue replace 
command after disk replacement, it was resilvering disks since forever, then 
I got hints from this group that snaps could be causing it so yesterday I did 
disable snaps and this morning I di dnotice the same disk that I replaced is 
gone... Does it seem weird that this disk would fail? Its new disk... I have 
Solaris 10 U2, 4 internal drives and then 7 external drives which are in 
single enclousures connected via scsi chain to each other... So it seems like 
last disk is failing. Those nipacks from sun have self termination so there 
is no terminator at the end... Any ideas what should I do? Do I need to order 
another drive and replace that one too? Or will it happen again? What do you 
think could

Re: [zfs-discuss] replacing a drive in a raidz vdev

2006-12-04 Thread Krzys
I am having no luck replacing my drive as well. A few days ago I replaced my drive
and it's completely messed up now.


  pool: mypool2
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 8.70% done, 8h19m to go
config:

NAME  STATE READ WRITE CKSUM
mypool2   DEGRADED 0 0 0
  raidz   DEGRADED 0 0 0
c3t0d0ONLINE   0 0 0
c3t1d0ONLINE   0 0 0
c3t2d0ONLINE   0 0 0
c3t3d0ONLINE   0 0 0
c3t4d0ONLINE   0 0 0
c3t5d0ONLINE   0 0 0
replacing DEGRADED 0 0 0
  c3t6d0s0/o  UNAVAIL  0 0 0  cannot open
  c3t6d0  ONLINE   0 0 0

errors: No known data errors

This is what I get; I am running Solaris 10 U2.
Two days ago I saw it in the 2.00% range with about 10h remaining; now it is still
going and it has already been at least a few days since it started.


when I do: zpool list
NAME       SIZE    USED   AVAIL    CAP  HEALTH    ALTROOT
mypool2    952G    684G    268G    71%  DEGRADED  -

I have almost 1 TB of space.
When I do df -k it shows me only 277 GB, which is better than only displaying
the 12 GB I saw yesterday.

mypool2/d3   277900047  12022884 265877163   5% /d/d3

when I do zfs list I get:
mypool2    684G   254G    52K  /mypool2
mypool2/d  191G   254G   189G  /mypool2/d
mypool2/[EMAIL PROTECTED]   653M  -   145G  -
mypool2/[EMAIL PROTECTED]  31.2M  -   145G  -
mypool2/[EMAIL PROTECTED]  36.8M  -   144G  -
mypool2/[EMAIL PROTECTED]  37.9M  -   144G  -
mypool2/[EMAIL PROTECTED]  31.7M  -   145G  -
mypool2/[EMAIL PROTECTED]  27.7M  -   145G  -
mypool2/[EMAIL PROTECTED]  34.0M  -   146G  -
mypool2/[EMAIL PROTECTED]  26.8M  -   149G  -
mypool2/[EMAIL PROTECTED]  34.4M  -   151G  -
mypool2/[EMAIL PROTECTED]  141K  -   189G  -
mypool2/d3 492G   254G  11.5G  legacy

I am so confused by all of this... Why is it taking so long to replace that one
bad disk? Why such different results? What is going on? Is there a problem with
my zpool/zfs combination? Did I do anything wrong? Did I actually lose data on
my drive? If I had known it would be this bad I would just have destroyed the whole
zpool and zfs and started from the beginning, but I wanted to see how it would go
through a replacement to see what the process is... I am so glad I have not used ZFS
in my production environment yet, to be honest with you...
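
For what it is worth, a minimal sketch of how the replacement can be watched while it
runs (both forms appear in the zpool usage output elsewhere in this archive; the
trailing 5 is just a repeat interval in seconds):

# zpool status -v mypool2
# zpool iostat -v mypool2 5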


Chris



On Sat, 2 Dec 2006, Theo Schlossnagle wrote:

I had a disk malfunction in a raidz pool today.  I had an extra one in the 
enclosure and performed a: zpool replace pool old new and several unexpected 
behaviors have transpired:


the zpool replace command hung for 52 minutes during which no zpool 
commands could be executed (like status, iostat or list).


When it finally returned, the drive was marked as replacing as I expected 
from reading the man page.  However, its progress counter has not been 
monotonically increasing.  It started at 1% and then went to 5% and then back 
to 2%, etc. etc.


I just logged in to see if it was done and ran zpool status and received:

pool: xsr_slow_2
state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
  continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress, 100.00% done, 0h0m to go
config:

  NAME   STATE READ WRITE CKSUM
  xsr_slow_2 ONLINE   0 0 0
raidzONLINE   0 0 0
  c4t600039316A1Fd0s2ONLINE   0 0 0
  c4t600039316A1Fd1s2ONLINE   0 0 0
  c4t600039316A1Fd2s2ONLINE   0 0 0
  c4t600039316A1Fd3s2ONLINE   0 0 0
  replacing  ONLINE   0 0 0
c4t600039316A1Fd4s2  ONLINE   2.87K   251 0
c4t600039316A1Fd6ONLINE   0 0 0
  c4t600039316A1Fd5s2ONLINE   0 0 0


I thought to myself, if it is 100% done why is it still replacing? I waited 
about 15 seconds and ran the command again to find something rather 
disconcerting:


pool: xsr_slow_2
state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
  continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress, 0.45% done, 27h27m to go
config:

  NAME

Re: [zfs-discuss] raidz DEGRADED state

2006-11-30 Thread Krzys
Great, thank you, it certainly helped. I did not want to lose data on that disk,
so I wanted to be safe rather than sorry.


thanks for help.

Chris


On Thu, 30 Nov 2006, Bart Smaalders wrote:


Krzys wrote:


my drive did go bad on me, how do I replace it? I am sunning solaris 10 U2 
(by the way, I thought U3 would be out in November, will it be out soon? 
does anyone know?



[11:35:14] server11: /export/home/me  zpool status -x
  pool: mypool2
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist 
for

the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
mypool2 DEGRADED 0 0 0
  raidz DEGRADED 0 0 0
c3t0d0  ONLINE   0 0 0
c3t1d0  ONLINE   0 0 0
c3t2d0  ONLINE   0 0 0
c3t3d0  ONLINE   0 0 0
c3t4d0  ONLINE   0 0 0
c3t5d0  ONLINE   0 0 0
c3t6d0  UNAVAIL  0   679 0  cannot open

errors: No known data errors
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Shut down the machine, replace the drive, reboot
and type:

zpool replace mypool2 c3t6d0


On earlier versions of ZFS I found it useful to do this
at the login prompt; it seemed fairly memory intensive.

- Bart


--
Bart Smaalders  Solaris Kernel Performance
[EMAIL PROTECTED]   http://blogs.sun.com/barts


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] raidz DEGRADED state

2006-11-30 Thread Krzys

Ah, did not see your follow up. Thanks.

Chris


On Thu, 30 Nov 2006, Cindy Swearingen wrote:


Sorry, Bart, is correct:

  If  new_device  is  not  specified,   it   defaults   to
 old_device.  This form of replacement is useful after an
 existing  disk  has  failed  and  has  been   physically
 replaced.  In  this case, the new disk may have the same
 /dev/dsk path as the old device, even though it is actu-
 ally a different disk. ZFS recognizes this.

cs

Cindy Swearingen wrote:

One minor comment is to identify the replacement drive, like this:

# zpool replace mypool2 c3t6d0 c3t7d0

Otherwise, zpool will error...

cs

Bart Smaalders wrote:


Krzys wrote:



my drive did go bad on me, how do I replace it? I am sunning solaris 10 
U2 (by the way, I thought U3 would be out in November, will it be out 
soon? does anyone know?



[11:35:14] server11: /export/home/me  zpool status -x
  pool: mypool2
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas 
exist for

the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
mypool2 DEGRADED 0 0 0
  raidz DEGRADED 0 0 0
c3t0d0  ONLINE   0 0 0
c3t1d0  ONLINE   0 0 0
c3t2d0  ONLINE   0 0 0
c3t3d0  ONLINE   0 0 0
c3t4d0  ONLINE   0 0 0
c3t5d0  ONLINE   0 0 0
c3t6d0  UNAVAIL  0   679 0  cannot open

errors: No known data errors
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Shut down the machine, replace the drive, reboot
and type:

zpool replace mypool2 c3t6d0


On earlier versions of ZFS I found it useful to do this
at the login prompt; it seemed fairly memory intensive.

- Bart



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool question.

2006-10-23 Thread Krzys
Awesome, thanks for your help. Will there be any way to convert raidz to
raidz2?


Thanks again for help/

Chris

On Mon, 23 Oct 2006, Robert Milkowski wrote:


Hello Krzys,

Sunday, October 22, 2006, 8:42:06 PM, you wrote:

K I have solaris 10 U2 and I have raidz partition setup on 5 disks, I just 
added a
K new disk and was wondering, can I add another disk to raidz? I was able to 
add
K it to a pool but I do not think it added it to zpool.

You can't grow RAID-Z :(
You can add a disk but you will end-up with one raid-z group and one
disk and striping between them.
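
As an aside, the layout such an add would produce can be previewed without committing
anything, since zpool add accepts a -n (dry run) flag, as the zpool usage output
elsewhere in this archive shows:

# zpool add -n mypool2 c3t6d0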

K Also when spare disks and raidz2 will be released in Solaris 10? Does anyone
K know when U3 will be comming out?

In S10U3 which should be available late November.


--
Best regards,
Robertmailto:[EMAIL PROTECTED]
  http://milek.blogspot.com


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool question.

2006-10-22 Thread Krzys


I have Solaris 10 U2 and a raidz pool set up on 5 disks. I just added a
new disk and was wondering: can I add another disk to the raidz? I was able to add
it to the pool, but I do not think it was added to the raidz itself.


[13:38:41] /root  zpool status -v mypool2
  pool: mypool2
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
mypool2 ONLINE   0 0 0
  raidz ONLINE   0 0 0
c3t0d0  ONLINE   0 0 0
c3t1d0  ONLINE   0 0 0
c3t2d0  ONLINE   0 0 0
c3t3d0  ONLINE   0 0 0
c3t4d0  ONLINE   0 0 0
c3t5d0  ONLINE   0 0 0

errors: No known data errors

[14:35:36] /root  zpool add mypool2 c3t6d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c3t6d0s0 contains a ufs filesystem.
/dev/dsk/c3t6d0s4 contains a ufs filesystem.
[14:36:02] /root  zpool add -f mypool2 c3t6d0
[14:36:14] /root  zpool list
NAME       SIZE    USED   AVAIL    CAP  HEALTH   ALTROOT
mypool     278G    187G   90.6G    67%  ONLINE   -
mypool2    952G    367K    952G     0%  ONLINE   -
[14:36:21] /root  zpool status -v mypool2
  pool: mypool2
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
mypool2 ONLINE   0 0 0
  raidz ONLINE   0 0 0
c3t0d0  ONLINE   0 0 0
c3t1d0  ONLINE   0 0 0
c3t2d0  ONLINE   0 0 0
c3t3d0  ONLINE   0 0 0
c3t4d0  ONLINE   0 0 0
c3t5d0  ONLINE   0 0 0
  c3t6d0ONLINE   0 0 0

errors: No known data errors


Also, when will spare disks and raidz2 be released in Solaris 10? Does anyone
know when U3 will be coming out?


Thanks guys.

Chris

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: [osol-discuss] Cloning a disk w/ ZFS in it

2006-10-22 Thread Krzys
Yeah, the disks need to be identical, but why do you need to do prtvtoc and fmthard
to duplicate the disk label (before the dd)? I thought that dd would take care of all
of that... Whenever I used dd I used it on slice 2 and I never had to do prtvtoc and
fmthard... Just make sure the disks are identical; that is the key.


Regards,

Chris

On Fri, 20 Oct 2006, Richard Elling - PAE wrote:


minor adjustments below...

Darren J Moffat wrote:

Asif Iqbal wrote:

Hi

I have a X2100 with two 74G disks. I build the OS on the first disk
with slice0 root 10G ufs, slice1 2.5G swap, slice6 25MB ufs and slice7
62G zfs. What is the fastest way to clone it to the second disk. I
have to build 10 of those in 2 days. Once I build the disks I slam
them to the other X2100s and ship it out.


if clone really means make completely identical then do this:

boot of cd or network.

dd if=/dev/dsk/sourcedisk  of=/dev/dsk/destdisk

Where sourcedisk and destdisk are both localally attached.


I use prtvtoc and fmthard to duplicate the disk label (before the dd)
Note: the actual disk geometry may change between vendors or disk
firmware revs.  You will first need to verify that the geometries are
similar, especially the total number of blocks.

For dd, I'd use a larger block size than the default.  Something like:
dd bs=1024k if=/dev/dsk/sourcedisk  of=/dev/dsk/destdisk

The copy should go at media speed, approximately 50-70 MBytes/s for
the X2100 disks.
-- richard
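
A minimal sketch of the label-copy step described above, keeping the placeholder
device names from the thread (on a real system these would be the raw whole-disk
slices, e.g. s2 on SPARC or p0 on x86):

# prtvtoc /dev/rdsk/sourcedisk | fmthard -s - /dev/rdsk/destdisk
# dd bs=1024k if=/dev/dsk/sourcedisk of=/dev/dsk/destdisk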
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] vfstab

2006-10-02 Thread Krzys

Great, thanks :)

Chris

On Mon, 2 Oct 2006, Mark Shellenbaum wrote:


Krzys wrote:

Hello all,

Is there any way to mount zfs file system from vfstab?

Thanks,

Chris




Set the mountpoint property for the file system to legacy and add the 
necessary info to the vfstab
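
For example, a hedged sketch with a hypothetical dataset mypool/home to be mounted
at /export/home:

# zfs set mountpoint=legacy mypool/home

and then a line in /etc/vfstab:

mypool/home  -  /export/home  zfs  -  yes  -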




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool df mypool

2006-09-17 Thread Krzys
The man page does say that zpool df home works, but when I type it, it does not
work and I get the following error:

zpool df mypool
unrecognized command 'df'
usage: zpool command args ...
where 'command' is one of the following:

create  [-fn] [-R root] [-m mountpoint] pool vdev ...
destroy [-f] pool

add [-fn] pool vdev ...

list [-H] [-o field[,field]*] [pool] ...
iostat [-v] [pool] ... [interval [count]]
status [-vx] [pool] ...

online pool device ...
offline [-t] pool device ...
clear pool [device]

attach [-f] pool device new_device
detach pool device
replace [-f] pool device [new_device]

scrub [-s] pool ...

import [-d dir] [-D]
import [-d dir] [-D] [-f] [-o opts] [-R root] -a
import [-d dir] [-D] [-f] [-o opts] [-R root ] pool | id [newpool]
export [-f] pool ...
upgrade
upgrade -v
upgrade -a | pool

I am running Solaris 10 Update 2. Is my man page out of date or is my zfs not up 
to date?
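
For the record, the space information that zpool df would have reported is available
from commands that do appear in the usage listing above, for example:

# zpool list mypool
# zfs list -r mypool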


Thanks.

Chris

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: low disk performance

2006-09-17 Thread Krzys
That is bad, such a big time difference... 14 hrs vs. less than 2 hrs... Did you
have the same hardware setup? I did not follow the thread...


Chris


On Sun, 17 Sep 2006, Gino Ruopolo wrote:


Other test, same setup.


SOLARIS10:

zpool/a   filesystem containing over 10 million subdirs, each containing 10
files of about 1k
zpool/b   empty filesystem

rsync -avx  /zpool/a/* /zpool/b

time:  14 hours   (iostat showing %b = 100 for each lun in the zpool)

FreeBSD:
/vol1/a   dir containing over 10 million subdirs, each containing 10 files
of about 1k
/vol1/b   empty dir

rsync -avx /vol1/a/* /vol1/b

time: 1h 40m !!

Also a zone running on zpool/zone1 was almost completely unusable because of 
i/o load.


This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] mounting during boot

2006-09-16 Thread Krzys
Hello everyone, I just wanted to play with ZFS a bit before I start using
it at my workplace on servers, so I set it up on my Solaris 10 U2 box.
I used to have all my disks mounted as UFS and everything was fine. My
/etc/vfstab looked like this:

#
fd -   /dev/fd   fd -   no  -
/proc  -   /proc proc   -   no  -
/dev/dsk/c1t0d0s1  -   - swap   -   no  -
/dev/dsk/c1t0d0s0  /dev/rdsk/c1t0d0s0  / ufs1   no  logging
/dev/dsk/c1t0d0s6  /dev/rdsk/c1t0d0s6  /usr  ufs1   no  logging
/dev/dsk/c1t0d0s5  /dev/rdsk/c1t0d0s5  /var  ufs1   no  logging
/dev/dsk/c1t0d0s7  /dev/rdsk/c1t0d0s7  /d/d1 ufs2   yes logging
/devices   -   /devices  devfs  -   no  -
ctfs   -   /system/contractctfs-   no  -
objfs  -   /system/object  objfs   -   no  -
swap   -   /tmptmpfs   -   yes -
/dev/dsk/c1t1d0s7  /dev/rdsk/c1t1d0s7  /d/d2   ufs 2   yes logging
/d/d2/downloads -  /d/d2/web/htdocs/downloads  lofs2   yes -
/d/d1/home/cw/pics  -  /d/d2/web/htdocs/pics   lofs2   yes -

So I decided to put the /d/d2 drive on ZFS: I created my pool, then created a ZFS
filesystem and mounted it under /d/d2 while I copied the contents of /d/d2 to the new
filesystem, and then removed the old entry from the vfstab file.


OK, so now the line that says:
/dev/dsk/c1t1d0s7  /dev/rdsk/c1t1d0s7  /d/d2   ufs 2   yes logging
is commented out of my vfstab file. I rebooted the system just to get everything
started the way I wanted (I did bring all the web servers and everything else
down for the duration of the copy so that nothing was accessing the /d/d2 drive).


So my system is booting up and I cannot log in. Apparently my service
svc:/system/filesystem/local:default went into maintenance mode... somehow the system
could not mount these two items from vfstab:

/d/d2/downloads -  /d/d2/web/htdocs/downloads  lofs2   yes -
/d/d1/home/cw/pics  -  /d/d2/web/htdocs/pics   lofs2   yes -
I could not log in and do anything; I had to log in through the console, take my service
svc:/system/filesystem/local:default out of maintenance mode and clear the maintenance
state, and then all my services started to come up and the system was no longer in
single-user mode...


That sucks a bit: how can I mount both UFS drives, then mount ZFS, and then
get the lofs mountpoints after that?


Also, if certain disks did not mount I used to be able to go to /etc/vfstab and
see what was going on; now, since ZFS does not use vfstab, how can I know what was
mounted or not before the system went down? Sometimes drives go bad, and sometimes
certain disks are commented out in vfstab, such as backup disks; with ZFS it is
controlled through the command line. What if I do not want something mounted at boot
time? How can I tell what is supposed to be mounted at boot and what is not, using
zfs list? Is there a config file where I can just comment out a few lines and be
able to mount them at times other than boot?
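
One approach, sketched here with a hypothetical dataset name mypool/d2 (the post does
not give the actual one): give the filesystem a legacy mountpoint so it goes back
under vfstab control, where it can be commented out like any UFS entry, and use
zfs mount with no arguments to list what is currently mounted:

# zfs set mountpoint=legacy mypool/d2

then in /etc/vfstab:

mypool/d2  -  /d/d2  zfs  -  yes  -

and to see which ZFS filesystems are mounted at any given time:

# zfs mount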


Thanks for any suggestions... and sorry if this is the wrong group for such a
question, since it is not about OpenSolaris but about ZFS on Solaris 10 Update 2.


Chris

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Properties of ZFS snapshots I'd like to see...

2006-05-05 Thread Krzys
Maybe there could be a flag for certain snaps so that they could be made read only?!?
But I don't know how this could be implemented, and I do not think it
would be possible... Anyway, I still think that if I had a production system with
those snaps I would rather remove that golden image and continue with
operations than have no space and bring my system to a halt.


:)

On Fri, 5 May 2006, Darren J Moffat wrote:


Krzys wrote:
I did not think of it this way and it is a very valid point, but I still 
think that most likely you would have a backup already on tape if need be 
and haveing space available for writing rhather than having no disk space 
for live data is much more important than a snap, but thats my opinion. I 
think it certainly should be an option.


What if my first snapshot is the golden image of my zones or diskless clients 
?  I need that online not on a tape.


--
Darren J Moffat


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Properties of ZFS snapshots I'd like to see...

2006-05-05 Thread Krzys
I really do like the way NetApp handles snaps :) That would be an excellent
thing to have in ZFS :)


On Fri, 5 May 2006, Marion Hakanson wrote:


Interesting discussion.  I've often been impressed at how NetApp-like
the overall ZFS feature-set is (which implies that I like NetApp's).  Is it
verboten to compare ZFS to NetApp?  I hope not.

NetApp has two ways of making snapshots.  There is a set of automatic
snapshots, which are created, rotate and expire on their own (i.e. the
filer does all of this).  Often you'll have a number of hourly, daily,
weekly, etc. snapshots in this category.  These are the ones that users
can count on seeing when they seek to perform a self-recovery of a
mistakenly damaged file.

Then you have the ones you create manually, or which are created by
backup software.  The filer itself will never delete these, it's up
to the external creator to manage them.

This has proven to be a fantastic model for the usage patterns that I have
experienced (over probably 6+ years of NetApp use), and I would like to
see something similar available for ZFS.
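
As a rough illustration only, the automatic half of that model can be approximated
on ZFS today with cron and a small script; the dataset name, schedule and retention
below are entirely hypothetical:

# crontab entry: take an hourly snapshot at the top of every hour
0 * * * * /usr/local/bin/zfs-hourly-snap

#!/bin/sh
# zfs-hourly-snap: snapshot mypool/home hourly and keep only the newest 24
DS=mypool/home
KEEP=24
zfs snapshot $DS@hourly-`date +%Y%m%d%H%M`
# list this dataset's hourly snapshots, oldest first, and prune the surplus
zfs list -H -t snapshot -o name -s creation | grep "^$DS@hourly-" > /tmp/hourly.$$
COUNT=`wc -l < /tmp/hourly.$$`
if [ $COUNT -gt $KEEP ]; then
    head -`expr $COUNT - $KEEP` /tmp/hourly.$$ | while read SNAP; do
        zfs destroy "$SNAP"
    done
fi
rm -f /tmp/hourly.$$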

Personally, I think that having an expiration time (and creation) be
associated with the snapshot/pool itself is a good thing.  What happens
if one exports said filesystem/pool (with snapshots) to another system,
if such creation/expiration is handled by some outside utility?

Hmm, I'm not sure if the NetApp auto-snapshot schedule follows a disk
volume if it's exported to a different filer.  I think it doesn't.

Regards,

Marion



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss