[zfs-discuss] Using one of the mirrored disks in a new machine

2009-01-25 Thread iman habibi
Dear Admins,
I installed Solaris 10 u6 with a ZFS root on two mirrored disks (rpool) in my
first SunFire V880 SPARC machine.
My questions are:
1. Can I remove one of the mirrored disks from this first machine, use it as
an installed operating system in another machine, and make that second
machine operational?
(I want to install Solaris on it, but the new machine has no CD-ROM and
cannot boot from the network.)
2. After that, can I insert a new hard disk into the empty slot of each
machine to restore the mirrored state of rpool?
Both machines have the same hardware and two hard disks, and all the disks
have identical characteristics and capacity.

Any guidance is appreciated.


Re: [zfs-discuss] locate disk command? locate broken disk?

2009-01-25 Thread Craig Morgan
There is an optional utility supplied by Sun (for all supported OSes)
to map the internal drives of the X4500/X4540 to their platform-specific
device IDs. It's called 'hd' and is on one of the support CDs supplied
with the systems (and can be downloaded if you've mislaid the disc!).

Documentation here (including link to download) ... 
http://docs.sun.com/source/820-1120-19/hdtool_new.html#0_64301
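
If 'hd' isn't to hand, a common low-tech alternative (only a rough
suggestion -- the device name below is simply the one from the question) is
to generate I/O against the suspect device, if it still responds at all, and
watch which activity LED stays busy:

# dd if=/dev/rdsk/c0t3d0s0 of=/dev/null bs=128k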

HTH

Craig

On 24 Jan 2009, at 18:39, Orvar Korvar wrote:

 If ZFS says that one disk is broken, how do I locate it? It says
 that disk c0t3d0 is broken. Which disk is that? Must I keep track of
 the disks during install?

 In Thumper it is possible to issue a ZFS command and have the
 corresponding disk's lamp flash, isn't it? Is there any zlocate-style
 command that will flash a particular disk's lamp?

-- 
Craig

Craig Morgan
t: +44 (0)791 338 3190
f: +44 (0)870 705 1726
e: craig.mor...@sun.com



Re: [zfs-discuss] Using one of the mirrored disks in a new machine

2009-01-25 Thread Tomas Ögren
On 25 January, 2009 - iman habibi sent me these 2,0K bytes:

 Dear Admins,
 I installed Solaris 10 u6 with a ZFS root on two mirrored disks (rpool) in my
 first SunFire V880 SPARC machine.
 My questions are:
 1. Can I remove one of the mirrored disks from this first machine, use it as
 an installed operating system in another machine, and make that second
 machine operational?
 (I want to install Solaris on it, but the new machine has no CD-ROM and
 cannot boot from the network.)
 2. After that, can I insert a new hard disk into the empty slot of each
 machine to restore the mirrored state of rpool?
 Both machines have the same hardware and two hard disks, and all the disks
 have identical characteristics and capacity.

Newer ZFS root pools have the hostid stored in them, so the new machine will
refuse to boot from that disk (because the pool might be in use by another
host on shared storage, or similar). You need to boot something else on the
new machine, then 'zpool import -f rpool', export it and reboot.

One way, I guess, is to install another system with a UFS root as well, boot
the new machine off that, stick the removed mirror from the first system into
the new system, 'zpool import -f rpool', export, and reboot onto the ZFS
disk.

After that, you can attach mirrors with 'zpool attach'. You might need to run
installboot on the second disk as well.
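
Roughly, on the new machine (a sketch of the sequence; the device names are
examples only, not taken from your hardware):

# zpool import -f rpool        (from the temporary boot environment)
# zpool export rpool
(reboot onto the ZFS disk)
# zpool attach rpool c1t0d0s0 c1t1d0s0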

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


Re: [zfs-discuss] zpool status -x strangeness

2009-01-25 Thread Blake Irvin
You can upgrade live.  'zfs upgrade' with no arguments shows the ZFS
version status of the filesystems present, without upgrading anything.
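
For example (the bare commands only report versions; the -a forms are what
actually upgrade):

# zfs upgrade        (report ZFS filesystem versions; changes nothing)
# zpool upgrade      (report pool versions; changes nothing)
# zfs upgrade -a     (upgrade all filesystems to the current version)
# zpool upgrade -a   (upgrade all pools to the current version)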



On Jan 24, 2009, at 10:19 AM, Ben Miller mil...@eecis.udel.edu wrote:

 We haven't run 'zfs upgrade ...' on any of them.  I'll give that a try the
 next time the system can be taken down.

 Ben

 A little gotcha that I found in my 10u6 update process was that
 'zpool upgrade [poolname]' is not the same as
 'zfs upgrade [poolname]/[filesystem(s)]'.

 What does 'zfs upgrade' say?  I'm not saying this is the source of your
 problem, but it's a detail that seemed to affect stability for me.


 On Thu, Jan 22, 2009 at 7:25 AM, Ben Miller wrote:
 The pools are upgraded to version 10.  Also, this
 is on Solaris 10u6.



Re: [zfs-discuss] [storage-discuss] AVS on opensolaris 2008.11

2009-01-25 Thread Jim Dunham
Ahmed,

 Thanks for your informative reply. I am involved with kristof (the original
 poster) in this setup; please allow me to reply below.

 Was the following 'test' run during resynchronization mode or replication
 mode?


 Neither; the testing was done while in logging mode. This was chosen simply
 to avoid any network issues and to get the setup working as quickly as
 possible. The setup was created with:

 sndradm -E pri /dev/zvol/rdsk/gold/myzvol /dev/rramdisk/ram1 sec
 /dev/zvol/rdsk/gold/myzvol /dev/rramdisk/ram1 ip async

 Note that the logging disks are ramdisks, again to avoid disk contention and
 get the best performance (reliability is not a concern in this test). Before
 running the tests, this was the state:

 #sndradm -P
 /dev/zvol/rdsk/gold/myzvol  -  pri:/dev/zvol/rdsk/gold/myzvol
 autosync: off, max q writes: 4096, max q fbas: 16384, async threads:
 2, mode: async, state: logging

 While we were hoping for a minimal performance hit, we got a big one: disk
 throughput was reduced to almost 10% of the normal rate.

Is it possible to share information on your ZFS storage pool configuration,
your testing tool, the types of tests run, and the resulting data?
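
For instance, output along these lines (an illustrative list, not an
exhaustive one) would help characterize the setup:

# zpool status         (pool layout)
# zpool iostat -v 5    (per-vdev throughput during the test)
# iostat -xnz 5        (device-level latency and utilization)
# prstat -mL 5         (per-thread CPU usage, to rule out CPU saturation)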

I just downloaded Solaris Express CE (b105),
http://opensolaris.org/os/downloads/sol_ex_dvd_1/,
configured ZFS in various storage pool types, and SNDR with and without RAM
disks, and I do not see disk throughput reduced to almost 10% of the normal
rate. Yes, there is some performance impact, but nowhere near the amount
reported.

There are various factors which could come into play here, but the most
obvious reason someone might see a serious performance degradation as
reported is that, prior to SNDR being configured, the existing system under
test was already maxed out on some system limitation, such as CPU or memory.
I/O impact should not be a factor, given that a RAM disk is used. On a system
that is already at its limit, the addition of both SNDR and a RAM disk to the
data path, however small their cost, can have a profound impact on disk
throughput.

Jim


 Please feel free to ask for any details; thanks for the help.

 Regards


[zfs-discuss] Unable to destroy a pool

2009-01-25 Thread Ramesh Mudradi
# zpool list
NAME SIZE   USED  AVAILCAP  HEALTH  ALTROOT
jira-app-zpool   272G   330K   272G 0%  ONLINE  -

The following command hangs forever. If I reboot the box, 'zpool list' shows
the pool online, as in the output above.

# zpool destroy -f jira-app-zpool

How can I get rid of this pool and any reference to it?

bash-3.00# zpool status
  pool: jira-app-zpool
 state: UNAVAIL
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://www.sun.com/msg/ZFS-8000-HC
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
jira-app-zpool  UNAVAIL  0 0 4  insufficient replicas
  c3t0d3FAULTED  0 0 4  experienced I/O failures

errors: 2 data errors, use '-v' for a list
bash-3.00#


Re: [zfs-discuss] Using one of the mirrored disks in a new machine

2009-01-25 Thread Richard Elling
Tomas Ögren wrote:
 On 25 January, 2009 - iman habibi sent me these 2,0K bytes:

   
 Dear Admins,
 I installed Solaris 10 u6 with a ZFS root on two mirrored disks (rpool) in my
 first SunFire V880 SPARC machine.
 My questions are:
 1. Can I remove one of the mirrored disks from this first machine, use it as
 an installed operating system in another machine, and make that second
 machine operational?
 (I want to install Solaris on it, but the new machine has no CD-ROM and
 cannot boot from the network.)
 2. After that, can I insert a new hard disk into the empty slot of each
 machine to restore the mirrored state of rpool?
 Both machines have the same hardware and two hard disks, and all the disks
 have identical characteristics and capacity.
 

I can't see a reason why this explicitly would not work, but I've not tried
it on such hardware.  Recently, I completed a brain transplant on an
OpenSolaris box (new mobo, different graphics, different chipset,
different processor) and it just booted right up with a reconfigure
reboot... no problems at all... quite impressive, I must say! :-)

 Newer ZFS root pools have the hostid stored in them, so the new machine
 will refuse to boot from that disk (because the pool might be in use by
 another host on shared storage, or similar). You need to boot something
 else on the new machine, then 'zpool import -f rpool', export it and
 reboot.
   

This should not be necessary, as there is a check to see if
the importing pool is also the boot pool.  That said, I cannot
speak to this in Solaris 10, as I have not looked at that code.
OpenSolaris should be OK, though.

 One way, I guess, is to install another system with a UFS root as well,
 boot the new machine off that, stick the removed mirror from the first
 system into the new system, 'zpool import -f rpool', export, and reboot
 onto the ZFS disk.

 After that, you can attach mirrors with 'zpool attach'. You might need to
 run installboot on the second disk as well.
   

installboot may or may not be necessary, depending on how the mirrored
boot was originally installed.  If it was set up by JumpStart with the
mirror option, then the boot block should already be there.  If you did it
by hand, then the instructions for mirroring root pools in the ZFS
Administration Guide need to be followed, which include manually running
installboot.
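
For a SPARC system such as the V880, that manual step looks roughly like this
(the target device name is an example only):

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0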
 -- richard



[zfs-discuss] thoughts on parallel backups, rsync, and send/receive

2009-01-25 Thread Richard Elling
Recently, I've been working on a project which had aggressive backup
requirements. I believe we solved the problem with parallelism.  You
might consider doing the same.  If you get time to do your own experiments,
please share your observations with the community.
http://richardelling.blogspot.com/2009/01/parallel-zfs-sendreceive.html
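
The basic idea is simply to run several send/receive streams at once; a rough
sketch of the approach (dataset, snapshot and host names are made up; see the
blog post for the actual measurements):

for fs in tank/fs1 tank/fs2 tank/fs3; do
    zfs send $fs@backup | ssh backuphost zfs receive -d backup &
done
wait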
 -- richard



Re: [zfs-discuss] thoughts on parallel backups, rsync, and send/receive

2009-01-25 Thread Ian Collins
Richard Elling wrote:
 Recently, I've been working on a project which had aggressive backup
 requirements. I believe we solved the problem with parallelism.  You
 might consider doing the same.  If you get time to do your own experiments,
 please share your observations with the community.
 http://richardelling.blogspot.com/2009/01/parallel-zfs-sendreceive.html
   

You raise some interesting points about rsync getting bogged down over
time.  I have been working with a client with a requirement for
replication between a number of hosts, and I have found that doing several
send/receives in parallel made quite an impact.  What I haven't done is try
this with the latest performance improvements in b105.  Have you?  My guess
is the gain will be less.

One thing I have yet to do is find the optimum number of parallel
transfers when there are 100s of filesystems.  I'm looking into making
this dynamic, based on throughput.
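
One simple way to cap the concurrency while experimenting (a bash-flavoured
sketch only; the snapshot name and the limit of 4 are arbitrary placeholders):

MAX=4
for fs in $(zfs list -H -o name -r tank); do
    zfs send "$fs@backup" | ssh backuphost zfs receive -d backup &
    while [ $(jobs -r | wc -l) -ge $MAX ]; do
        sleep 5
    done
done
wait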

Are you working with OpenSolaris?  I still haven't managed to nail the
toxic streams problem in Solaris 10, which has curtailed my project.

-- 
Ian.
