Thanks to all for the suggestions. I'm not sure if it's totally working yet, but it definitely looks better.
Thanks, Andrey - I didn't even think to check for trespassed LUNs. I found about half of the LUNs were trespassed and fixed those. I re-ran "vxdisk -o alldgs list", and the Clariion SP went unmanaged... (and another support case to be opened). I checked again, found a couple more trespassed LUNs, and fixed those. This time the vxdisk command took only 1 1/2 minutes to complete; before, it would take up to 5 minutes. And no errors in the VCS logs for dg monitoring timing out, yay!

Jon, the vx logs showed nothing that I could identify as an issue - other than some long response times.

Kiru, since we're using PowerPath, we have DMP set to single-active and mpxio disabled. We are not using CVM in our situation.

I have some more testing to do, but this looks promising. Thanks, all!

Bryan

> Maybe looking at the Veritas logs while the disk group commands run would
> give you a hint...
>
> In /var/adm/vx/
>
> Jon
>
> On Tue, Oct 7, 2008 at 1:59 PM, Kirubakaran Kaliannan <
> [EMAIL PROTECTED]> wrote:
>
>> Hi,
>>
>> Which storage vendor are you using, and is the array configured as A/P
>> or A/PF?
>>
>> Taking 5 minutes is probably due to trespass.
>> The reasons could be:
>> 1. You may not have the proper ASL/APM for the storage on both nodes
>> (including the failover node).
>> 2. Are both nodes in a CVM configuration? If they are not, there will
>> be no coordination between the nodes to avoid trespass.
>> 3. Check that the array is configured as in the HCL.
>>
>> Please let us know if you still have the issue after verifying the
>> above.
>>
>> Thanks
>> -kiru
>>
>>
>> -----Original Message-----
>> From: [EMAIL PROTECTED]
>> [mailto:[EMAIL PROTECTED] On Behalf Of Bryan
>> Bahnmiller
>> Sent: Tuesday, October 07, 2008 11:42 AM
>> To: veritas-vx@mailman.eng.auburn.edu
>> Subject: [Veritas-vx] VxVM disk group commands take too long
>>
>> Hello all,
>>
>> I have a situation where one system is taking a long time to scan for
>> disk groups.
>> I have a VCS cluster in a 4+1 configuration. I have recently added the
>> last production node to the cluster. After the disks were presented to
>> the failover node, it took a long time for any Vx commands to respond.
>>
>> The servers are all Solaris 10; the cluster, VxVM, and VxFS are all
>> 5.0MP1. The server I added was one we had used for testing, and many
>> different LUNs had been presented to and removed from it. But it will
>> import and deport the disk group just fine. The other server, the
>> failover node in the cluster, takes about 5 minutes to import the dg.
>> If I do a "vxdisk -o alldgs list" on the failover node, it takes about
>> 5 minutes to respond. Any way I look at the devices, they look clean.
>> Whether I look at them with cfgadm, luxadm, vxdisk, or PowerPath, the
>> disks look OK. So anytime I run any vx commands on the failover node,
>> I get VCS monitoring timeout errors for the dg's.
>>
>> Anyone have suggestions?
>>
>> Thanks,
>> Bryan
>>
>> --
>> Bryan Bahnmiller
>>
>> _______________________________________________
>> Veritas-vx maillist - Veritas-vx@mailman.eng.auburn.edu
>> http://mailman.eng.auburn.edu/mailman/listinfo/veritas-vx
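[Editor's note: for readers hitting the same symptom, the trespass check Bryan describes can be scripted. This is a rough sketch, not a supported tool: it parses the "Owner: default=..., current=..." lines that `powermt display dev=all` prints for CLARiiON devices, and flags any LUN whose current SP differs from its default SP. The function name `check_trespass` is illustrative, and the exact "Owner:" line format can vary between PowerPath releases, so adjust the patterns to match your version's output.]

```shell
# Sketch: flag trespassed CLARiiON LUNs from `powermt display dev=all` output.
# A LUN is trespassed when its current SP owner differs from its default owner.
check_trespass() {
  # Reads powermt output on stdin; prints one line per trespassed LUN.
  awk '
    /^Pseudo name=/ {
      # remember the pseudo device name, e.g. "emcpower10a"
      dev = substr($0, index($0, "=") + 1)
    }
    /Owner: *default=/ {
      # line looks like: "   Owner: default=SP A, current=SP B"
      def = $0; sub(/.*default=/, "", def); sub(/,.*/, "", def)
      cur = $0; sub(/.*current=/, "", cur)
      if (def != cur)
        printf "%s: default=%s current=%s\n", dev, def, cur
    }
  '
}

# Typical use on a host with PowerPath installed:
#   powermt display dev=all | check_trespass
```

A trespassed LUN found this way can usually be sent back to its default SP with `powermt restore`, after which the slow `vxdisk -o alldgs list` behavior should be re-tested.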