[Veritas-vx] hp_ux VxVM errors

2006-11-15 Thread Adrian Constantinescu

Hi All,
 
My name is Adrian and I have some problems with an HP-UX rp7xxx system
together with an HP StorageWorks MSA30.
I use VxVM on the server and on the storage.
The problem is that I cannot see the status of the volumes from the commands:
# vxdisk list
# vxdg list
# vxprint -ht
I have attached the configuration of the system and the output of the commands.
There are 2 volumes in one group on the array; the volumes are mounted
and the data is available.
I tried to restart the configuration daemon with vxconfigd -k; the output is below:
 
vasstat1:/opt/output/daily_stats# vxconfigd -k
V-5-1-0 vxvm:vxconfigd: NOTICE: Generating /etc/vx/array.info

VxVM vxconfigd ERROR V-5-1-1589 enable failed: Volboot file not loaded
transactions are disabled.
vasstat1:/opt/output/daily_stats# vxdisk list
DEVICE       TYPE      DISK     GROUP    STATUS
vasstat1:/opt/output/daily_stats# vxdg list
NAME         STATE     ID
vasstat1:/opt/output/daily_stats# vxprint -ht
VxVM vxprint ERROR V-5-1-684 IPC failure: Configuration daemon is not accessible
 
I would appreciate it if you could suggest some ways to fix the problem.
Adrian.
 
1)  ioscan -fn

Class    I  H/W Path           Driver    S/W State  H/W Type   Description
===========================================================================
root     0                     root      CLAIMED    BUS_NEXUS
cell     0  1                  cell      CLAIMED    BUS_NEXUS
ioa      0  1/0                sba       CLAIMED    BUS_NEXUS  System Bus Adapter (805)
ba       0  1/0/0              lba       CLAIMED    BUS_NEXUS  Local PCI Bus Adapter (782)
tty      0  1/0/0/0/0          asio0     CLAIMED    INTERFACE  PCI SimpleComm (103c1290)
            /dev/diag/mux0  /dev/hpgps1  /dev/mux0  /dev/tty0p0
tty      1  1/0/0/0/1          asio0     CLAIMED    INTERFACE  PCI Serial (103c1048)
            /dev/GSPdiag1  /dev/GSPdiag2  /dev/diag/mux1  /dev/mux1  /dev/tty1p0  /dev/tty1p2  /dev/tty1p4
ext_bus  0  1/0/0/3/0          c8xx      CLAIMED    INTERFACE  SCSI C1010 Ultra160 Wide LVD A6793-60001
target   4  1/0/0/3/0.6        tgt       CLAIMED    DEVICE
disk     1  1/0/0/3/0.6.0      sdisk     CLAIMED    DEVICE     HP 36.4G ST336754LC
            /dev/dsk/c0t6d0  /dev/rdsk/c0t6d0
target   5  1/0/0/3/0.7        tgt       CLAIMED    DEVICE
ctl      2  1/0/0/3/0.7.0      sctl      CLAIMED    DEVICE     Initiator
            /dev/rscsi/c0t7d0
ext_bus  1  1/0/0/3/1          c8xx      CLAIMED    INTERFACE  SCSI C1010 Ultra Wide Single-Ended A6793-60001
target   2  1/0/0/3/1.2        tgt       CLAIMED    DEVICE
disk     0  1/0/0/3/1.2.0      sdisk     CLAIMED    DEVICE     _NEC DVD_RW ND-3540A
            /dev/dsk/c1t2d0  /dev/rdsk/c1t2d0
target   6  1/0/0/3/1.7        tgt       CLAIMED    DEVICE
ctl      3  1/0/0/3/1.7.0      sctl      CLAIMED    DEVICE     Initiator
            /dev/rscsi/c1t7d0
ba       1  1/0/1              lba       CLAIMED    BUS_NEXUS  Local PCI-X Bus Adapter (783)
ba       2  1/0/1/1/0          PCItoPCI  CLAIMED    BUS_NEXUS  PCItoPCI Bridge
ext_bus  2  1/0/1/1/0/1/0      c8xx      CLAIMED    INTERFACE  SCSI C1010 Ultra160 Wide LVD
target   9  1/0/1/1/0/1/0.7    tgt       CLAIMED    DEVICE
ctl      5  1/0/1/1/0/1/0.7.0  sctl      CLAIMED    DEVICE     Initiator
            /dev/rscsi/c2t7d0
ext_bus  3  1/0/1/1/0/1/1      c8xx      CLAIMED    INTERFACE  SCSI C1010 Ultra160 Wide LVD
target   7  1/0/1/1/0/1/1.6    tgt       CLAIMED    DEVICE
disk     2  1/0/1/1/0/1/1.6.0  sdisk     CLAIMED    DEVICE     HP 36.4G ST336754LC
            /dev/dsk/c3t6d0  /dev/rdsk/c3t6d0
target   8  1/0/1/1/0/1/1.7    tgt       CLAIMED    DEVICE
ctl      4  1/0/1/1/0/1/1.7.0  sctl      CLAIMED    DEVICE     Initiator
            /dev/rscsi/c3t7d0
lan      0  1/0/1/1/0/4/0      igelan    CLAIMED    INTERFACE  HP A6794-60001 PCI 1000Base-T
ba       3  1/0/2              lba       CLAIMED    BUS_NEXUS  Local PCI-X Bus Adapter (783)
ba       4  1/0/2/1/0          PCItoPCI  CLAIMED    BUS_NEXUS  PCItoPCI Bridge
lan      1  1/0/2/1/0/4/0      iether    CLAIMED    INTERFACE  HP AB545-60001 PCI/PCI-X 1000Base-T 4-port 1000B-T Adapter
lan      2  1/0/2/1/0/4/1      iether    CLAIMED    INTERFACE  HP AB545-60001 PCI/PCI-X 1000Base-T 4-port 1000B-T Adapter
lan      3  1/0/2/1/0/6/0      iether    CLAIMED    INTERFACE  HP AB545-60001 PCI/PCI-X 1000Base-T 4-port 1000B-T Adapter
lan      4  1/0/2/1/0/6/1      iether    CLAIMED    INTERFACE  HP AB545-60001 PCI/PCI-X 1000Bas

[Veritas-vx] VxFS 5.0 on AIX 5.3

2006-11-15 Thread Pavel A Tsvetkov

Hello all!

I want to install VxFS 5.x (or 4.x) on AIX 5.3 ML5. Has anybody seen any
problems with VxFS on AIX?
I'm going to use VxVM volumes.

Regards, Pavel
___
Veritas-vx maillist  -  Veritas-vx@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-vx


Re: [Veritas-vx] hp_ux VxVM errors

2006-11-15 Thread acena
Hi Adrian

Which version of SF are you running? Is the boot disk under VxVM? Is the
volboot file present in /etc/vx?

You may have to re-create the volboot file; this tech note can help you:

http://support.veritas.com/docs/250360
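If the volboot file is indeed missing, the recovery sketched in that tech note is roughly the following (a sketch only; it assumes the host ID has not changed since the disk groups were created, and exact steps can vary by SF version, so check vxdctl(1M) first):

```shell
# Re-create /etc/vx/volboot with this host's ID
vxdctl init `hostname`

# Restart the configuration daemon and re-enable transactions
vxconfigd -k
vxdctl enable

# Verify that the daemon is enabled and the disks are visible again
vxdctl mode
vxdisk list
```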

--
 
Andrea Cena
Senior Solution Specialist - Storage & Availability
> SORINT
__
Mobile: +39.335.1295330
Phone:  +39.011.4334099
Fax:+39.035.697590 
   [EMAIL PROTECTED]


PERSONAL AND CONFIDENTIAL.
This message is for the designated recipient only and may contain privileged, 
proprietary, or otherwise private information.  
If you have received it in error, please notify the sender immediately  and 
delete all the copies.
Any other use of the email by you is prohibited.


Re: [Veritas-vx] Recommended mount options for VxFS

2006-11-15 Thread Scott Kaiser
You can also grow the log using fsadm, if you don't want to place it on
a separate device and it was built too small originally. See the
fsadm_vxfs man page for the specific flag.
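As a sketch of what Scott describes (option name per the fsadm_vxfs man page; the size of 16384 filesystem blocks and the mount point /data are made-up examples):

```shell
# Grow the intent log of the mounted VxFS filesystem at /data
# (logsize is given in filesystem blocks; requires a disk layout
# version that supports online log resize)
fsadm -F vxfs -o logsize=16384 /data
```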

Regards,
Scott
 

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Par Botes
> Sent: Wednesday, November 08, 2006 12:35 AM
> To: [EMAIL PROTECTED]; veritas-vx@mailman.eng.auburn.edu
> Subject: Re: [Veritas-vx] Recommended mount options for VxFS
> 
> Nope,
> 
> I mean mkfs -o logsize (i.e. the size of the intent log).
> 
> If you already have the filesystem, then you need to convert
> to a multi-volume filesystem, create a volume set, and use a
> placement policy to specify that log data goes onto a specific device
> and all other data goes onto another device.
> This is covered under multi-volume file systems (chapter 9 of the 4.0 FS
> admin guide).
> 
> Best,
> Par
>  
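The multi-volume conversion Par describes can be sketched like this (the disk group "datadg", volumes "datavol"/"logvol", and mount point /data are hypothetical names; see vxvset(1M) and the multi-volume chapter of the FS admin guide for the exact procedure on your version):

```shell
# 1. Build a volume set holding the data volume plus a dedicated log volume
vxvset -g datadg make datavset datavol
vxvset -g datadg addvol datavset logvol

# 2. Create a VxFS filesystem on the volume set and mount it
mkfs -F vxfs /dev/vx/rdsk/datadg/datavset
mount -F vxfs /dev/vx/dsk/datadg/datavset /data

# 3. Define an allocation policy so the intent log and metadata land on
#    logvol while file data stays on datavol (see fsapadm(1M) for the
#    exact invocation; it differs between 4.x and 5.0)
```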
> 
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of [EMAIL PROTECTED]
> Sent: Wednesday, November 08, 2006 11:09 AM
> To: veritas-vx@mailman.eng.auburn.edu
> Subject: Re: [Veritas-vx] Recommended mount options for VxFS
> 
> Par,
> 
> by "logsize" you mean "logiosize" from mount_vxfs manual ?
> 
> Regards
> przemol
> 
> On Mon, Oct 30, 2006 at 09:26:10AM -0800, Par Botes wrote:
> > Ok,
> > 
> > You may want to consider these then. These are for 5.0 on Solaris (but
> > should be easily transferable to most OSes and even previous
> > versions). Note that these are generic guidelines. Depending on memory
> > pressure, application IO profile and a host of other things, these may
> > or may not be good tunables. I'd consider them. For databases there is
> > nothing that beats the ODM interface (or the QuickIO interface for
> > non-Oracle workloads); this is due to the fact that ODM/QIO also
> > alters the filesystem locking behaviour besides the cache behaviour.
> > There are some other tricks we employ as well to accelerate the
> > ODM/QIO IO path.
> > 
> > Anyway, here is a generic write-up on the first set of tunables/mount
> > options I'd consider for VxFS/VxVM for three workloads:
> > DSS/OLTP/file serving. I would wait before doing anything with the
> > vxtunefs read_pref and nstreams etc. until I've analyzed the impact of
> > these tunables, since the vxtunefs parameters are autotuned (and
> > autotuned really well).
> > 
> > For DMP: use the balanced-path algorithm for A/A type arrays;
> > consider using a failover group for A/P arrays. Balanced path gives
> > the best performance for almost all workloads.
> > 
> > 
> > DSS/HPC
> > =======
> > VxFS mount options:
> > Log mode = delaylog (best performance and enables data integrity)
> > Logsize = medium (doesn't need to be very much larger than default, or
> > keep the default)
> > Mincache = direct (turn off in-kernel buffering)
> > Convosync = direct (delay inode updates for writes to files)
> > 
> > Tunefs:
> > read_ahead=1 (traditional VxFS read-ahead enabled)
> > fcl_winterval=6000 (50-minute update interval for FCL)
> > 
> > VxVM kernel tuning (/etc/system):
> > set vxio:vol_maxio=32768 (max IO size = 16 MB)
> > 
> > 
> > OLTP
> > ====
> > VxFS mount options:
> > Log mode = delaylog (best performance and enables data integrity)
> > Logsize = medium (doesn't need to be very much larger than default, or
> > keep the default)
> > Mincache = direct (turn off in-kernel buffering)
> > Convosync = direct (delay inode updates for writes to files)
> > 
> > Tunefs:
> > read_ahead=0 (turn off read-ahead)
> > fcl_winterval=6000 (50-minute update interval for FCL)
> > 
> > VxVM kernel tuning (/etc/system):
> > set vxio:vol_maxio=32768 (max IO size = 16 MB)
> > 
> > 
> > File serving, lots of smallish files (like an NFS home directory server)
> > ========================================================================
> > Log mode = delaylog (best performance and enables data integrity)
> > Logsize = large (make the log big and use a separate device for the log
> > using volume sets)
> > 
> > Tunefs:
> > read_ahead=0 (turn off read-ahead)
> > fcl_winterval=6000 (50-minute update interval for FCL)
> > 
> > Kernel tuning (/etc/system):
> > set vxfs:vxfs_ninode= (this value should be viewed as 1 KB units; set
> > to no more than 50% of memory, but make it big)
> > set ncsize= (this should be 80% of vxfs_ninode; the default is derived
> > from maxusers)
> > 
> > 
> > Other
> > =====
> > On some platforms (AIX) you can reduce vx_bc_bufhwm and
> > vx_vmm_buf_count to reduce the size of the FS cache in the kernel; on
> > Solaris this is done dynamically and not directly tuned.
> > You may also want to consider tuning the fsflusher if pid 3 uses a lot
> > of CPU (but that's a Sun tunable and not really a VxFS/VxVM tunable).
> > 
> > 
> > Now, the caveats... These tunables try to optimize the performance by
> > either changing the way memory is consumed or a
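Par's recipes above translate into commands along these lines (Solaris syntax; the mount point /ora and the volume path are hypothetical examples, not values from the thread):

```shell
# OLTP-style mount: delayed logging plus direct I/O
mount -F vxfs -o delaylog,mincache=direct,convosync=direct \
    /dev/vx/dsk/datadg/oravol /ora

# Per-filesystem tunables, set on the mounted filesystem
vxtunefs -o read_ahead=0 /ora         # OLTP: read-ahead off (use 1 for DSS)
vxtunefs -o fcl_winterval=6000 /ora   # widen the FCL update interval

# /etc/system entries, applied at next reboot (values are illustrative)
# set vxio:vol_maxio=32768        max I/O size
# set vxfs:vxfs_ninode=1000000    file serving: large inode cache (1 KB units)
# set ncsize=800000               roughly 80% of vxfs_ninode
```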

Re: [Veritas-vx] Veritas-vx Digest, Vol 7, Issue 11

2006-11-15 Thread Mrutyunjaya Dash

Hi Vijay,

Now things are looking better: I was able to add the LUNs as VxVM disks,
and they are all now recognized as enclosure disks. I have just one
remaining doubt: after installing the package for the ASL and APM
(VRTSHDS-AMS_9500V_SunOSsparc_vm41MP1_280270.tar), the APM is still
Not-Active even though the enclosure is added. Could you please help me
pinpoint why the dmphdsalua APM is not getting activated? I did run the
vxdctl enable command after adding the package.

bash-3.00# vxdmpadm listapm all

Module Name     APM Name        APM Version  Array Types   State
================================================================
dmpaa           dmpaa           1            A/A           Active
dmpap           dmpap           1            A/P           Active
dmpap           dmpap           1            A/P-C         Active
dmpapf          dmpapf          1            A/PF-VERITAS  Not-Active
dmpapf          dmpapf          1            A/PF-T3PLUS   Not-Active
dmpapg          dmpapg          1            A/PG          Not-Active
dmpapg          dmpapg          1            A/PG-C        Not-Active
dmpjbod         dmpjbod         1            Disk          Active
dmpjbod         dmpjbod         1            APdisk        Active
dmphdsalua      dmphdsalua      1            A/A-A-HDS     Not-Active

bash-3.00# vxdmpadm listenclosure all

ENCLR_NAME   ENCLR_TYPE   ENCLR_SNO   STATUS      ARRAY_TYPE
============================================================
Disk         Disk         DISKS       CONNECTED   Disk
AMS_WMS0     AMS_WMS      75040421    CONNECTED   A/A-A-HDS
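When an APM shows Not-Active like this, a few checks worth running (the module name is taken from the output above; commands per vxdmpadm(1M) and standard Solaris tools):

```shell
# Show details for the suspect APM only
vxdmpadm listapm dmphdsalua

# Confirm the APM kernel module is actually loaded into the kernel
modinfo | grep dmphdsalua

# Re-run device discovery after installing an ASL/APM
vxdctl enable
```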

 

Regards,

Dash

 

From: vijay vijay [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, November 14, 2006 8:23 PM
To: Mrutyunjaya Dash
Subject: RE: [Veritas-vx] Veritas-vx Digest, Vol 7, Issue 11



 

You don't have to label both disks, just one.

I have more than 200 servers here with the same setup you are trying to build.

If you follow my previous email you will not see any issues.

Good luck,

Vijay

Subject: RE: [Veritas-vx] Veritas-vx Digest, Vol 7, Issue 11
Date: Mon, 13 Nov 2006 21:24:51 +0530
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; veritas-vx@mailman.eng.auburn.edu



Hi Vijay,

 

Let me brief you on the environment and what I intend to do. The goal is
to set up a Solaris 10 server with a Hitachi SAN and VxVM. On the
connectivity side, the Hitachi SAN box has two controllers connected to
two Brocade switches, and the Sun server has two HBAs, one connected to
each of the same switches, to provide redundancy at the switch, HBA, and
SAN-controller level. So it is obvious that we need multipathing
software to manage the setup; for this we will be using Veritas DMP.

I have now installed Solaris 10 with Veritas Foundation Suite, created
the required LUNs, and attached each LUN to both SAN controllers so that
multipathing works. The format command now detects two disks for each
LUN. The idea is for the two paths to each LUN to appear as a single
disk in VxVM.

As each LUN is identified as two disks by the format command, do you
suggest labeling both of the disks?

I would appreciate your suggestions to make this workable for me.

 

Regards,

Dash

 

From: vijay vijay [mailto:[EMAIL PROTECTED]]
Sent: Monday, November 13, 2006 9:06 PM
To: Mrutyunjaya Dash; veritas-vx@mailman.eng.auburn.edu
Subject: RE: [Veritas-vx] Veritas-vx Digest, Vol 7, Issue 11



 


Are you running Solaris, or some other UNIX (HP-UX, AIX)?

It looks to me like the system sees the disk but it is not active. It
seems to me that you need to run vxdiskadm or vxdiskadd.

Have you checked the disk by running the format command to make sure the
disk is labeled? If not, you have to label the new LUN you attached.

Hope that helps you troubleshoot your problems.

Cheers,
Vijay

Subject: RE: [Veritas-vx] Veritas-vx Digest, Vol 7, Issue 11
Date: Sun, 12 Nov 2006 15:35:00 +0530
From: [EMAIL PROTECTED]
To: veritas-vx@mailman.eng.auburn.edu
CC: [EMAIL PROTECTED]



Hi Vijay,

I appreciate your reply. As you suggested, I installed the array support
library: I downloaded the appropriate ASL package
VRTSHTC-AMSWMS_solvm40_278145.tar. After adding the package I executed
vxconfigd -k and vxdctl enable. But the output of vxdmpadm listapm all
still shows Not-Active for the particular HDS module.
 

bash-3.00# vxdmpadm listapm all

Module Name     APM Name        APM Version  Array Types   State
================================================================
dmpaa           dmpaa           1            A/A           Active
dmpap           dmpap           1            A/P           Active
dmpap           dmp