This is probably because the file system is almost full. Before the file
system expands, it needs free space for the metadata. Growing the volume
by 500 MB first provides plenty of metadata space for a larger grow.
You could then do a defrag; fsadm -E and -D will report on the degree of
fragmentation.
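For example, assuming a disk group called datadg, a volume datavol and a mount
point /data (all made-up names; the fsadm path varies by platform):
### grow the volume and file system by 500 MB to leave metadata headroom
vxresize -g datadg datavol +500m
### report extent and directory fragmentation
fsadm -F vxfs -E -D /data
### defragment directories and extents
fsadm -F vxfs -d -e /data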
vxevac mirrors the data from one disk to another. You can run multiple mirror
operations simultaneously. You could also choose to mirror volumes to new disks
and then remove the plexes from the old disks. Or, with VEA, you can look at the
disk group and select the Disk View. From here you can drag a
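As a rough sketch from the command line (datadg, vol01, disk01/disk02 and the
plex name are placeholders for your own objects):
### evacuate the data from disk01 onto disk02
vxevac -g datadg disk01 disk02
### or mirror the volume onto the new disk, then dissociate and remove the old plex
vxassist -g datadg mirror vol01 disk02
vxplex -g datadg -o rm dis vol01-01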
The minimum configuration for FileStore is 4 NICs, 2 for private
interconnect (including a private IP for PXE boot of the second node to
get it installed) and 2 for the client network for access.
On 2/23/12 11:10 AM, "Colin Yemm" wrote:
>Sergey,
>
>Two separate VLANs (instead of crossover cables
The section of the Bundled Agents Reference Guide that shows Failback has a note
above it that these attributes are optional for Base mode. You are using mpathd
mode, so the Failback setting is ignored. Your issue is with in.mpathd, not MultiNICB.
nfs10 in.mpathd[9033]: [ID 620804 daemon.error] Successfully failed back to N
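If you want to confirm which mode the resource is running in, check its
UseMpathd attribute (mnicb_res is a made-up resource name):
hares -value mnicb_res UseMpathd
### 1 = mpathd mode (Failback is ignored), 0 = Base mode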
IO Fencing works with VCS. Even without CFS/RAC it is the best protection
against split-brain. You CAN import a DG on multiple systems by using vxdg -C
import to clear the host ID lock in the private region.
Alternatives include the preonline_ipc trigger (in sample_triggers, copy to
triggers, rename p
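For reference, the forced import looks like this (appdg is a made-up disk group
name); this double import is exactly what fencing keys protect against:
### clear the host ID lock left in the private region and import
vxdg -C import appdg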
Having taught Global Clustering and its predecessor, I would prefer to
see WAC have the ability to failover from one IP to another or make
multiple connections over 2 or 3 IPs for higher redundancy. Keep in mind
that DR is not HA. If the WAC loses the connection, you would get
notification and can
Look at the Resource Type attributes: hatype -display Application
First I think there is a difference in the 4.x and 5.x Agent framework
on which the Agents are compiled, so I don't know if copying an agent
from 5.0 to 4.x will work.
Second, I don't know if it is legal from the perspective of the
licensing (I am definitely not a lawyer).
Third, you would
The steward's purpose is to try to ping the remote cluster just like the
Icmp and IcmpS heartbeats. The only method of getting status is for the
WAC process to have a connection. The steward is sent an IP and tries to
ping the remote cluster, if it fails, it reports that it cannot ping the
remote c
Use the "haclus -value ReadOnly" command to determine if it is open
(ReadOnly=0) or closed (ReadOnly=1)
If you set the haclus option for BackupInterval, it will create a
main.backup when open. The purpose is that an open config will not stop
a reboot.
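For example (BackupInterval is in minutes; 5 is just an illustration, and the
config must be open to modify it):
haclus -value ReadOnly
haconf -makerw
haclus -modify BackupInterval 5
haconf -dump -makero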
My students often cite the listener as a process that will fail for no
apparent reason. The fix for this is to make it non-critical and to use
the RestartLimit to allow the agent to restart the Listener process.
When an agent restarts a resource it will log a message and if you have
Notification se
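Something along these lines should do it (listener_res is a placeholder
resource name; RestartLimit is a static attribute, hence the override):
hares -modify listener_res Critical 0
hares -override listener_res RestartLimit
hares -modify listener_res RestartLimit 2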
224062569.93.
xxx_DG enabled,cds 1224062672.105.
xxx_DG enabled,cds 1224062491.85.
If you have a "?" in the GUI, then it cannot probe the resource on one
system or the other. It will not import on either until it is probed on
both. This is to avoid a concurrency violation.
Hold the cursor on the resource and a pop-up box should show the status
so you can see where it is not probed.
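Once you know which system has not probed it, you can push a probe by hand
(resource and system names here are placeholders):
hares -probe app_dg -sys sysb
hastatus -sum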
LUNs are really not "cluster aware", clusters, through Storage
Foundation, are LUN aware.
Zone the LUN to be seen by both systems.
Run devfsadm (or the Solaris 10 equivalent, if it has changed) on both
systems. You should now be able to see the LUN in the format command
output.
Run "vxdctl enable" so VxVM on both systems discovers the new LUN.
Are you getting a "stale file handle" at the client system on failover
of the NFS service group? If so this is most likely because the major
device numbers are different on the two servers for Vx drivers. The
major device number is used by NFS to construct the file handle for the
client. Look up t
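To compare, check the vxio major number on both servers and the device itself
(datadg/vol01 are placeholders):
grep -w vxio /etc/name_to_major
ls -lL /dev/vx/dsk/datadg/vol01   ### shows the major,minor in use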
The following will show persistent or temporary or both
### For persistent freeze
train1 !# hagrp -list Frozen=1
sg2 train1
sg2 train2
train1 !# hagrp -display -attribute Frozen
#Group Attribute System Value
ClusterService F
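Temporary freezes are tracked in TFrozen, so the same pattern applies:
### For temporary freeze
hagrp -list TFrozen=1
hagrp -display -attribute TFrozen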
Out of curiosity, what is the difference between the "good" types.cf and
what it saves? Did you manually edit the "good" types.cf to make
changes?
If you edited the types.cf, you could have added extra spaces or put in
a setting that is already a default and these would get stripped out on
a save.
Have you read the information in the VCS Users Guide on Zones? There is
a description of how to set it up. VCS runs in the global zone. Some
resources have the ability to start processes or an IP, for example, in a
zone (these resources have a "container name" in their attributes).
Resources such as NIC
Nothing is missing, that is all there should be. The fact that there is
no online, offline, monitor or clean indicates they are built into the
NotifierAgent binary. You will see the same on some others.
no effect on the hagui (java) on a local system.
No problem. In your case, shutting down CSG will stop notification. It
has no effect on VCS.
ClusterService is a group that "belongs" to the cluster itself. In most
instances you will find it the way you describe. It is also a great
place for the notifier resource. Normally it is not required. When using
Global Clustering (connecting 2 or more clusters for wide area
failover), the Wide Ar
The order of auto starting is based on the AutoStartPolicy which
defaults to "Order". Therfore the first one in the list would be the
preferred system for AutoStart.
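To check or change it (sg1 and the system names are placeholders; the config
must be writable to modify):
hagrp -value sg1 AutoStartPolicy
hagrp -value sg1 AutoStartList
hagrp -modify sg1 AutoStartList sysa sysb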
to run on other clusters.
It would be no problem to create
It indicates you did not close and save the cluster configuration after
making modifications. It is a warning. If you close and save the config,
it goes away.
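Closing and saving it is simply:
haconf -dump -makero
### the .stale marker under /etc/VRTSvcs/conf/config goes away once the config is saved and closed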
Really doesn't matter. One could have priority=12 and the other priority=31.
This could be caused by manually editing the main.cf and creating your
own service groups.
I have seen it in the hagui when sys1 is added first (priority=0), then
sys2 is added (priority=1), then sys1 is removed from the
FaultOnMonitorTimeouts is obviously set to 4 and you are having more than
4 consecutive monitor timeouts. That is 4 minutes of non-monitoring. You could
increase FaultOnMonitorTimeouts or the monitor interval or both.
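For example, to loosen it for just the one resource (ora_db and the Oracle type
are placeholders for whatever is timing out; the attributes are static, hence
the override):
hares -override ora_db FaultOnMonitorTimeouts
hares -modify ora_db FaultOnMonitorTimeouts 8
### or raise the monitor interval for every resource of that type
hatype -modify Oracle MonitorInterval 120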
The Wide Area failover in VCS is called Global Clustering. In 4.x it is
enabled (already installed) by adding the GCO key. In 5.x it is part of
the product and you either buy a HA or HA/DR license. The HA/DR license
includes Global Clustering.
It requires 2 clusters (they could be single node clu
You need to run the vcs install as "-installonly" so it will not try to
create new configuration files.
As far as the .cf files, they will get copied over from the other
cluster node when it joins the cluster.
If you have custom agents or triggers they need to be copied from the
other node in the
Edit /etc/llttab on both systems, change the ce4 entry to ce3, and restart
VCS, GAB, and LLT.
you can use
#lltstat -nvv |more
to see which port is not connected.
Note that it is not -n w; it is -nvv, two v's with no spaces between them.
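For reference, the link lines in /etc/llttab look roughly like this on Solaris
(device paths will differ on your hardware):
### before
link ce4 /dev/ce:4 - ether - -
### after
link ce3 /dev/ce:3 - ether - -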
If you "fixed" the cable issue, then LLT heartbeats will start to travel
over the line and the jeopardy will clear itself in less than a minute.
That is incorrect. Offlining a critical resource with VCS does not cause
failover. However, if his storage resources are critical and he does not offline
them before the SAN is shut down, the group will fault and fail over and then
fault again because the resources cannot be brought online because t
To mirror volumes you must be dealing with a relatively small distance,
such as less than 80 km. For these distances, why not use a single cluster
called a "stretch" or "campus" cluster? In SF 5.0 there is the concept
of "site awareness" so that VM is aware of the two sites and if a volume
at the rem
VCS handles all of your questions.
When VCS starts, it calls on the agents to probe their resources. If the
resource is online, then it marks it "online". If a service group is
already up, no problem. When all probes are done, it evaluates what is
offline and then brings up the offline service
Look at the options to IP resource
There are definite advantages to having the notifier in the
ClusterService Group. First, notification is a cluster-level function.
Second, the ClusterService group has "special powers," like a super group. Because of its
importance in Global Clusters, CSG is the first group online (your
notifications get delivered e
One other reason for not Enabled: some resources run an "Open" entry
point when you enable them to create some structure in memory. If all
attributes are not set correctly, such as when creating resources from
the command line, the structure would be incorrectly built and then
would not come online.
Versions of VCS/GAB/LLT must be the same within a cluster.
Yes it is supported. VVR supports many to one (many hosts replicating to
a single host with multiple secondaries). We also support hosts having
both primary and secondary RVGs simultaneously, such as H1 has primary
for RVG1 and secondary for RVG2 and host H2 is primary for RVG2 and
secondary for RVG1.
That would be correct behavior. Child always wins. In an offline local,
the child is the production group and the parent is the test group. If
the production group needs to fail to the server where the parent is
running, then the parent is shut down to make way for the child.
John Cronin is correc
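The offline local dependency in question is set up something like this (test_sg
is the parent, prod_sg the child; both names are made up):
hagrp -link test_sg prod_sg offline local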
Multiple protocols on the same interface is not unusual. In the days of Windows
NT 3.51, systems had IPX/SPX, NetBEUI and TCP all on the same port to
communicate with the different clients typically found in those days (this was
before Microsoft realized that TCP was not going to die, the Intern
Newer versions of VCS are kinder and better than 1.3. Node number can be
duplicate, but cluster IDs need to be unique if they will be out on the same
network (remember LLT doesn't get routed, so it isn't going everywhere).
With current VCS, if LLT spots a duplicate, it will announce that it
If you remove ClusterService you will lose the Notifier resource.
If you have removed the node from all service groups, SystemList,
AutoStartList, etc (grep the main.cf) then you can delete it from the
cluster. It will complain if it is still referenced.
There is no need to change node numb
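A sketch of the cleanup, with sg1 and sysb as placeholders:
grep -w sysb /etc/VRTSvcs/conf/config/main.cf
haconf -makerw
hagrp -modify sg1 SystemList -delete sysb
hagrp -modify sg1 AutoStartList -delete sysb
hasys -delete sysb
haconf -dump -makero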
From the 4.1 VCS install Guide:
Supported Software
◆ Solaris 8, 9, and 10 (32-bit and 64-bit) operating systems
The two low-pri links are on different IP subnets. Since LLT is not
routable, the question is how your network reaching those two ports is
wired. If access between ce0 and qfe1 is through a router, LLT will not
work.
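For comparison, a low-pri entry in /etc/llttab looks like this (interface names
are placeholders); both ends must be on the same subnet/VLAN with no router in
between:
link-lowpri ce0 /dev/ce:0 - ether - -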
Yes you can share hubs and VLANs. In Symantec classrooms, we have 8
clusters using one hub for heartbeat 1 and another hub for heartbeat 2.