Just FYI, the following links are good resources for common VCS commands:
https://sort.symantec.com/public/documents/vcs/6.0/aix/productguides/html/vcs_admin/apbs02.htm
http://www.datadisk.co.uk/html_docs/veritas/veritas_cluster_cs.htm
Cheers!
Eric
From: Lyndonneu Mei mailto:lyndonne...@gmail.co
Just to add to what my good friend Colin said...
The message is saying that the NBU agent's online (start) attempt failed, and
that's why clean is being called. This happens when VCS attempts to bring up a
resource (NBU in this case) and determines that the attempt wasn't successful.
So anoth
The configuration you’re considering – running your cluster interconnects over
two separate VLANs – is actually our preferred and recommended method, even
when deploying a simple 2-node cluster. While using direct connections between
cluster nodes is simple and convenient, it becomes problematic.
For LLT, there are two configuration files: /etc/llttab and
/etc/llthosts.
Make sure you've made the appropriate changes to both files.
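For reference, a minimal sketch of the two files (node names, cluster ID, and NIC device paths here are made up; substitute your own):

```text
# /etc/llthosts -- maps LLT node IDs to node names; must match on all nodes
0 nodeA
1 nodeB

# /etc/llttab -- per-node file; set-node must name the local host
set-node nodeA
set-cluster 100
link llt0 /dev/e1000g:0 - ether - -
link llt1 /dev/e1000g:1 - ether - -
```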
From: veritas-ha-boun...@mailman.eng.auburn.edu
[mailto:veritas-ha-boun...@mailman.eng.auburn.edu] On Behalf Of amit
Sent: Sunday, March 15, 2009 2:07 PM
Will VCS be managing the non-global zone so that it runs on either host?
If so, the zone needs to be configured on each host. If the non-global
zone will only be running on one of the servers independent of the
cluster, then you're fine.
From: veritas-ha-boun...@mailman.eng.auburn.edu
[mailto:
Not sure where you're at on fixing this, but make sure the VxVM volumes
are actually being stopped before attempting to offline (deport) the
Disk Group.
This typically happens in VCS when you don't have the StartVolumes and
StopVolumes attributes set to true on the DiskGroup resource.
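For illustration, those attributes live on the DiskGroup resource in main.cf; a sketch with made-up resource and disk group names:

```text
DiskGroup ifmx_dg (
    DiskGroup = ifmxdg
    StartVolumes = 1
    StopVolumes = 1
)
```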
My suggestion would be to upgrade SF and VCS first, the reason being that the
later versions of VCS support Solaris 8, 9, and 10 within the same cluster.
This way, you can do a rolling upgrade of the OS afterwards by upgrading first
the idle node, then switch the app service group to the upgraded node.
Tom's got a point...I just now looked at the monitor script, and you
exit with a 0. The monitor should return 100 if it determines the app
is offline and 110 if it's determined to be up.
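As a sketch of that exit-code contract (the process name and function are hypothetical, not the poster's actual script):

```shell
#!/bin/sh
# Sketch of a VCS monitor entry point's exit-code contract:
# 110 = resource online, 100 = resource offline.
# "myappd" below is a placeholder process name.
monitor_app() {
    if pgrep -x "$1" >/dev/null 2>&1; then
        return 110    # process found: report online
    else
        return 100    # process not found: report offline
    fi
}
# A real monitor script would end with: monitor_app myappd; exit $?
```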
Eric
From: veritas-ha-boun...@mailman.eng.auburn.edu
[mailto:veritas-ha-boun...@mailman.eng.auburn.edu]
A "probe" is simply an initial run of the monitor entry point (your
monitor script) when the resource first goes online. If your monitor
script otherwise works, it's possible the initial probe is running too
soon after the online entry point when the resource isn't yet fully
online.
Eric
Right...I'd forgotten about the -value option. That's obviously an
easier method. :-)
Eric
From: Philippe Belliard
Sent: Monday, December 01, 2008 4:47 PM
To: Eric Hennessey; [EMAIL PROTECTED];
veritas-ha@mailman.eng.auburn.edu
Subject: RE: [Veritas-ha] Checking whether a v5
Try haclus -display (on any node) and grep for the ReadOnly attribute.
A value of 1 indicates a closed (read-only) config while a value of 0
indicates an open (read-write) state.
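In command form (run on any node in the cluster):

```text
haclus -display | grep ReadOnly
haclus -value ReadOnly     # prints just the value: 1 (read-only) or 0 (read-write)
```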
Eric
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
[EMAIL PROTECTED]
Sent: Monday, December 01,
Not so...VCS 4.1 was released specifically to support Solaris 10.
Eric
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of ssloh
Sent: Sunday, November 09, 2008 10:11 PM
To: [EMAIL PROTECTED]; Veritas-ha@mailman.eng.auburn.edu
Subject: Re: [Veritas-ha] Upgrad
Thanks for clarifying that, Annette.
Eric
-Original Message-
From: Annette Benz
Sent: Tuesday, September 30, 2008 12:32 AM
To: Eric Hennessey; Shashi Kanth Boddula;
veritas-ha@mailman.eng.auburn.edu
Subject: RE: [Veritas-ha] ASM instance & VCS group problem
Hi,
Please note that
I think you need to open a support case on this...it almost looks as if
your version of the Oracle agent for VCS is having a hard time parsing
the "+" sign in the instance name.
Eric
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Shashi
Kanth Boddula
The paper doesn't cover zone
attach/detach, but it'll give you a good overview of how we work with
zones.
http://eval.symantec.com/mktginfo/enterprise/white_papers/ent-whitepaper
_implementing_solaris_zones_06-2007.en-us.pdf
Cheers!
Eric
____
Eric Hennessey
Director,
If reinstalling on that one node doesn't work, you should open up a
support case.
Eric
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of i man
Sent: Wednesday, July 23, 2008 5:48 AM
To: Gene Henriksen
Cc: veritas-ha@mailman.eng.auburn.edu
Subject: Re: [Veritas-ha] Missing Files
The latest version of our zone agent supports zone attach-detach, which
isn't reflected in that paper. The essential best practices outlined in
that paper, though, remain the same.
Eric
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Rodolfo
Bonnin
Sent: Thursday, June 26, 2
Obviously, I
would just populate it with any and all systems you want to be able to
autostart the service group.
Eric
From: Jon Price [mailto:[EMAIL PROTECTED]
Sent: Wednesday, June 25, 2008 5:19 PM
To: Eric Hennessey; Veritas-ha@mailman.eng.auburn.edu
Subject: Re: [Veritas-ha] AutoStartLi
According to the docs, if the system identified in AutoStartList isn't
up when all others are up after a full cluster start, the SG remains
offline. So if you really don't care which system hosts a given SG on
cluster start, you can omit the AutoStartList attribute.
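For illustration, the relevant main.cf stanza might look like this (group and node names are made up):

```text
group app_sg (
    SystemList = { nodeA = 0, nodeB = 1 }
    AutoStartList = { nodeA, nodeB }
)
```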
Eric
From: [EMAIL PROTE
Just to add to this discussion...
If you enable I/O fencing and set the DiskGroup attribute PanicSystemOnDGLoss
to true, you'll get the desired failover behavior. The behavior you're seeing
is by design and is intended to favor data integrity over availability.
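As a sketch, with a made-up resource name:

```text
DiskGroup ora_dg (
    DiskGroup = oradg
    PanicSystemOnDGLoss = 1
)
```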
Eric
_
In short, VCS has no real dependency on time of day, so you won't run
into any issues.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Joshua
Fielden
Sent: Friday, March 07, 2008 11:05 AM
To: Rodolfo Bonnin; veritas-ha@mailman.eng.auburn.edu
Subject: Re:
This is perfectly do-able with VCS 4.1, since 4.1 supports Solaris 8, 9
and 10.
The general procedure at a high level is:
1. Freeze all service groups
2. Upgrade an idle node in the cluster
3. Once the upgraded node reboots and rejoins the cluster, unfreeze the
service groups running on one of
Sending again to entire list...
The design you describe is a classic stretch cluster (also referred to
as metro or campus cluster) configuration.
Implementing I/O fencing is always a good idea for an extra level of
protection against data corruption in a cluster partition (split-brain)
even
If the SAN is going to have maintenance done to it, you're more than
likely going to want to take the storage offline, which will likely
impact the web server anyway, assuming its storage is also on the SAN.
Best to take the whole service group offline, as Jim said.
-Original Message-
Otherwise the cluster join attempt will fail.
In addition, when working with CFS and/or SF-RAC clusters the
requirements become more stringent as there's an increased risk of
failure when running disparate OS versions and patch levels.
Cheers!
Eric
-Original Message-----
From: Eric Hennesse
and that's our only
requirement for a cluster member to properly join a VCS cluster. The
cluster join process is strictly governed by the version of VCS, not by
the OS version.
Hope this helps!
Eric
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Frida
Available from Sun. See the following site:
http://sunsolve.sun.com
But we've supported mixed Solaris versions and patch levels for several
releases of VCS.
Eric
-Original Message-
From: Jim Senicka
Sent: Friday, December 07, 2007 8:26 PM
To: '[EMAIL PROTECTED]'; Eric He
E won't.
:-)
Cheers!
Eric
From: upen [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 06, 2007 7:43 PM
To: Eric Hennessey
Cc: veritas-ha@mailman.eng.auburn.edu
Subject: Re: [Veritas-ha] best way for patching of cluster servers
Thanks Eric
One question,
Does Veritas/symant
The typical approach to applying OS patches in a clustered environment
is to patch an idle server, let it reboot and rejoin the cluster, and
make sure it's running OK. If it is, use the cluster software to switch
application(s) from an active server to the one you just patched, and if
the app comes up cleanly, repeat the process on the remaining node(s).
That's correct...VCS won't restart services that are already running
when it starts up.
It's always a good idea when doing this type of thing, though, to
persistently freeze all service groups first, then unfreeze them after
the cluster is running.
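The freeze/unfreeze sequence, sketched with a placeholder group name:

```text
haconf -makerw
hagrp -freeze app_sg -persistent     # repeat for each service group
haconf -dump -makero
# ...do the maintenance and reboot...
haconf -makerw
hagrp -unfreeze app_sg -persistent
haconf -dump -makero
```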
From: [E
r 05, 2007 10:34 AM
To: [EMAIL PROTECTED]
Cc: veritas-ha@mailman.eng.auburn.edu; Eric Hennessey
Subject: Re: [Veritas-ha] Can I use Solaris 10 / VCS connected to a
shared Firewire Drive?
Are you trying to do Solaris on X86 or SPARC? I tried to use firewire
on SPARC just to play around a year
VCS with RAC requires SCSI-3 capable disks for I/O fencing, so this
wouldn't work.
From: F.A [mailto:[EMAIL PROTECTED]
Sent: Sunday, November 04, 2007 6:23 PM
To: veritas-ha@mailman.eng.auburn.edu; Eric Hennessey
Subject: Can I use Solaris 10 / VCS conn
ZFS has been implemented with VCS by at least one systems integrator in
the past as a custom agent.
We'll be releasing an agent for ZFS in the coming months, but I'm unsure
of the exact release date.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Nathan
This question was also posted to the Symantec forums, and I responded
there:
It seems to me that the fact you're migrating to a new server makes this
easy:
1. Build out the new servers using the target OS version/patch level.
2. Install appropriate Oracle s/w on the new server.
3. Install target version/
Better yet, you should not issue this "error" at all.
Think of the monitor entry point as non-judgmental. It merely reports whether
a resource is online or offline on a given node. If $HTTPDCONF is not present,
that's not an error condition on a node where the service group isn't presently
online.
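A sketch of that idea (the path and process name are illustrative, not the poster's actual script):

```shell
#!/bin/sh
# Sketch: a monitor entry point treats a missing config file as
# "offline" (code 100), never as an error. Names are placeholders.
monitor_httpd() {
    conf="$1"
    # No config file on this node: the resource simply isn't online here.
    [ -f "$conf" ] || return 100
    if pgrep -x httpd >/dev/null 2>&1; then
        return 110    # online
    else
        return 100    # offline
    fi
}
```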
What's the RestartLimit attribute set to? The default value is zero (no
restart).
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
Anderson, Ryan C (US SSA)
Sent: Wednesday, July 25, 2007 2:30 PM
To: veritas-ha@mailman.eng.auburn.edu
Subject: [Verit
As long as at least one plex in the volume is available, VCS will detect
no change in the state of any configured Volume or Mount resources.
That said, it's always best practice to freeze a service group when
mucking about with any resources under that service group's control.
Eric
-Origina
Hi Sohail,
I know of one stretch/metro/campus cluster using VxVM mirroring at 100Km
(~60 miles). Most of the stretch cluster distances out there are much
less (< 50Km), but the distance you're considering is well within
limits.
Eric
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[
How heavily loaded is this server?
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Anand
Ganesh
Sent: Tuesday, June 12, 2007 1:54 PM
To: Tom Riemer; veritas-ha@mailman.eng.auburn.edu
Subject: Re: [Veritas-ha] Odd behavior from DiskGroup monitor
So
VCS used to do something like this with a version called VCS Traffic
Director.
We decided that it was better to let load balancers do what load
balancers do, and put one in front of a VCS cluster running a parallel
service group.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL
Laszlo and I had an offline discussion of this. He identified the
problem himself: with disabled resources, the service group only goes
to a partial online state, not online. Therefore, the trap doesn't get
issued.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On
And the use of a steward is HIGHLY recommended.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jim
Senicka
Sent: Wednesday, March 14, 2007 10:10 AM
To: Cronin, John S; Pavel A Tsvetkov; Veritas-ha@mailman.eng.auburn.edu
Subject: Re: [Veritas-ha] Veritas
RDCs are supported only with synchronous replication, regardless of the
replication technology used. It doesn't matter whether it's VVR or some
form of array-based replication.
Eric
_
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Pavel A
Tsvetkov
Sent: Tuesday, March 13, 2007
We've implemented VCS on similar blades before (HS21) with just two interfaces.
We dedicated one of the interfaces as a private cluster interconnect, and the
other was used as the public network connection with an LLT lowpri link
configured on it. This satisfies the requirement for two interconnects.
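An /etc/llttab fragment for that layout might look like this (device names are made up):

```text
link        llt0 /dev/bge:0 - ether - -   # dedicated private interconnect
link-lowpri llt1 /dev/bge:1 - ether - -   # public NIC doubling as low-priority link
```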
Not true...VCS 4.1 supports Solaris 8, 9 and 10.
_
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Munish Dhawan
Sent: Sunday, February 18, 2007 11:53 PM
To: Frank, Lutz; veritas-ha@mailman.eng.auburn.edu
Subject: Re: [Veritas-ha] NFSLock-Agent
Frank,
VCS4.1 is not supp
The usual practice for applying patches in a VCS environment is to
simply switch (hagrp -switch) all your service groups to one node, patch
and reboot the now-inactive node, switch all the service groups to the
patched node, then patch the second node.
This way, your only application outages occur during the switchovers.
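Sketched as commands (service group and node names are placeholders):

```text
hagrp -switch app_sg -to nodeA    # drain nodeB (repeat per service group)
# patch and reboot nodeB, let it rejoin the cluster
hagrp -switch app_sg -to nodeB    # drain nodeA
# patch and reboot nodeA
```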
It's not only possible, it's supported.
Eric
_
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Pablo Méndez
Hernández
Sent: Friday, December 22, 2006 2:14 PM
To: veritas-ha@mailman.eng.auburn.edu
Subject: [Veritas-ha] Sharing switch with different cluster-id
Hi manage
Based on the log messages, the agent is doing exactly what it's supposed
to be doing in the event of a network failure on the interfaces.
You say this is happening every day. Is it once a day? Several times
each day?
_
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Ahi
You can do this better with the type-level attribute RestartLimit. Of
course, modifying this attribute modifies it for ALL instances of that
resource type. But starting with VCS 4.x, you can override this
attribute for a specific instance of a given type.
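In command form (the resource name is a placeholder):

```text
hares -override app_res RestartLimit     # make the attribute per-resource
hares -modify app_res RestartLimit 2     # allow two in-place restarts
```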
So if I have multiple Application resour
This looks similar to something I used to see when the EEPROM variable
use_local_mac_address was set to false.
Have you checked that on all nodes?
Alternatively, check and make sure that each NIC from each system is on
its own private network/VLAN.
Eric
-Original Message-
From: [EMAIL
By that, do you mean: is it possible to simulate a resource that's taking
a long time to go online? If so, the answer unfortunately is no.
The simulator will model cluster behavior under the assumption that all
resources are behaving properly. So you can simulate how the cluster
will behave when
In 4.1, the bundled Mount agent supports NFS
mounts.
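For illustration, a Mount resource for an NFS file system might be declared roughly like this (names are made up, and exact attributes vary by VCS version):

```text
Mount nfs_data (
    MountPoint = "/data"
    BlockDevice = "nfsserver:/export/data"
    FSType = nfs
)
```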
Eric
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Evsyukov, Sergey
Sent: Wednesday, November 15, 2006 4:35 AM
To: veritas-ha@mailman.eng.auburn.edu
Subject: [Veritas-ha] NFS mount agent question
Hello
colleagues,
We
While we recommend same OS rev and kernel patch, it's not a
requirement. Differences should be transitional,
though.
For instance, VCS 5.0 supports Solaris 8, 9 and 10.
Therefore, you can mix Solaris 8, 9 and 10 nodes in the same cluster while
you're in the midst of getting all nodes upgraded.
I guess about the only thing having the individual volume
resources gives you is more granular failure detection. If you just
have a DiskGroup resource with StopVolumes/StartVolumes flipped on, and one
volume fails, your Informix resource will ultimately fault, but your DiskGroup
resource w
There's currently no Symantec/Veritas supplied agent for
SVM/SDS. It would require a custom agent.
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Pavel A Tsvetkov
Sent: Wednesday, September 20, 2006 7:25 AM
To: veritas-ha@mailman.eng.auburn.edu
Subject: [Veritas-ha] SVM agent
Odd that hastatus wasn't implemented in the sim as hasim -status. I'll
look into this.
Regarding the "almost instantaneous actions" in the sim...what version
are you running? The latest versions of the sim have introduced an
artificial delay between resources on/offlining.
Eric
-Original
The VCS simulator...learn it, live it, love it.
:-)
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jim Senicka
Sent: Tuesday, September 19, 2006 11:13 AM
To: Steven Sim
Cc: veritas-ha@mailman.eng.auburn.edu
Subject: Re: [Veritas-ha] SG interaction question
comments below
It appears you have the notifier configured correctly. You should open
a support case on this.
Eric
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Stoyan
Angelov
Sent: Friday, September 15, 2006 3:39 PM
To: veritas-ha@mailman.eng.auburn.edu
Subject: [
Make sure the trigger script exists with the name preonline in
/opt/VRTSvcs/bin/triggers on both nodes in the cluster.
Eric
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of [EMAIL PROTECTED]
Sent: Friday, August 25, 2006 1:12 PM
To: veritas-ha@mailman