Issuing the command on one full repository will be enough...

Ruzi
--- "Potkay, Peter M (PLC, IT)"
<[EMAIL PROTECTED]> wrote:
> Mike/Ruzi, do you know if you have to issue the
> command on both your full
> repositories? Or will one be enough?
>
>
> -----Original Message-----
> From: Ruzi R [mailto:[EMAIL PROTECTED]
> Sent: Monday, August 25, 2003 11:36 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Urgent help: Unexpected behavior in the
> Cluster!!!
>
>
> Thanks to everyone who has responded. The problem
> has been resolved.
>
> Just to answer Peter's question: as I said in my
> original email, I had done REFRESH CLUSTER
> REPOS(YES/NO) -- including on the new repositories.
> However, the mainframe did not like the REPOS parm,
> so I issued it without REPOS. The qmgrs on the other
> platforms still showed the CLUSSDRs to the old
> repositories when DISPLAY CLUSQMGR(*) was issued,
> *after* the REFRESH CLUSTER REPOS(YES) had been run.
>
> Anyway, the problem has been resolved by issuing the
> RESET CLUSTER QMNAME(QM1/QM2) ACTION(FORCEREMOVE)
> QUEUES(YES) command on the new full repositories. I
> did not think this command would be necessary,
> because I had removed the full repositories and the
> cluster channels gracefully (i.e. as per the steps
> indicated in the Cluster manual).
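>
> For reference, a sketch of what was run (the cluster
> name CLUS1 below is a placeholder, not from this
> thread; entered in runmqsc on one of the new full
> repositories, e.g. QMA):
>
> ```
> * Hedged sketch -- CLUS1 is a placeholder cluster name.
> * Issued once per removed queue manager, on a full repository:
> RESET CLUSTER(CLUS1) QMNAME(QM1) ACTION(FORCEREMOVE) QUEUES(YES)
> RESET CLUSTER(CLUS1) QMNAME(QM2) ACTION(FORCEREMOVE) QUEUES(YES)
> ```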
>
> Thanks,
>
> Ruzi
>
>
>
> --- "Potkay, Peter M (PLC, IT)"
> <[EMAIL PROTECTED]> wrote:
> > So you would issue this command from the QM that
> > other QMs are still trying to send stuff to. In
> > Ruzi's case, the old QM1 and QM2 queue managers,
> > right? This would tell all other QMs in the
> > cluster to delete any automatic CLUSSNDRs to QM1
> > or QM2?
> >
> >
> > What do you do in the case where perhaps QM1 and
> > QM2 were already deleted? Is there any official
> > way in this case? Perhaps issuing "REFRESH CLUSTER
> > REPOS(YES)" on the QM that had the bad automatic
> > CLUSSNDRs would flush them out? In Ruzi's case,
> > the mainframe QM? (Sorry Ruzi, from your original
> > post it was not clear if you issued this command
> > on the mainframe or not.)
> >
> >
> >
> >
> > -----Original Message-----
> > From: Mike Davidson [mailto:[EMAIL PROTECTED]
> > Sent: Monday, August 25, 2003 9:48 AM
> > To: [EMAIL PROTECTED]
> > Subject: Re: Urgent help: Unexpected behavior in
> > the Cluster!!!
> >
> >
> >
> > I found in my testing that using the RESET command
> > got rid of the automatically-defined channels shown
> > by the DIS CLUSQMGR(*) command. I tried it after
> > reading the "Queue Manager Clusters" manual and,
> > believe it or not, it worked. Here's an excerpt
> > (p. 68):
> >
> > "You might use the RESET CLUSTER command if, for
> > example, a queue manager
> > has been deleted but still has cluster-receiver
> > channels defined to the
> > cluster. Instead of waiting for WebSphere MQ to
> > remove these definitions
> > (which it does automatically) you can issue the
> > RESET CLUSTER command to
> > tidy up sooner. All other queue managers in the
> > cluster are then informed
> > that the queue manager is no longer available."
> >
> > I hope this helps.
> >
> > Mike Davidson
> > TSYS MQ Tech Support
> > [EMAIL PROTECTED]
> >
> >
> >
> >
> > "Potkay, Peter M (PLC, IT)" <[EMAIL PROTECTED]>
> > Sent by: MQSeries List <[EMAIL PROTECTED]>
> >
> > 08/25/2003 08:55 AM
> >
> > Please respond to MQSeries List
> >
> > To:      [EMAIL PROTECTED]
> > cc:
> > Subject: Re: Urgent help: Unexpected behavior in
> > the Cluster!!!
> >
> > I would bet that the mainframe queue manager still
> > has some automatic CLUSSNDRs left over from when
> > they were pointing to QM1 and QM2. Just stopping,
> > deleting, or modifying the manually defined ones
> > does not do anything to the automatic ones.
> >
> > They will even continue to retry even now, hoping
> > that the listener on QM1/QM2 comes back.
> >
> > I know of no graceful way of eliminating automatic
> > CLUSSNDRs. :-(  My biggest pet peeve with
> > clustering. The only way I know how to do it is to
> > completely blow away the repositories in the
> > cluster (partial and full). Maybe someone knows a
> > better way???
> >
> >
> > The one thing I have learned in the past couple of
> > weeks with clustering is never ever just delete
> > something. You always have to uncluster it first
> > (queues, channels), and then delete.
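> >
> > A minimal sketch of that order of operations (the
> > queue name APP.QUEUE is a placeholder, not from
> > this thread):
> >
> > ```
> > * Hedged sketch -- APP.QUEUE is a placeholder object name.
> > * 1. Take the queue out of the cluster first:
> > ALTER QLOCAL(APP.QUEUE) CLUSTER(' ')
> > * 2. Only then delete it:
> > DELETE QLOCAL(APP.QUEUE)
> > ```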
> >
> >
> >
> > -----Original Message-----
> > From: Ruzi R [mailto:[EMAIL PROTECTED]
> > Sent: Monday, August 25, 2003 8:39 AM
> > To: [EMAIL PROTECTED]
> > Subject: Urgent help: Unexpected behavior in the
> > Cluster!!!
> >
> >
> > Hi all,
> >
> > QM1 and QM2 (on W2K, MQ 5.3) were the full
> > repositories for our cluster. I have just replaced
> > them (the full repositories) with QMA and QMB,
> > removing QM1 and QM2 from the cluster. I did this
> > following the instructions given in the MQ Cluster
> > manual. I have done the clean-up of the obsolete
> > channels etc. (e.g. TO.QM1 and TO.QM2). In other
> > words, no cluster member has any CLUSSDR channel
> > pointing to the old repositories, and the old
> > repositories have their cluster channels deleted.
> > I have done REFRESH CLUSTER REPOS(YES/NO) as per
> > the instructions in the manual.
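> >
> > As a sketch, the state of the clean-up can be
> > checked from each member (CLUS1 is a placeholder
> > cluster name, not from this thread):
> >
> > ```
> > * Hedged sketch -- CLUS1 is a placeholder cluster name.
> > * Show which cluster queue managers / CLUSSDRs a member still knows:
> > DISPLAY CLUSQMGR(*) CHANNEL CONNAME
> > * Where the REPOS parameter is accepted, rebuild the local cache:
> > REFRESH CLUSTER(CLUS1) REPOS(YES)
> > ```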
> >
> > As we don't need QM1 and QM2 any longer, I did
> > endmqm on QM1 and QM2, and stopped the listener on
> > both. Did some testing; everything seemed OK in
> > the cluster.
> >
> > Then I had to move on to something else.
> >
> > MQ2T (on OS/390, MQ 5.3) is a member of the
> > cluster. I had modified the procs for MQ2T to pick
> > up the new connames (for QMA and QMB). As part of
> > testing, my colleague stopped MQ2T and re-started
> > it. Apparently, on
>
=== message truncated ===

Instructions for managing your mailing list subscription are provided in
the Listserv General Users Guide available at http://www.lsoft.com
Archive: http://vm.akh-wien.ac.at/MQSeries.archive
