I've done it before on both. My logic was that I wanted both full repositories to be rid of the "rogue" auto-defined channels, and I didn't know for sure whether running the RESET command on one full repository would cause MQ to let the other full repository know of the change/deletion.

Mike Davidson
TSYS MQ Tech Support
[EMAIL PROTECTED]



"Potkay, Peter M (PLC, IT)" <[EMAIL PROTECTED]>
Sent by: MQSeries List <[EMAIL PROTECTED]>

08/25/2003 01:23 PM

Please respond to MQSeries List

       
        To:        [EMAIL PROTECTED]
        cc:        
        Subject:        Re: Urgent help: Unexpected behavior in the Cluster!!!

Mike/Ruzi, do you know if you have to issue the command on both your full
repositories? Or will one be enough?


-----Original Message-----
From: Ruzi R [mailto:[EMAIL PROTECTED]
Sent: Monday, August 25, 2003 11:36 AM
To: [EMAIL PROTECTED]
Subject: Re: Urgent help: Unexpected behavior in the Cluster!!!


Thanks to everyone who has responded. The problem
has been resolved.

Just to answer Peter's question: as I said in my
original email, I had done REFRESH CLUSTER
REPOS(YES/NO) -- including on the new repositories.
However, the mainframe did not like the REPOS parm,
so I issued it without REPOS. The qmgrs on the other
platforms still showed the CLUSSDRs to the old
repositories when I issued DISPLAY CLUSQMGR(*),
"after" the REFRESH CLUSTER REPOS(YES) was issued.
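
Spelled out in runmqsc, the sequence was roughly the
following (a sketch only -- CLUS1 is a stand-in, as the
real cluster name never appears in this thread):

  * On a member qmgr (REPOS was the part the mainframe
  * rejected, so there it ran without the parm):
  REFRESH CLUSTER(CLUS1) REPOS(YES)
  * Then check which cluster-senders the qmgr still
  * knows about:
  DISPLAY CLUSQMGR(*) CHANNEL CONNAME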

Anyway, the problem has been resolved by issuing the
RESET CLUSTER QMNAME(QM1/QM2) ACTION(FORCEREMOVE)
QUEUES(YES) command on the new full repositories. I
did not think this command would be necessary, as I
had removed the full repositories and the cluster
channels gracefully (i.e. as per the steps indicated
in the Cluster manual).
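
For the record, that works out to one RESET per queue
manager, issued on the full repositories (again, CLUS1
is a stand-in for our actual cluster name):

  RESET CLUSTER(CLUS1) QMNAME(QM1) ACTION(FORCEREMOVE) QUEUES(YES)
  RESET CLUSTER(CLUS1) QMNAME(QM2) ACTION(FORCEREMOVE) QUEUES(YES)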

Thanks,

Ruzi



--- "Potkay, Peter M (PLC, IT)"
<[EMAIL PROTECTED]> wrote:
> So you would issue this command from the QM where
> other QMs are still trying
> to send stuff to. In Ruzi's case, the old QM1 and
> QM2 queue managers, right?
> This would tell all other QMs in the cluster to
> delete any automatic
> CLUSSNDRs to QM1 or QM2?
>
>
> What do you do in the case where perhaps QM1 and QM2
> were already deleted?
> Is there any official way in this case? Perhaps
> issuing  "REFRESH CLUSTER
> REPOS(YES)" on the QM that had the bad automatic
> CLUSSNDRs would flush them
> out? In Ruzi's case, the mainframe QM? (Sorry Ruzi,
> from your original post
> it was not clear if you issued this command on the
> mainframe or not).
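>
> As a sketch of what I mean (CLUS1 standing in for
> the cluster name, which I don't know):
>
> * On the QM that holds the stale auto-defined
> * CLUSSNDRs. As I understand it, REPOS(YES) is only
> * accepted on a partial repository, not a full one:
> REFRESH CLUSTER(CLUS1) REPOS(YES)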
>
>
>
>
> -----Original Message-----
> From: Mike Davidson [mailto:[EMAIL PROTECTED]
> Sent: Monday, August 25, 2003 9:48 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Urgent help: Unexpected behavior in the
> Cluster!!!
>
>
>
> I found in my testing that using the RESET command
> got rid of the
> automatically-defined channels from the DIS
> CLUSQMGR(*) output. I tried it
> after reading the "Queue Manager Clusters" manual
> and, believe it or not, it
> worked. Here's an excerpt (p. 68):
>
> "You might use the RESET CLUSTER command if, for
> example, a queue manager
> has been deleted but still has cluster-receiver
> channels defined to the
> cluster. Instead of waiting for WebSphere MQ to
> remove these definitions
> (which it does automatically) you can issue the
> RESET CLUSTER command to
> tidy up sooner. All other queue managers in the
> cluster are then informed
> that the queue manager is no longer available."
>
> I hope this helps.
>
> Mike Davidson
> TSYS MQ Tech Support
> [EMAIL PROTECTED]
>
>
>
>
>         "Potkay, Peter M (PLC, IT)"
> <[EMAIL PROTECTED]>
> Sent by: MQSeries List <[EMAIL PROTECTED]>
>
>
> 08/25/2003 08:55 AM
>
>
> Please respond to MQSeries List
>
>
>
>
>         To:        [EMAIL PROTECTED]
>         cc:
>         Subject:        Re: Urgent help: Unexpected
> behavior in the
> Cluster!!!
>
> I would bet that the mainframe queue manager still
> has some automatic
> CLUSSNDRs leftover from when they were pointing to
> QM1 and QM2. just
> stopping / deleting / or modifying the manually
> defined ones does not do
> anything to the automatic ones.
>
> They will continue to retry even now, hoping
> that the listener on QM1/QM2
> comes back.
>
> I know of no graceful way of eliminating automatic
> CLUSSNDRs. :-(  My
> biggest pet peeve with clustering. The only way I
> know how to do it is to
> completely blow away the repositories in the cluster
> (partial and full).
> Maybe someone knows a better way???
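>
> At least you can see which ones are automatic.
> Something like this shows the DEFTYPE per cluster
> qmgr (CLUSSDRA = auto-defined, CLUSSDR = manually
> defined, CLUSSDRB = both):
>
> DISPLAY CLUSQMGR(*) DEFTYPE CHANNEL STATUS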
>
>
> The one thing I have learned in the past couple of
> weeks with clustering is
> never ever just delete something. You always have to
> un-cluster it first
> (queues, channels), and then delete.
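>
> In MQSC terms that amounts to something like this
> (Q1 is just a placeholder; TO.QM1 is the channel
> name from Ruzi's note):
>
> * Take the object out of the cluster first, then
> * delete it:
> ALTER QLOCAL(Q1) CLUSTER(' ')
> DELETE QLOCAL(Q1)
> * Same idea for a cluster-receiver channel:
> ALTER CHANNEL(TO.QM1) CHLTYPE(CLUSRCVR) CLUSTER(' ')
> DELETE CHANNEL(TO.QM1)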
>
>
>
> -----Original Message-----
> From: Ruzi R [mailto:[EMAIL PROTECTED]
> Sent: Monday, August 25, 2003 8:39 AM
> To: [EMAIL PROTECTED]
> Subject: Urgent help: Unexpected behavior in the
> Cluster!!!
>
>
> Hi all,
>
> QM1 and QM2 (on W2K, MQ 5.3) were the full
> repositories for our cluster. I have just replaced
> them (the full repositories) with QMA and QMB --
> removing
> QM1 and QM2 from the cluster. I have done this
> following the instructions given in the MQ Cluster
> manual. I have done the clean-up of the obsolete
> channels etc. (e.g. TO.QM1 and TO.QM2). In other
> words, no cluster member has any CLUSSDR channel
> pointing to the old repositories. The old
> repositories have their cluster channels deleted.
> I have done REFRESH CLUSTER REPOS(YES/NO) as per the
> instructions in the manual.
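>
> The hand-over itself was roughly a pair of ALTER
> QMGR commands (CLUS1 standing in for our cluster
> name, which I haven't given here):
>
> * On QMA and QMB, to make them full repositories:
> ALTER QMGR REPOS(CLUS1)
> * On QM1 and QM2, to demote them:
> ALTER QMGR REPOS(' ')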
>
> As we don't need QM1 and QM2 any longer, I did
> endmqm
> on QM1 and QM2. Stopped the listener on both. Did
> some
> testing; everything seemed OK in the cluster.
>
> Then I had to move on to something else...
>
> MQ2T (on OS/390, MQ 5.3) is a member of the cluster.
> I
> had modified the procs for MQ2T to pick up the new
> connames (for QMA and QMB). As part of testing, my
> colleague stopped MQ2T and re-started. Apparently, on
> the re-start, the event logs on the old full
> repositories got flooded with messages indicating
> MQ2T
> was trying to connect to them (at least that is what
> he says... I haven't seen the errors myself. He
> cleaned
> up the log and saved the data somewhere but I don't
> know where yet). Why would this happen, as there is
> nothing in the start-up proc pointing to QM1 or
> QM2?
> I have to find an answer to this as we have another
> cluster whose full repositories will have to be
> replaced today. He said that, as soon as he stopped
> the listener on these qmgrs, the errors stopped. It
> just so happens that we don't need QM1 and QM2 any
> longer, so I will delete them. My question is:
> suppose that QM1 and QM2 were to stay as independent
> queue managers (not as cluster members), with their
> listeners running obviously, how would I prevent
> their
> logs from getting filled with messages from cluster
> queue managers trying to connect?
>
> My thinking is: because the new full repositories
> will keep the info related to the old repositories
> for 90 days, any member qmgr re-starting during that
> time will cause this kind of error???
>
> I would very much appreciate your input on the above
> unexpected behavior of the cluster.
>
> Thanks,
>
> Ruzi
>

Instructions for managing your mailing list subscription are provided in
the Listserv General Users Guide available at http://www.lsoft.com
Archive: http://vm.akh-wien.ac.at/MQSeries.archive




