Urgent help: Unexpected behavior in the Cluster!!!

2003-08-25 Thread Ruzi R
Hi all,

QM1 and QM2 (on W2K, MQ 5.3) were the full
repositories for our cluster. I have just replaced
them (as full repositories) with QMA and QMB,
removing QM1 and QM2 from the cluster. I did this
following the instructions given in the MQ Cluster
manual, including the clean-up of the obsolete
channels (e.g. TO.QM1 and TO.QM2). In other
words, no cluster member has any CLUSSDR channel
pointing to the old repositories, and the old
repositories have their cluster channels deleted. I
have done REFRESH CLUSTER REPOS(YES/NO) as per the
instructions in the manual.

As we don't need QM1 and QM2 any longer, I did endmqm
on QM1 and QM2, and stopped the listener on both. Did some
testing; everything seemed OK in the cluster.

Then I had to move on to something else...

MQ2T (on OS/390, MQ 5.3) is a member of the cluster. I
had modified the procs for MQ2T to pick up the new
connames (for QMA and QMB). As part of testing, my
colleague stopped MQ2T and restarted it. Apparently, on
the restart, the event logs on the old full
repositories got flooded with messages indicating MQ2T
was trying to connect to them (at least that is what
he says; I haven't seen the errors myself. He cleaned
up the log and saved the data somewhere, but I don't
know where yet). Why would this happen, as there is
nothing in the start-up proc pointing to QM1 or QM2?
I have to find an answer to this, as we have another
cluster whose full repositories will have to be
replaced today. He said that, as soon as he stopped
the listener on these qmgrs, the errors stopped. It
just so happens that we don't need QM1 and QM2 any
longer, so I will delete them. My question is:
suppose that QM1 and QM2 were to stay as independent
queue managers (not as cluster members), with their
listeners running obviously; how would I prevent their
logs from getting filled with messages from cluster
queue managers trying to connect?

My thinking is that, because the new full repositories
will keep the info related to the old repositories for
90 days, any member qmgr restarting during that time
will cause this kind of error. Is that right?

I would very much appreciate your input on the above
unexpected behavior of the cluster.

Thanks,

Ruzi

Instructions for managing your mailing list subscription are provided in
the Listserv General Users Guide available at http://www.lsoft.com
Archive: http://vm.akh-wien.ac.at/MQSeries.archive


Re: Query on choosing a particular MQ channel in a multi-hopping setup

2003-08-25 Thread Potkay, Peter M (PLC, IT)
When the message gets to B, normal MQ name resolution takes place on that
message.

If the destination Queue Manager in the header is QueueManagerC, MQ on Queue
manager B will look for...

1.) A transmit queue called QueueManagerC; if found, MQ will put the message
there, and whatever channel is associated with that transmit queue will get it.

2.) A Queue Manager Alias with the name QueueManagerC. The definition of the
Queue Manager Alias will point to a particular transmit queue, where the
associated channel will grab it.

3.) If it can't find either of these 2 named queues, then MQ will look to
see if queue manager B has a default transmit queue parameter, and if found,
will use that one. This is not a good option to use on intermediate (HUB)
queue managers, as it limits the default action to a single direction. Not
good.
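In MQSC terms, the three options might look like this sketch on queue
manager B (names other than QueueManagerC are illustrative, and options 1
and 2 are alternatives, since a local queue and a remote queue definition
cannot both be named QueueManagerC):

    * 1) A transmission queue named after the destination queue manager
    DEFINE QLOCAL(QueueManagerC) USAGE(XMITQ)
    * 2) A queue manager alias resolving QueueManagerC to a chosen xmitq
    DEFINE QLOCAL(QMC.XMITQ) USAGE(XMITQ)
    DEFINE QREMOTE(QueueManagerC) RQMNAME(QueueManagerC) XMITQ(QMC.XMITQ)
    * 3) The default transmission queue (see the caveat above)
    ALTER QMGR DEFXMITQ(QMC.XMITQ)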

If this does not satisfy your needs, you can...

1.) Put an intermediate application on Queue Manager B that intercepts these
messages and places them to any Remote queue def that you want, which can be
mapped to any transmit queue/channel that you like.

2.) On queue manager A, define its default transmit queue to be
QueueManagerB.XMITQ. Any time you put a message for a queue manager name
that queue manager A does not know, it will be forced over to B. On B, define
all the queue manager aliases for the new queue manager names you will be
using, and define them so that they do 2 things. First, they will
switch the funky queue manager name you used to the correct queue manager
(QueueManagerC). Second, they can put the message to any transmit queue you
want. The end result is that all the messages will end up on queue manager C
with the correct queue manager name in the header, and they will get there
via the channels you want. Make sure you understand the implications of
using the default xmit queue property on queue manager A.

So, this means you can have QM aliases on B with the names of QMCFAST,
QMCSLOW, QMCKK. Each of these QM aliases will change the QM name to
QueueManagerC, but each will refer to a different XMIT queue from B to C.
Now, the app on A never uses QueueManagerC on its MQOPEN/MQPUT1. It instead
uses one of the 3 funky names. Since QM A has no clue what these are, it
ships them over to the default XMIT queue over to B, where the QM aliases do
their thang.
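A sketch of that setup (the xmitq names are illustrative; the three sender
channels from B to C, one per xmitq, are omitted for brevity):

    * On queue manager A: route unknown queue manager names over to B
    ALTER QMGR DEFXMITQ(QueueManagerB.XMITQ)
    * On queue manager B: one alias per funky name, each on its own xmitq
    DEFINE QREMOTE(QMCFAST) RQMNAME(QueueManagerC) XMITQ(QMC.FAST.XMITQ)
    DEFINE QREMOTE(QMCSLOW) RQMNAME(QueueManagerC) XMITQ(QMC.SLOW.XMITQ)
    DEFINE QREMOTE(QMCKK)   RQMNAME(QueueManagerC) XMITQ(QMC.KK.XMITQ)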






-Original Message-
From: K K HK [mailto:[EMAIL PROTECTED]
Sent: Sunday, August 24, 2003 11:37 PM
To: [EMAIL PROTECTED]
Subject: Query on choosing a particular MQ channel in a multi-hopping
setup


Dear MQers,

We are looking for your suggestions/input on how to define the MQ channel in
multi-hopping circumstances.

The case is that we have two pairs of MQ channels defined between three
queue managers (say A-B-C). We know that we can make use of a remote queue +
xmit queue to force the use of a particular channel between A and B. However,
which channel will 'B', as an intermediate queue manager, choose for
transmitting messages from B to C among the two available channels?

Is there any way for us to force a particular type of message (or MQ queue) to
use a particular MQ channel between B and C?

TIA
K K



Re: Urgent help: Unexpected behavior in the Cluster!!!

2003-08-25 Thread Potkay, Peter M (PLC, IT)
I would bet that the mainframe queue manager still has some automatic
CLUSSDRs left over from when they were pointing to QM1 and QM2. Just
stopping / deleting / modifying the manually defined ones does not do
anything to the automatic ones.

They will even continue to retry even now, hoping that the listener on
QM1/QM2 comes back.

I know of no graceful way of eliminating automatic CLUSSDRs. :-(  My
biggest pet peeve with clustering. The only way I know how to do it is to
completely blow away the repositories in the cluster (partial and full).
Maybe someone knows a better way???


The one thing I have learned in the past couple of weeks with clustering is
never ever just delete something. You always have to uncluster it first
(queues, channels), and then delete.





Re: Query on choosing a particular MQ channel in a multi-hopping setup

2003-08-25 Thread Wyatt, T. Rob
KK,

When the message arrives on B, it will resolve the XMitQ based on QMgr
resolution.  If you wish to have two sets of channels from A to C, both
passing through B, you will have to supply consistent names all along the
route.

For example, on A if you have two QRemotes C and C-FAST and you want
messages to move down the C-FAST channel, you would have to supply two
XMitQ's on B called C and C-FAST.  You would also need a QMgr alias on C
called C-FAST.

Now if you set up a QRemote on A with C-FAST as the RQMNAME, the message
will use the channel associated with the C-FAST QMgr all the way to C.  On
the other hand, if your QRemote uses C as the RQMNAME, the message will go
down the normal path.

As long as you supply QRemotes and XMitQ's that will properly resolve the
two routes, you can do this through as many hops as necessary and keep the
paths separate.
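A minimal sketch of the two-route setup (C.FAST is written in place of
C-FAST, since a hyphen is not a valid character in an MQ object name;
TARGET.Q is an illustrative destination queue, and the channels serving
each xmitq are omitted):

    * On A: two QRemotes for the same destination queue
    DEFINE QREMOTE(TARGET.Q)      RNAME(TARGET.Q) RQMNAME(C)      XMITQ(C)
    DEFINE QREMOTE(TARGET.Q.FAST) RNAME(TARGET.Q) RQMNAME(C.FAST) XMITQ(C.FAST)
    * On A and again on B: matching xmitqs, each drained by its own channel
    DEFINE QLOCAL(C)      USAGE(XMITQ)
    DEFINE QLOCAL(C.FAST) USAGE(XMITQ)
    * On C: a queue manager alias so the C.FAST route resolves to C itself
    DEFINE QREMOTE(C.FAST) RQMNAME(C)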

HTH -- T.Rob



Re: Query on choosing a particular MQ channel in a multi-hopping setup

2003-08-25 Thread Ruzi R
You can use queue manager aliases to achieve what you
want. Each alias can point to a different xmit queue.
For instance, on the B server, if the queue manager
QMB has 2 qmgr aliases called QMC1 and QMC2, you will
have:

  xmit queue QMC  --- B.TO.C channel
  QMC1 can point to xmit queue QMC1 --- B.TO.C1 channel
  QMC2 can point to xmit queue QMC2 --- B.TO.C2 channel

On QMA (on server A), the putting applications can
specify QMC, QMC1, or QMC2 as the remote queue
manager name.
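In MQSC, those definitions on QMB might look like the sketch below (QMC is
taken to be the real name of the queue manager on C, 'hostC(1414)' is a
placeholder conname, and the alias and its xmit queue are given distinct
names, since remote and local queue definitions share one namespace):

    DEFINE QLOCAL(QMC1.XMITQ) USAGE(XMITQ)
    DEFINE CHANNEL(B.TO.C1) CHLTYPE(SDR) TRPTYPE(TCP) +
           CONNAME('hostC(1414)') XMITQ(QMC1.XMITQ)
    DEFINE QREMOTE(QMC1) RQMNAME(QMC) XMITQ(QMC1.XMITQ)

(and likewise QMC2.XMITQ / B.TO.C2 / QMC2).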

You can get more info on the subject from the
Intercommunication manual, page 27.

Hope this helps.

Ruzi


Re: Urgent help: Unexpected behavior in the Cluster!!!

2003-08-25 Thread Ruzi R
The mainframe procs do NOT have any reference
whatsoever to the old repositories. And I did
uncluster things first, before I removed/deleted
them...

Any other ideas, please???

Thanks,

Ruzi


Re: Urgent help: Unexpected behavior in the Cluster!!!

2003-08-25 Thread Potkay, Peter M (PLC, IT)
Ruzi, If you issue DISPLAY CLUSQMGR from the mainframe QM, what does it
show?
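For reference, something like the following sketch (the +MQ2T command prefix
is only an assumed example; DEFTYPE in the output distinguishes manually
defined CLUSSDRs from auto-defined ones, and CONNAME would show whether any
still point at the old repositories):

    +MQ2T DISPLAY CLUSQMGR(*) CONNAME DEFTYPE STATUS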





Re: Urgent help: Unexpected behavior in the Cluster!!!

2003-08-25 Thread Mike Davidson

I found in my testing that using the RESET command got rid of the automatically-defined channels shown by the DIS CLUSCHL(*) command. I tried it after reading the Queue Manager Clusters manual and, believe it or not, it worked. Here's an excerpt (p. 68):

You might use the RESET CLUSTER command if, for example, a queue manager has been deleted but still has cluster-receiver channels defined to the cluster. Instead of waiting for WebSphere MQ to remove these definitions (which it does automatically) you can issue the RESET CLUSTER command to tidy up sooner. All other queue managers in the cluster are then informed that the queue manager is no longer available.
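The command itself, issued on a full repository, looks something like this
sketch (CLUSTER1 stands in for the real cluster name):

    RESET CLUSTER(CLUSTER1) QMNAME(QM1) ACTION(FORCEREMOVE) QUEUES(YES)
    RESET CLUSTER(CLUSTER1) QMNAME(QM2) ACTION(FORCEREMOVE) QUEUES(YES)

QUEUES(YES) also removes the cluster queues owned by the removed queue
manager from the repositories; QUEUES(NO) is the default.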

I hope this helps.

Mike Davidson
TSYS MQ Tech Support
[EMAIL PROTECTED]

Re: Urgent help: Unexpected behavior in the Cluster!!!

2003-08-25 Thread Mike Davidson

Peter - Correct, I issued the RESET command on the qmgrs that were being removed from the cluster. I've never had a scenario where the qmgrs (in this case QM1 and QM2) were already deleted from the box altogether.

Ruzi - As strange as it sounds, try recreating QM1 and QM2 and just run the ALTER QMGR REPOS() command. Then run the RESET CLUSTER() QMNAME(QM1/QM2) ACTION(FORCEREMOVE) QUEUES(YES/NO) command. Then run ALTER QMGR REPOS(' '). Then see what DIS CLUSQMGR on the mainframe returns. And, finally, delete QM1 and QM2 if no longer needed.

(Essentially, you'd be introducing QM1 and QM2 back into the cluster as full repositories, running the RESET command to try and rid yourself of the auto-defined chl's, and backing the qmgr's out of the cluster.)
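As a sketch, that sequence in MQSC on the recreated QM1 (likewise QM2;
CLUSTER1 again stands in for the real cluster name):

    ALTER QMGR REPOS(CLUSTER1)
    RESET CLUSTER(CLUSTER1) QMNAME(QM1) ACTION(FORCEREMOVE) QUEUES(YES)
    ALTER QMGR REPOS(' ')
    * then check what DIS CLUSQMGR(*) returns on the mainframe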

Mike Davidson
TSYS MQ Tech Support
[EMAIL PROTECTED]
706.644.9501







-Original Message-
From: Potkay, Peter M (PLC, IT) [EMAIL PROTECTED]
Sent by: MQSeries List [EMAIL PROTECTED]
Sent: 08/25/2003 10:05 AM
To: [EMAIL PROTECTED]
Subject: Re: Urgent help: Unexpected behavior in the Cluster!!!
So you would issue this command from the QM where other QMs are still trying to send stuff to. In Ruzi's case, the old QM1 and QM2 queue managers, right? This would tell all other QMs in the cluster to delete any automatic CLUSSDRs to QM1 or QM2?

What do you do in the case where perhaps QM1 and QM2 were already deleted? Is there any official way in this case? Perhaps issuing REFRESH CLUSTER REPOS(YES) on the QM that had the bad automatic CLUSSDRs would flush them out? In Ruzi's case, the mainframe QM? (Sorry Ruzi, from your original post it was not clear if you issued this command on the mainframe or not.)




Re: Urgent help: Unexpected behavior in the Cluster!!!

2003-08-25 Thread Ruzi R
Thanks, Mike! This did it! I just did not think that
it would be necessary to issue this command, as I was
removing the full repositories gracefully.

Ruzi

Re: Urgent help: Unexpected behavior in the Cluster!!!

2003-08-25 Thread Mike Davidson

Glad it worked!

Mike Davidson
TSYS MQ Tech Support
[EMAIL PROTECTED]

Re: Urgent help: Unexpected behavior in the Cluster!!!

2003-08-25 Thread Ruzi R
Thanks to everyone who has responded. The problem has
been resolved.

Just to answer Peter's question: as I said in my
original email, I had done REFRESH CLUSTER
REPOS(YES/NO) -- including on the new repositories.
However, the mainframe did not like the REPOS parm, so
I issued it there without the REPOS. The qmgrs on other
platforms still showed the CLUSSDRs to the old
repositories when DISPLAY CLUSQMGR(*) was issued, even
after the REFRESH CLUSTER REPOS(YES).

Anyway, the problem has been resolved by issuing the
RESET CLUSTER QMNAME(QM1/QM2) ACTION(FORCEREMOVE)
QUEUES(YES) command on the new full repositories. I
did not think that this command would be necessary, as
I removed the full repositories and the cluster
channels gracefully (i.e. as per the steps indicated
in the Cluster manual).

Thanks,

Ruzi




Re: Urgent help: Unexpected behavior in the Cluster!!!

2003-08-25 Thread Potkay, Peter M (PLC, IT)
Mike/Ruzi, do you know if you have to issue the command on both your full
repositories? Or will one be enough?



Re: message CSQX558E - too many channels?

2003-08-25 Thread EARmerc Roberts
Thanx, Rebecca.

I understand what you're saying. Since several channels on my system are
consistently active, I have seen dynamic channels on the systems that
connect to mine and I assume that I may have dynamic channels created for
some or all of that activity.  But I can still say unequivocally that given
the numbers we produce in my environment, I can't see us reaching 200
channels in use. What I plan to do is monitor the traffic more closely. I
will also be checking the logs more closely for other messages which might
provide more information.
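For the monitoring, a couple of MQSC commands should help (a sketch; on
OS/390 these go through the console with the queue manager's command prefix):

    * One entry per channel instance, including client connections
    DISPLAY CHSTATUS(*)
    * Channel initiator summary, which reports current channel counts
    DISPLAY CHINIT

Counting the CHSTATUS entries at peak should show how close we actually get
to the 200 limit.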

Ernest Roberts
IT - Sr Sys Prog
MBUSA, LLC

- Forwarded by Ernest Roberts/171/DCAG/DCX on 08/25/2003 01:58 PM -

From: Bullock, Rebecca (CSC) [EMAIL PROTECTED]
Sent by: MQSeries List [EMAIL PROTECTED]
Sent: 08/24/2003 01:19 PM
To: [EMAIL PROTECTED]
Subject: Re: message CSQX558E - too many channels?


Ernest, you may have less than 50 channels defined, but are some of them
client channels with multiple clients using them? That will bump your
active
channel count up.

Rebecca Bullock
Computer Sciences Corporation
MFCoE

Princeton, NJ  08541
email: [EMAIL PROTECTED] / [EMAIL PROTECTED]


-Original Message-
From: EARmerc Roberts [mailto:[EMAIL PROTECTED]
Sent: Friday, August 22, 2003 10:19 AM
To: [EMAIL PROTECTED]
Subject: Re: message CSQX558E - too many channels?

I have the MC. The problem is that I have a limit of 200 channels that I
know was not even remotely approached during our peak operating periods,
yet the message shows up. The 'For example' part is useless and should have
been omitted because it is not accurate or truly specific about whatever
the actual problem was that caused the message to be generated. The useful
part is the procedure to stop and start the channel. The rest is just
guessing and has no place in a manual that is supposed to help with problem
diagnosis and resolution. I have fewer than 50 channels defined.

Ernest Roberts
IT - Sr Sys Prog
MBUSA, LLC
Three Mercedes Drive
Montvale, NJ 07345
201-573-2619
201-573-4383 fax
866-308-3782 pager

- Forwarded by Ernest Roberts/171/DCAG/DCX on 08/22/2003 10:10 AM -

From: Kearns, Emile E [EMAIL PROTECTED]
Sent by: MQSeries List [EMAIL PROTECTED]
Date: 08/22/2003 02:02 AM (please respond to MQSeries List)
To: [EMAIL PROTECTED]
Subject: Re: message CSQX558E - too many channels?
CSQX558E csect-name Remote channel channel-name not available

Explanation: The channel channel-name at the remote queue manager is
currently stopped or is otherwise unavailable. For example, there might be
too many channels current to be able to start it.

Severity: 8

System Action: The channel does not start.

System Programmer Response: This might be a temporary situation, and the
channel will retry. If not, check the status of the channel at the remote
queue manager. If it is stopped, issue a START CHANNEL command to restart
it. If there are too many channels current, either wait for some of the
operating channels to terminate, or stop some channels manually, before
restarting the channel.
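
For reference, the stop/start sequence that response describes, as MQSC --
TO.QMGR1 is a placeholder channel name, not one from this thread:

    * Hedged sketch: bounce a channel reported unavailable by CSQX558E
    STOP CHANNEL(TO.QMGR1)
    START CHANNEL(TO.QMGR1)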

-Original Message-
From: EARmerc Roberts [mailto:[EMAIL PROTECTED]
Sent: 21 August 2003 10:00
To: [EMAIL PROTECTED]
Subject: Re: message CSQX558E - too many channels?


Thanx, Joep,

I looked and found values of 200, which should have been enough considering
our current level of activity. I am thinking that the message doc is
somewhat misleading. I am assuming that dynamic channels are included in
the specification and we are IP-only. I'll do some more research on the
problem.

again, Thanx.

Ernest Roberts
IT - Sr Sys Prog
MBUSA, LLC


__


Robert,

There are several parameters limiting the number of active channels. They
are in CSQ6CHIP. See sample SCSQPROC(CSQ4X4PRM).
Have a look at ACTCHL, CURRCHL, LU62CHL and TCPCHL.

Cheers, Joep
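
For anyone wanting to check those limits against actual usage on z/OS, a
minimal MQSC sketch (commands as in MQ 5.3; the exact output fields vary
by release):

    * Show channel initiator status, including current/active channel counts
    DISPLAY CHINIT
    * List every channel instance that is current right now
    DISPLAY CHSTATUS(*)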


-Original Message-
From: EARmerc Roberts [mailto:[EMAIL PROTECTED]
Sent: Thursday, 21 August 2003 16:18
To: [EMAIL PROTECTED]
Subject: message CSQX558E - too many channels?


Hello MQ persons,

We are running MQ v 5.3 on OS/390 v2.10.

Our qmgr reported message CSQX558E for a channel, and the MC guide said
about this: "there might be too many channels current to be able to start
it."
The channel appeared to have no problem after a STOP and then a START were
issued. What I want to do 

Re: Urgent help: Unexpected behavior in the Cluster!!!

2003-08-25 Thread Mike Davidson

I've done it before on both. My logic was that I wanted both full
repositories to be rid of the rogue auto-defined channels, and I didn't
know for sure whether running the RESET command on one full repository
would cause MQ to let the other full repository know of the
change/deletion.

Mike Davidson
TSYS MQ Tech Support
[EMAIL PROTECTED]







From: Potkay, Peter M (PLC, IT) [EMAIL PROTECTED]
Sent by: MQSeries List [EMAIL PROTECTED]
Date: 08/25/2003 01:23 PM (please respond to MQSeries List)
To: [EMAIL PROTECTED]
Subject: Re: Urgent help: Unexpected behavior in the Cluster!!!
Mike/Ruzi, do you know if you have to issue the command on both your full
repositories? Or will one be enough?


-Original Message-
From: Ruzi R [mailto:[EMAIL PROTECTED]
Sent: Monday, August 25, 2003 11:36 AM
To: [EMAIL PROTECTED]
Subject: Re: Urgent help: Unexpected behavior in the Cluster!!!


Thanks to everyone who has responded. The problem has been resolved.

Just to answer Peter's question: as I said in my original email, I had
done REFRESH CLUSTER REPOS(YES/NO) -- including on the new repositories.
However, the mainframe did not like the REPOS parm, so I issued it there
without REPOS. The qmgrs on the other platforms still showed the CLUSSDRs
to the old repositories when DISPLAY CLUSQMGR(*) was issued, even after
the REFRESH CLUSTER REPOS(YES).
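
For reference, the shape of the refresh, as a minimal sketch -- CLUSTER1
is a placeholder cluster name, and REPOS(YES) is only accepted on a
partial repository (which may be why the mainframe rejected the parm):

    * Assumed: a partial repository on a distributed platform
    REFRESH CLUSTER(CLUSTER1) REPOS(YES)
    * Check what the queue manager now believes about the cluster
    DISPLAY CLUSQMGR(*)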

Anyway, the problem has been resolved by issuing the RESET CLUSTER
QMNAME(QM1/QM2) ACTION(FORCEREMOVE) QUEUES(YES) command on the new full
repositories. I did not think this command would be necessary, as I had
removed the full repositories and the cluster channels gracefully (i.e.,
following the steps in the Cluster manual).
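
Spelled out as MQSC, with the cluster name (which the full syntax
requires) assumed here to be CLUSTER1:

    * Minimal sketch, issued on a full repository of the cluster
    RESET CLUSTER(CLUSTER1) QMNAME(QM1) ACTION(FORCEREMOVE) QUEUES(YES)
    RESET CLUSTER(CLUSTER1) QMNAME(QM2) ACTION(FORCEREMOVE) QUEUES(YES)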

Thanks,

Ruzi



--- Potkay, Peter M (PLC, IT)
[EMAIL PROTECTED] wrote:
 So you would issue this command from the QM that other QMs are still
 trying to send stuff to. In Ruzi's case, the old QM1 and QM2 queue
 managers, right? This would tell all other QMs in the cluster to delete
 any automatic CLUSSNDRs to QM1 or QM2?


 What do you do in the case where perhaps QM1 and QM2 were already
 deleted? Is there an official way in this case? Perhaps issuing REFRESH
 CLUSTER REPOS(YES) on the QM that had the bad automatic CLUSSNDRs would
 flush them out? In Ruzi's case, the mainframe QM? (Sorry Ruzi, from your
 original post it was not clear whether you issued this command on the
 mainframe or not.)




 -Original Message-
 From: Mike Davidson [mailto:[EMAIL PROTECTED]
 Sent: Monday, August 25, 2003 9:48 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Urgent help: Unexpected behavior in the
 Cluster!!!



 I found in my testing that using the RESET command got rid of the
 automatically defined channels from the DIS CLUSQMGR(*) output. I tried
 it after reading the Queue Manager Clusters manual and, believe it or
 not, it worked. Here's an excerpt (p. 68):

 You might use the RESET CLUSTER command if, for example, a queue manager
 has been deleted but still has cluster-receiver channels defined to the
 cluster. Instead of waiting for WebSphere MQ to remove these definitions
 (which it does automatically) you can issue the RESET CLUSTER command to
 tidy up sooner. All other queue managers in the cluster are then
 informed that the queue manager is no longer available.

 I hope this helps.

 Mike Davidson
 TSYS MQ Tech Support
 [EMAIL PROTECTED]




 From: Potkay, Peter M (PLC, IT) [EMAIL PROTECTED]
 Sent by: MQSeries List [EMAIL PROTECTED]
 Date: 08/25/2003 08:55 AM (please respond to MQSeries List)
 To: [EMAIL PROTECTED]
 Subject: Re: Urgent help: Unexpected behavior in the Cluster!!!

 I would bet that the mainframe queue manager still has some automatic
 CLUSSNDRs left over from when they were pointing to QM1 and QM2. Just
 stopping, deleting, or modifying the manually defined ones does not do
 anything to the automatic ones.

 They will even continue to retry now, hoping that the listener on
 QM1/QM2 comes back.

 I know of no graceful way of eliminating automatic CLUSSNDRs. :-(  My
 biggest pet peeve with clustering. The only way I know how to do it is
 to completely blow away the repositories in the cluster (partial and
 full). Maybe someone knows a better way?
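
 You can at least see the automatic ones: they do not show up under
 DISPLAY CHANNEL, but the cluster view lists them (a sketch; attribute
 names as in the MQSC manual):

     DISPLAY CLUSQMGR(*) DEFTYPE CHANNEL CONNAME STATUS
     * DEFTYPE(CLUSSDRA) marks an automatically defined cluster-sender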


 The one thing I have learned in the past couple of weeks with clustering
 is never, ever, to just delete something. You always have to un-cluster
 it first (queues, channels), and then delete.




Hardware Clustering Considerations

2003-08-25 Thread Tom Fox
Does anyone have experience with MQ under Veritas clustering on IBM/AIX
hardware, specifically? Any positive/negative observations? Any pointers
on Veritas versus HACMP for IBM equipment and MQ?

Hope this makes sense. Any help/pointers are appreciated.

Regards,
-tom fox
Wachovia IT

Instructions for managing your mailing list subscription are provided in
the Listserv General Users Guide available at http://www.lsoft.com
Archive: http://vm.akh-wien.ac.at/MQSeries.archive


Re: message CSQX558E - too many channels?

2003-08-25 Thread Bullock, Rebecca (CSC)
Good luck, Ernest. One thing you might look into is using event monitoring.
I believe that an event is generated when a channel starts and stops.
Perhaps this will provide some useful information. -- Rebecca
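
A minimal check, assuming events are enabled and flowing to the default
event queue:

    * Channel start/stop events land here; the depth hints at activity
    DISPLAY QLOCAL(SYSTEM.ADMIN.CHANNEL.EVENT) CURDEPTH
    * Browse the event messages themselves with a browse sample such as
    * amqsbcg (distributed platforms)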

Rebecca Bullock
Computer Sciences Corporation
MFCoE

Princeton, NJ  08541
email: [EMAIL PROTECTED] / [EMAIL PROTECTED]




Re: Hardware Clustering Considerations

2003-08-25 Thread Bullock, Rebecca (CSC)
Tom, not with AIX, no, but we had a Veritas cluster set up recently with
Sun Solaris systems. Veritas now provides an agent for that environment
(for MQ V5.3); this is in addition to the SupportPac Veritas cluster
stuff. Don't know if this helps (and I'm certainly no Unix expert by any
stretch of the imagination, so there's not a whole lot more I can
provide).

Rebecca Bullock
Computer Sciences Corporation
MFCoE

Princeton, NJ  08541
email: [EMAIL PROTECTED] / [EMAIL PROTECTED]





**
This e-mail and any files transmitted with it may contain privileged or
confidential information. It is solely for use by the individual for whom
it is intended, even if addressed incorrectly. If you received this e-mail
in error, please notify the sender; do not disclose, copy, distribute, or
take any action in reliance on the contents of this information; and delete
it from your system. Any other use of this e-mail is prohibited. Thank you
for your compliance.



Re: Urgent help: Unexpected behavior in the Cluster!!!

2003-08-25 Thread Ruzi R
Issuing the command on one full rep will be enough...

Ruzi
--- Potkay, Peter M (PLC, IT)
[EMAIL PROTECTED] wrote:
 Mike/Ruzi, do you know if you have to issue the command on both your
 full repositories? Or will one be enough?



Re: message CSQX558E - too many channels?

2003-08-25 Thread EARmerc Roberts
Thanx for everything, Rebecca.

Ernest Roberts
IT - Sr Sys Prog
MBUSA, LLC



