Re: Client IP connections direct to an OS/390

2003-03-14 Thread Robert Broderick
Ole Den Boy brings up a VERY good point. Adding to that: it is easier and
cheaper to scale a UNIX/NT concentrator than a z/OS box. If the need arises,
you can certainly add additional concentrators to support additional client
connections if your connectivity solution is not z/OS. Plus you don't have to
do battle with the Dark Lord of security and his sister the Queen of MQ
administration on the mainframe!!
 bobbee







Re: Client IP connections direct to an OS/390

2003-03-13 Thread Miller, Dennis
TCP in, SNA out? Now I'm confused. I thought you were replacing TCP client connections 
to the NT gateway/concentrators with TCP client connections directly to OS/390. My 
point about performance is mostly from the standpoint of the OS/390. If you have 
message channels coming in from another qmgr, you get a significant performance gain 
from the batching of messages (especially with high volumes like the ones you mention). 
On the other hand, if you have MQI (client) channels coming in, there is no batching 
whatsoever. Each MQI request comes in as one (or more) individual messages on a 
channel dedicated to that MQI request. Furthermore, the return path for that channel 
is blocked until the MQI request is satisfied. Batchsize and batch interval do not 
pertain to those kinds of channels.
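The difference can be sketched with a back-of-the-envelope flow count (a rough illustration of the batching effect, not a model of MQ internals; the function names are invented, and the 50-message batch size and 6000-message burst are just the figures mentioned in this thread):

```python
# Rough flow-count model: a sender (message) channel confirms once per
# batch, while each MQI client request is its own synchronous
# request/response turnaround on a dedicated channel.

def sender_channel_confirm_flows(messages, batchsize=50):
    # One end-of-batch confirmation per BATCHSZ messages (ceiling division).
    return -(-messages // batchsize)

def client_channel_turnarounds(requests):
    # Every MQI call blocks its channel until the reply comes back.
    return requests

msgs = 6000
print(sender_channel_confirm_flows(msgs))  # 120 batch confirmations
print(client_channel_turnarounds(msgs))    # 6000 individual turnarounds
```

The 50-to-1 ratio in line turnarounds is where the batching gain comes from at high volume.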

As for client channels recycling: either the application is disconnecting or TCP is. 
The qmgr doesn't have much control over it--another reason you may be happier with 
message channels inbound from the concentrator.



Re: Client IP connections direct to an OS/390

2003-03-13 Thread Neil Johnston
If you wish to suppress certain messages (such as channel
started/disconnected, etc.) on z/OS, we document how to do this, using
the z/OS message processing facility (MPF) list, in the 'System Setup Guide' at
the end of the 'Customizing your queue managers' chapter. There is also a
supplied sample (CSQ4MPFL in SCSQPROC) which shows the settings for some
common messages you are recommended to suppress on busy systems (including
CSQX500I and CSQX501I).
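A minimal sketch of what the corresponding MPFLSTxx parmlib entries might look like (the exact member layout is an assumption on my part; the supplied CSQ4MPFL sample in SCSQPROC is the authoritative reference):

```
CSQX500I,SUP(YES)
CSQX501I,SUP(YES)
```

With SUP(YES) the messages are suppressed from the consoles but still reach the system log, so the channel history remains available after the fact.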

Neil Johnston.
WebSphere MQ for z/OS Development & Service.

Instructions for managing your mailing list subscription are provided in
the Listserv General Users Guide available at http://www.lsoft.com
Archive: http://vm.akh-wien.ac.at/MQSeries.archive


Re: Client IP connections direct to an OS/390

2003-03-12 Thread Tim Armstrong
Hi Dave,

If you are referring to messages like the following, then you are seeing an
application connecting to and then disconnecting from your queue manager as
a client.

CSQX500I MQD7 CSQXRESP Channel TEST.NOEXIT started
CSQX501I MQD7 CSQXRESP Channel TEST.NOEXIT is no longer active

Regards
Tim A




Re: Client IP connections direct to an OS/390

2003-03-12 Thread Dave Corbett
Glen,
This sounds good in theory, but client channels don't have options like
disconnect interval, heartbeat interval, etc. on the OS/390.  I don't even
think they have them on other platforms either.  I think these are
controlled via global TCP settings like TCPKEEPALIVE.  Since we are using a
vendor package on the client side, we don't have a lot of control over the
options specified at channel connection time.
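The point about keepalive being a stack-level rather than channel-level control can be illustrated at the socket layer (a generic sketch, not MQ client code):

```python
# TCP keepalive is a per-socket option; unless the application (or the
# MQ client library) sets it explicitly, behaviour falls back to
# stack-wide defaults such as the z/OS TCPKEEPALIVE interval.

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))  # non-zero once enabled
s.close()
```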

Thanks,
David Corbett
Work:  651-205-2714
Pager: 651-610-3842



Re: Client IP connections direct to an OS/390

2003-03-12 Thread Glen Shubert

Dave,

The messages could be from the Disconnect Interval on the channels.  You may want to check those and possibly set them to zero.

Glen Shubert
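On the distributed side, the change Glen suggests would be expressed in MQSC roughly as follows for a message (sender) channel, where the attribute definitely applies (the channel name is hypothetical; DISCINT(0) means the channel never disconnects on idle):

```
ALTER CHANNEL(NT.TO.OS390) CHLTYPE(SDR) DISCINT(0)
```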







Re: Client IP connections direct to an OS/390

2003-03-12 Thread Dave Corbett
Dennis.
Thanks for your thoughts on this.  I'm not sure that batching of messages
really matters, especially since we are TCP in and SNA out.  Our batch size
is set to 50 and the batch interval to 100.  We did some extensive tuning
of the VTAM settings to make sure it worked as well as possible (setting
RU sizes, pacing, and other NCP parameters).

It just seems that skipping a box in between the app server and mainframe
should increase performance by itself.  My real outstanding question is,
why do we get thousands of messages showing channels starting and going
inactive in the CHIN log?  We had an occasion last Sunday where we had
between 6 and 7 thousand messages in the space of an hour.  I don't
understand what it is that drives MQ to be bouncing these connections like
this.  We have also sent this question to IBM for a better understanding.

We'll discuss your thoughts on putting the QMGRs on the app servers, but we
were originally told (in 1998) that we needed the MQ servers to be on their
own boxes.

Thanks,
David Corbett
Work:  651-205-2714
Pager: 651-610-3842



Re: Client IP connections direct to an OS/390

2003-03-11 Thread Miller, Dennis
Dave,
With the workload you describe, I think the 390 would be better optimised and easier 
to manage with the NT concentrators. I mean sender channels can batch dozens of 
messages together, whilst the client channels go one at a time. You also minimize the 
overhead of channel pooling and avoid (at least from the 390 perspective) the 
headaches implicit in one end of the channel not being controlled by the qmgr. In my 
mind, the tougher question is whether to forgo the NT concentrators and put qmgrs 
directly on the WebSphere servers, which in effect are already acting as 
concentrators. I'm skeptical about the advantage gained by having 4 inbound qmgrs 
rather than 11.



Client IP connections direct to an OS/390

2003-03-10 Thread Dave Corbett
Since nobody has replied to this, I am assuming that we are the only ones
making extensive use of client connections direct to an OS/390 platform.
So, in the hopes that someone else besides us has some experience with this
architectural model, I am resending it to entice anyone with thoughts on
this to respond.

Thanks,
David Corbett

IBM MQSeries Certified Solutions Expert
IBM MQSeries Certified Developer
IBM MQSeries Certified Specialist




Client IP connections direct to an OS/390

2003-03-06 Thread Dave Corbett
To anyone with experience,

We originally had three NT MQ servers that would handle MQ connections from
11 WebSphere app servers and then had SNA channels defined to our OS/390
environment.  The 11 app servers are spread across three mainframe LPARs in
a 4x4x3 configuration.  The NT boxes served for the most part as a protocol
converter from IP to SNA.  Since our OS/390 platform's TCP stack has become
more robust, we have been migrating towards direct client channel
connections to the 390.  We have an application that runs on WebSphere
(3.5x) that makes connections to the 390 on an as-needed basis.  If a
request comes in, it searches for the first available connection, and if it
doesn't find any it creates a new one.  We maintain the state of these
connections such that 30-40 per WebSphere app server seems to handle our
needs most of the time.  However, when an influx of requests occurs in a
short span of time (say 3-5 seconds), and the mainframe is slightly slow in
responding, we tend to spin through these connections, creating many in a
short time frame.  We only allow 100 per app server and have 4 app servers
pointed at a specific LPAR on the 390.
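The connect-on-demand scheme described above can be sketched as follows (class and method names are invented for illustration; this is not the vendor package's code). It shows why a burst combined with a slow responder balloons the connection count: every in-flight request pins a connection, so concurrent arrivals force fresh connects up to the cap.

```python
# Sketch of a connect-on-demand pool with a hard cap, mirroring the
# "first available connection, else create a new one" policy.

class ConnectionPool:
    def __init__(self, cap=100):
        self.cap = cap
        self.idle = []     # connections not currently serving a request
        self.total = 0     # connections created so far

    def acquire(self):
        if self.idle:
            return self.idle.pop()    # reuse first available connection
        if self.total >= self.cap:
            raise RuntimeError("connection cap reached")
        self.total += 1               # none free: create a new one
        return object()

    def release(self, conn):
        self.idle.append(conn)

pool = ConnectionPool(cap=100)

# Steady state: each request finishes before the next arrives,
# so a single connection is reused throughout.
for _ in range(40):
    c = pool.acquire()
    pool.release(c)
assert pool.total == 1

# Burst: 60 requests arrive while the responder is slow, so nothing is
# released between acquires and the pool must create 59 new connections.
in_flight = [pool.acquire() for _ in range(60)]
print(pool.total)  # 60 live connections after the burst
```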

We seem to have issues sometimes where connections go inactive, then new
channels are started, more connections go inactive, more are started, and
this goes on for a few minutes, during which several hundred connections
seem to get started and dropped.  It's basically impossible to determine
whether it's only old connections that are being dropped.  Some of the
settings such as HBINT are left at the default of 300, and the TCPKEEPALIVE
settings are also at the default, which I think is set pretty high to reduce
network traffic.

Is anyone else out there trying anything similar to this?  Would I be
better off keeping the NT MQ servers and defining server channels as TCP
between NT and the 390, thereby allowing the NT boxes to function as a
concentrator?  We currently have a problem where the CHIN application of one
of the OS/390 QMGRs crashes during a cycle of intense channel creation.

I'm just looking for someone running a relatively high-volume MQ message
application (approx. 3MM messages per day) who may have run into something
that explains the channel starts and stops a little better (I realize it may
just be the way our application is written, but the application never
specifically shuts down connections unless the application itself shuts
down).

Thanks,
David Corbett
Work:  651-205-2714
Pager: 651-610-3842
