Re: How a MQSeries Hub does its thing with persistent / non-persistent messages

2003-06-08 Thread Stephan C. Moen
In regards to sending a "100 MB lunker" I certainly wouldn't want to be
sending messages of this size on a production channel with a regular volume
of messages to transmit - especially one that handled interactive
applications and/or required predictable response times.  Instead, a second
channel needs to be created to handle messages of this size. Therefore the
question about "a higher priority message will somehow jump over the big
one" is a moot point.  It will never happen because you would never design a
system this way.  To answer your question directly, once a message is being
transmitted, it can't be interrupted by a higher-priority message.

In regards to "if the MCA is busy, priority will not help", I beg to differ.
It may not help in all cases, but it definitely improves the situation from
the spoke's perspective.  The only time where priority does not work is if a
higher-priority message comes in after a message is actively being
transmitted. Once that message is transmitted, the same sequence of events
applies again; highest priority messages get transmitted first in a FIFO
order.  Since the MCA takes messages out of the highest-priority queue down
to the lowest-priority queue (10 priorities in all; 0-9), if the NP messages
are tagged with a higher priority than your persistent messages, the NP
messages get transmitted first.  This is a better solution than having NP and
persistent messages intertwined at the same priority - since within a
priority they get transmitted on a FIFO basis.
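For what it's worth, that priority ordering can be set up with an MQSC definition along these lines - a sketch only, with a hypothetical queue name; applications would tag their NP messages with a higher MQMD Priority when putting them:

```
* Transmission queue that delivers by message priority (0-9) rather
* than strict FIFO; within each priority, delivery remains FIFO
DEFINE QLOCAL('SPOKE.XMITQ') USAGE(XMITQ) MSGDLVSQ(PRIORITY) REPLACE
```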

Though I have never looked into the specific case of NP messages being sent
with persistent messages within a channel batch job, I always assumed
(according to the MQ documentation) that NP messages sent on a FAST channel
fall outside the channel batch mechanism (BATCHINT, BATCHSZ), and don't have
to wait for the end of the batch (and associated MQCMIT) before the
application sees the messages; they are sent immediately without waiting for
completion of the current batch (sent out of syncpoint).  That is why NP
messages aren't counted against batch values (BATCHSZ, BATCHINT), since they
don't use sequence numbers (the sequence number is not advanced when an NP
message is sent) and are immediately visible on the receiver side.  The
side effect for this performance trade-off is that this may cause NP
messages to be processed out-of-order or lost due to transmission failures.
If this is not true, somebody please correct my erroneous assertion.
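For concreteness, the fast-channel behavior described above is controlled by the NPMSPEED channel attribute.  A hedged MQSC sketch (channel, queue, and host names are hypothetical):

```
* Sender channel with fast delivery of non-persistent messages: NP
* messages are sent outside the batch/syncpoint and may be lost or
* reordered on a failure; persistent messages still honor BATCHSZ/BATCHINT
DEFINE CHANNEL('QMHUB.TO.SPOKE') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('spoke.example.com(1414)') XMITQ('SPOKE.XMITQ') +
       NPMSPEED(FAST) BATCHSZ(50) BATCHINT(0) REPLACE
```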

In regards to Question 1 ("is there any data/messages being written to disk
by QMHUB as the messages fly thru"), since the ORIGINAL question referenced
ALIAS queues on QMHUB and since an alias queue is not a real queue (and
therefore has no filename or disk allocated to it), I can only assume that
if an NP message on QMHUB encounters disk I/O, it must be related to PAGING
I/O (not enough memory on this box to handle the workload currently
configured).  Remember, the original question refers to disk I/O on QMHUB -
not the spokes.  If this is the case (someone please prove me wrong), then
whatever you do to the MQSeries configuration, won't make any difference.
The only solution is to add more memory to that box OR configure your memory
(if the OS will allow you to; use the 'vmtune' command on AIX) so that more
memory is allocated to computational memory (e.g., process working set)
versus persistent memory (e.g., files that have a hard disk location).  With
the exception of the XMIT queues on QMHUB,  'qalias' definitions don't have
an associated 'DefaultQBufferSize' attribute have some have stated, and
therefore don't apply as a solution to this issue.
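For local queues (which includes the XMIT queues on QMHUB) the NP buffer can be enlarged in qm.ini.  A sketch only, assuming a distributed (non-zSeries) queue manager; the value shown is hypothetical, in bytes, and takes effect for queues opened after the change:

```
# qm.ini TuningParameters stanza on QMHUB: raise the per-queue buffer
# for non-persistent messages above the 64 KB default so fewer NP
# messages spill to the queue file on disk
TuningParameters:
   DefaultQBufferSize=1048576
```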

For the case referenced earlier in this thread where the local queue file
grew to 1+ GB in size, that is expected since 1 GB of messages were
transmitted to it.  If the amount of shared memory used to hold NP messages
(default is 64 KB) is exceeded, overflow messages are forced to go to disk -
hence why you see the size of the local queue file growing.  If this alias
queue is not associated with a local queue but a remote queue on another
node, it is completely normal to see paging activity occur since there is no
disk file for overflow messages to be directed to, hence the only resource
available for storing messages yet to be
processed is memory.  I'm sure a closer inspection of this situation would
bear this out.

In regards to how to 'minimize' the impact of disk I/O with NP messages,
another point of concern is "how are receiver channels created on
QMHUB".  I bring this up because I believe IBM recommends using the
'runmqlsr' daemon on all distributed platforms to start responder channels
(e.g., receivers).  This daemon is a multi-threaded listener program that
executes as a single process and runs multiple receiver channels as
individual threads.  If a thread (one of the spoke channels) is suspended
waiting for disk I/O to complete AND the daemon process has not yet consumed
its entire CPU timeslice, then the CPU is granted to

Choice of MQ products on 'publish and subscribe' function

2003-06-08 Thread K K



Dear All,
 
One user is asking for a publish-and-subscribe function using 
MQSeries on zSeries.   I have looked at the IBM web site and it appears 
that there are many similar and confusing choices.   There are Event 
Broker (WMQSEB), Integration Message Broker (WMQSIB) and WMQI.   I 
would like to seek your advice/experience on these products.  
 
Besides, I would like to confirm the software requirements for 
running these products on zSeries.  We are on OS/390 2.10 and WMQ 
5.3.
 
TIA
 
KK


Re: MQSeries in DMZ

2003-06-08 Thread Sid . Young
T.Bob,

You have raised a good point in using the MQ server in the DMZ as a
pass-through and having the data processing on an internal server.  Perhaps
a cluster might achieve this: the source queues and processing of the
clear-text data could occur on the internal server, while the encrypted data
resides on the DMZ server.  Then using SSL to harden the channel, a security
exit for authentication, and firewall routing rules would give you a
reasonably secure environment.

I am only suggesting the cluster to give a bit of scope for expansion and
perhaps easier management. When I try it here at work sometime in the next
few months, I'll let everyone know how practical it is.

Sid


-Original Message-
From: Wyatt, T. Rob [mailto:[EMAIL PROTECTED]
Sent: Saturday, 7 June 2003 12:01 AM
To: [EMAIL PROTECTED]
Subject: Re: MQSeries in DMZ


David,

To expand a little further, the ideal situation is that encryption/signing
of messages occurs at the endpoints whereas connections need to be
authenticated point-to-point.  So you are right that some kind of app-to-app
encryption is best here.  SSL only authenticates or encrypts from
point-to-point and only on the wire.  The messages are in plaintext on the
queue making them vulnerable as they hop through the DMZ.

However, firewall IP filtering does not qualify as authentication.  SSL
between the DMZ and the client site hardens that communication path so I
still believe SSL is appropriate here.  Of course, I work for a bank and
take a pretty conservative approach to external connections.
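For reference, point-to-point SSL on an MQ 5.3 channel is enabled with the SSLCIPH attribute.  A minimal sketch - the names and the CipherSpec choice are illustrative, not a recommendation:

```
* Sender channel from the DMZ toward the client site: encrypted and
* authenticated on the wire only - messages are still plaintext at rest
DEFINE CHANNEL('DMZ.TO.CLIENT') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('client.example.com(1414)') XMITQ('CLIENT.XMITQ') +
       SSLCIPH(TRIPLE_DES_SHA_US) REPLACE
```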

As far as storing keys in the DMZ, my objection is that no processing or
manipulation of the messages should occur in the DMZ.  They should just pass
through.  Keys for point-to-point authentication (SSL for example) are
necessary, of course.  But as far as manipulating the messages, the whole
point of the DMZ is that you don't trust it, right?  If you did, you
wouldn't need a DMZ, you'd just put your app server right next to the
external firewall and forget about it.  So if we don't trust the DMZ, the
last thing we want to do is store the keys to the castle in it.  But as I
said, my employer requires me to take a rather conservative view of these
things.  Your mileage may vary.

-- T.Rob


-Original Message-
From: David C. Partridge [mailto:[EMAIL PROTECTED]
Sent: Friday, June 06, 2003 5:39 AM
To: [EMAIL PROTECTED]
Subject: Re: MQSeries in DMZ


Nice summary.   SSL is (probably) not appropriate here for encryption
purposes, and an application to application encryption product such as
Primeur DSMQ E2E (preferred by me anyway, but then I'm biased, as I designed
it), Candle MQSecure, or Tivoli AMBI is more appropriate.

Alternatives to SSL could also be considered such as channel exit based
solutions (strange we do that too!).   As far as the issue of storing keys
on the DMZ machine is concerned, I wouldn't be worried if the keys were
stored in an HSM.

Cheers
Dave

-Original Message-
From: MQSeries List [mailto:[EMAIL PROTECTED] Behalf Of Wyatt,
T. Rob
Sent: 05 June 2003 19:00
To: [EMAIL PROTECTED]
Subject: Re: MQSeries in DMZ


Tim,

Make sure you use MQ 5.3 in the DMZ.  One of the new features is a channel
attribute that binds the channel to a particular local IP address.  Your DMZ
will have two addresses we are concerned with - the IP address your trusted
network sees, and the IP address the world sees.  You will also have two
categories of channel: customer-facing channels and internal-facing
channels.  By specifying LOCLADDR, you can ensure that your
internal-facing channels are not hijacked by external users.

For example, assume your DMZ server has a RCVR or RQSTR channel called
APPSVR.DMZ with no exit.  One of your customers could create a SDR or SVR
channel called APPSVR.DMZ and try to start it. Without a LOCLADDR specified,
the channel would bind to the external-facing IP and the firewall would
allow the connection.  On the other hand, if you set the LOCLADDR attribute
to your internal-facing IP address, the external firewall will disallow the
connection.
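A sketch of the LOCLADDR idea for the requester case - the address and names are hypothetical, and note that for a plain RCVR channel the equivalent effect comes from binding the listener itself to the internal address:

```
* Requester channel pinned to the internal-facing interface; the
* external firewall then blocks any attempt to drive it from outside
DEFINE CHANNEL('APPSVR.DMZ') CHLTYPE(RQSTR) TRPTYPE(TCP) +
       CONNAME('appsvr.internal(1414)') LOCLADDR('10.0.0.5') REPLACE
```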

There are a LOT of other considerations.  For example, if the data is
sensitive and encrypted, you don't want to store the keys on the DMZ.  It
would be better to have the messages signed/encrypted at either end and have
them pass the DMZ unaltered.

Also, IP filtering as provided at the firewall is strong but not 100%
reliable.  It is possible to spoof an IP address and it is possible that an
intrusion attempt could come from a trusted business partner.  If your
listeners are running as mqm, they can potentially be hijacked.  Better to
run your listeners as low-privileged IDs.  Use different listeners and ports
for internal-facing and external-facing connections.  In fact, use different
listeners for each customer if you want to be really safe.
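A sketch of the separate-listener idea - the user ID, queue manager name, and ports are all hypothetical:

```
# Run each listener under a low-privileged ID, one port per facing,
# so firewall rules and OS permissions separate the two populations
su - mqlsnr -c "runmqlsr -m QMDMZ -t tcp -p 1414"   # internal-facing
su - mqlsnr -c "runmqlsr -m QMDMZ -t tcp -p 1415"   # customer-facing
```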

Finally, consider the implications of multiple clients using the same QMgr.
If one client can put messages onto another client's remote queue, 

WMQI - BIP2066E for all execution groups

2003-06-08 Thread tony madden
All of a sudden, WMQI isn't working for me. Every time I deploy, I get
BIP2066E (Configuration Timeout). This even occurs with a broker with empty
Execution Groups and with no Message Sets.
As a last resort, I have re-installed WMQI (v2.1 CSD 4) and defined a
configmgr and a broker. I added the broker to the topology, but when I
deploy the topology I get BIP2066E for the $SYS_mqsi Execution Group. I
assume this is a system Execution Group?
I tried adding an Execution Group called EG1 and deploying, and get a
similar result - three BIP2066E errors, one each for the $SYS_mqsi, default
and EG1 Execution Groups. The Windows Event Viewer shows 3 DataFlowEngines,
so the broker is starting the Execution Groups, but the Execution Groups
aren't responding to commands.
There are no MQ or DB2 errors.
Has anyone else seen anything like this?
Setup: WMQI v2.1 CSD4, W2K, DB2

cheers
Tony
Instructions for managing your mailing list subscription are provided in
the Listserv General Users Guide available at http://www.lsoft.com
Archive: http://vm.akh-wien.ac.at/MQSeries.archive