Hi Raymond

A few more options to toy with:

1) As Dennis suggested, you could design one polling transaction (program)
on the transaction queue: read one message, issue an EXEC CICS START, read
another message, and so on...

However, be careful about the disadvantages of this scenario, i.e. once you
have fired off the START, you don't know the success or failure of that
"started" transaction. So, for example, if the started transaction fails,
rolls back or abends for some unknown reason, your polling transaction
doesn't know about it and will happily delete the message from the
transaction queue. There is no easy way of implementing a business recovery
plan here.

Please also note that in CICS there are always restrictions on how many
instances of a particular transaction can run at any given time; say the
limit is set at 50. If you issue START TRNX requests faster than this
threshold allows, CICS simply rejects the subsequent STARTs, and the worse
part is that the caller doesn't even know the STARTs are being ignored. We
were recently bitten by this and had to modify our polling program to
inquire on this maximum limit and wait internally before firing more.
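
To make this concrete, here is a rough sketch of option 1 in C against the
MQI - treat it as pseudo-code only (the queue name and the "WRKR" TRANSID
are made up, error handling is minimal, the EXEC CICS lines assume the CICS
translator, and under CICS the MQ adapter owns the connection handle):

    #include <cmqc.h>       /* MQI structures and constants           */
    #include <string.h>

    /* Option-1 sketch: one polling transaction destructively gets a  */
    /* request message, STARTs the worker transaction with it, and    */
    /* repeats until the queue is empty.                              */
    void poll_and_start(MQHCONN hConn)
    {
        MQOD   od = {MQOD_DEFAULT};
        MQHOBJ hObj;
        MQLONG cc, rc, dataLen;
        char   buffer[4096];

        strncpy(od.ObjectName, "TRAN.REQUEST.QUEUE", MQ_Q_NAME_LENGTH);
        MQOPEN(hConn, &od, MQOO_INPUT_SHARED | MQOO_FAIL_IF_QUIESCING,
               &hObj, &cc, &rc);

        for (;;)
        {
            MQMD  md  = {MQMD_DEFAULT};
            MQGMO gmo = {MQGMO_DEFAULT};
            gmo.Options = MQGMO_NO_WAIT | MQGMO_SYNCPOINT;

            MQGET(hConn, hObj, &md, &gmo, (MQLONG) sizeof(buffer),
                  buffer, &dataLen, &cc, &rc);
            if (rc == MQRC_NO_MSG_AVAILABLE)   /* 2033: queue drained */
                break;
            if (cc == MQCC_FAILED)
                break;                         /* real error: stop    */

            /* Pacing would go here: inquire on the worker's maximum  */
            /* concurrency and delay if it is already at the limit,   */
            /* otherwise CICS silently ignores the extra STARTs.      */

            EXEC CICS START TRANSID("WRKR")
                            FROM(buffer) LENGTH(dataLen);

            /* Once this commits, the request message is gone even if */
            /* WRKR later abends - the recovery gap described above.  */
            EXEC CICS SYNCPOINT;
        }

        MQCLOSE(hConn, &hObj, MQCO_NONE, &cc, &rc);
    }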


2) Retain your current logic, remove the CLOSE on the transaction queue and
change the trigger type to EVERY. As a best practice, even after making it
EVERY, your program should issue a SYNCPOINT COMMIT after step 6 and should
loop back through steps 1 to 6 until it gets a 2033. This will safeguard
against the other problem I mentioned earlier, i.e. CICS discarding the
STARTs after crossing the maximum threshold (because in this case CKTI is
the one issuing the CICS STARTs, and the problem could arise even there).
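
Roughly, the loop I have in mind looks like this - again a C/MQI sketch
only, since the real code is your existing NATURAL program (names are
placeholders, the run_request hook stands in for your steps 4-5, and the
queue itself would be altered to TRIGTYPE(EVERY) outside the program):

    #include <cmqc.h>
    #include <string.h>

    /* Hypothetical hook: whatever performs steps 4-5 today (the      */
    /* NATURAL program plus the TRANSACTION REPLY put).               */
    extern void run_request(const char *msg, MQLONG len);

    /* Option-2 sketch: keep the transaction queue open, loop over    */
    /* steps 1-6, and SYNCPOINT after each request until MQGET        */
    /* returns 2033 (MQRC_NO_MSG_AVAILABLE).                          */
    void process_until_empty(MQHCONN hConn)
    {
        MQOD   od = {MQOD_DEFAULT};
        MQHOBJ hTranQ;
        MQLONG cc, rc, dataLen;
        char   request[4096];

        strncpy(od.ObjectName, "TRAN.REQUEST.QUEUE", MQ_Q_NAME_LENGTH);
        MQOPEN(hConn, &od, MQOO_INPUT_SHARED | MQOO_FAIL_IF_QUIESCING,
               &hTranQ, &cc, &rc);              /* step 1: open       */

        for (;;)
        {
            MQMD  md  = {MQMD_DEFAULT};
            MQGMO gmo = {MQGMO_DEFAULT};
            gmo.Options = MQGMO_NO_WAIT | MQGMO_SYNCPOINT;

            MQGET(hConn, hTranQ, &md, &gmo, (MQLONG) sizeof(request),
                  request, &dataLen, &cc, &rc); /* step 2: read       */
            if (rc == MQRC_NO_MSG_AVAILABLE)    /* 2033: we are done  */
                break;
            if (cc == MQCC_FAILED)
                break;

            run_request(request, dataLen);      /* steps 4-5          */

            EXEC CICS SYNCPOINT;                /* commit per request */
        }

        MQCLOSE(hConn, &hTranQ, MQCO_NONE, &cc, &rc);  /* step 6      */
    }

Note there is no CLOSE inside the loop - the queue stays open until the
2033, which is the point of the change.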

My preference is to go for option-2.

Have fun

Rao Adiraju
WebSphere MQ Specialist
The National Bank of NZ Ltd.
Wellington - New Zealand
Tel:  +64-4-494 4299
Fax: +64-4-802 8509
Mbl: +64-211-216-116




-----Original Message-----
From: Kinzler, Raymond C [mailto:[EMAIL PROTECTED]
Sent: 13 February 2004 11:27 AM
To: [EMAIL PROTECTED]
Subject: Re: A novice question

Thanks, Dennis.  I am really at a loss but things are getting clearer little
by little.  I need to play with this more tomorrow.

-----Original Message-----
From: Miller, Dennis [mailto:[EMAIL PROTECTED]
Sent: Thursday, February 12, 2004 5:15 PM
To: [EMAIL PROTECTED]
Subject: Re: A novice question


There is nothing single threaded about it.  The EXEC CICS START is
asynchronous and does NOT hold up VMS-MQ while the 500 data messages are
being consumed. Unless you implement some kind of governor to slow down
VMS-MQ, CICS transactions will start as fast as request messages arrive.


Regards,
Dennis


-----Original Message-----
From: Kinzler, Raymond C [mailto:[EMAIL PROTECTED]
Sent: Thursday, February 12, 2004 4:10 AM
To: [EMAIL PROTECTED]
Subject: Re: A novice question


Dennis,

You really pegged how our process works; you are exactly right.  I sort of,
kind of follow what you are saying we should do because, to me, it seems
like the process will be more or less single-threaded.  We originally had
VMS-MQ loop until it encountered a 2033 condition, but we ran into
bottlenecks at high volume.  We thought the answer was to close the
transaction (trigger) queue ASAP, but what you are saying sort of makes
sense.

I am surely no MQ expert (heck, I think a novice has more understanding than
I do!) and, truthfully, I also have little experience with CICS.  I have
recently been tossed into this position and, obviously, my first attempt at
addressing a problem by closing the transaction queue quickly may have
caused a bigger problem than the one we had originally.

Thanks for the information.  This discussion list is certainly one of the
most active and most helpful ones that I have come across.  I have learned
more about MQ in the couple of days that I have read and posted here than I
have in all the time I spent on it before.  Thanks again.




-----Original Message-----
From: Miller, Dennis [mailto:[EMAIL PROTECTED]
Sent: Wednesday, February 11, 2004 8:09 PM
To: [EMAIL PROTECTED]
Subject: Re: A novice question


My hunch is that your problem is caused by a delayed commit of the MQGET off
the transaction queue.  It's exaggerated because you are leaving the
transaction request uncommitted for a relatively long period of time.  In
other words, all the while you are processing 500 data messages, it appears
to the rest of the world as if the transaction message (that you already
read) is still on the triggered queue.  So the close after reading message 1
triggers another transaction to process message 1, because message 1 is
still on the queue.

It works like this:

Transaction 1 Open
Transaction 1 Read Message 1
Transaction 1 Close  ----> this will raise another trigger because message 1 is still queued (the read is uncommitted)
Transaction 2 Open
Transaction 2 Read RC=2033
Transaction 2 Close  ----> this will raise another trigger because message 1 is still queued
Transaction 3 Open
Transaction 3 Read RC=2033
Transaction 3 Close  ----> this will raise another trigger because message 1 is still queued
Transaction 4 Open
Transaction 4 Read RC=2033
Transaction 1 Commit
Transaction 4 Close  ----> this will not raise another trigger because message 1 is finally gone

The problem compounds when you have more transaction messages arriving.

You said:=>
"We found that if we close the transaction (trigger) queue as soon as we
extract a message, another instance of our extract program (VMS-MQ) will be
kicked off and start another process even while the first one is running.
Good.  Good."

Sometimes that technique works, but it's what I call the "poor man's"
version of a Service Initiation Layer (SIL).  It essentially drives a
trigger-first queue to re-trigger once for each (transaction) message.
The downside is you end up with multiple programs competing for the
transaction queue and can flood CICS with "berserk" transaction initiation
requests.  I describe them as "berserk" because they are initiated without
respect to the specific request message that they will eventually process.
As you've discovered, there's even a tendency to fire instances of the
triggered application with no messages to process.

I expect your design looks like this:
Trigger fires VMS-MQ
   VMS-MQ opens transaction request queue
   VMS-MQ reads transaction request message
   VMS-MQ closes transaction request queue   <== uncontrolled concurrency occurs here
   VMS-MQ kicks off NATURAL program
        NATURAL program exhausts data queue, possibly sending data replies
   VMS-MQ sends transaction reply
   VMS-MQ quits


You can solve the problem by putting a commit before you close the
transaction queue. But before you run off and do that, be aware of the risk
you accept by putting the transaction message and its data messages in
separate units-of-work. It means if something goes south, you may not have a
transaction message to take care of data messages that get rolled back.
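
In MQI terms, the change is just a commit between the get and the close.  A
minimal C sketch, assuming the handles and buffer are set up elsewhere
(under CICS the commit would be an EXEC CICS SYNCPOINT rather than MQCMIT):

    #include <cmqc.h>

    /* Minimal sketch of "commit before close".  The MQGET of the     */
    /* transaction message is hardened before MQCLOSE, so the trigger */
    /* monitor no longer sees it as queued - but it now sits in a     */
    /* separate unit of work from the data messages.                  */
    void get_commit_close(MQHCONN hConn, MQHOBJ hTranQ)
    {
        MQMD   md  = {MQMD_DEFAULT};
        MQGMO  gmo = {MQGMO_DEFAULT};
        MQLONG cc, rc, dataLen;
        char   buffer[4096];

        gmo.Options = MQGMO_NO_WAIT | MQGMO_SYNCPOINT;
        MQGET(hConn, hTranQ, &md, &gmo, (MQLONG) sizeof(buffer),
              buffer, &dataLen, &cc, &rc);

        MQCMIT(hConn, &cc, &rc);     /* <== the commit, BEFORE...     */

        MQCLOSE(hConn, &hTranQ, MQCO_NONE, &cc, &rc);  /* ...the close */

        /* Only now go on to exhaust the data queue; if that work is  */
        /* rolled back, the transaction message is already gone.      */
    }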

A better design looks like this:
Trigger fires VMS-MQ
   VMS-MQ opens transaction request queue
   LOOP until RC=2033
        VMS-MQ browses the transaction queue
        VMS-MQ issues EXEC CICS START for the CICS transaction, passing the msgid   <== controllable concurrency occurs here
             CICS transaction reads the transaction message by msgid
             NATURAL program exhausts data queue, possibly sending data replies
             CICS transaction sends the transaction reply
             CICS program commits (transaction message and data messages)
   END LOOP
   VMS-MQ closes queue
   VMS-MQ quits
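
A sketch of the two halves in C/MQI, purely to illustrate the shape (queue
handles, the "WRKR" TRANSID, and the way the msgid is passed are all
assumptions; the browse handle would be opened with MQOO_BROWSE and the
worker's handle with MQOO_INPUT_SHARED):

    #include <cmqc.h>
    #include <string.h>

    /* Poller half: browse the request queue and START one worker per */
    /* message, passing only the 24-byte MsgId.  Nothing is removed   */
    /* from the queue here, so a failed START loses nothing.          */
    void browse_and_start(MQHCONN hConn, MQHOBJ hBrowseQ)
    {
        MQLONG cc, rc, dataLen;
        char   buffer[4096];

        for (;;)
        {
            MQMD  md  = {MQMD_DEFAULT};
            MQGMO gmo = {MQGMO_DEFAULT};
            gmo.Options = MQGMO_BROWSE_NEXT | MQGMO_NO_WAIT |
                          MQGMO_ACCEPT_TRUNCATED_MSG;  /* only need MsgId */

            MQGET(hConn, hBrowseQ, &md, &gmo, (MQLONG) sizeof(buffer),
                  buffer, &dataLen, &cc, &rc);
            if (rc == MQRC_NO_MSG_AVAILABLE)   /* 2033 ends the loop  */
                break;
            if (cc == MQCC_FAILED)
                break;

            /* Concurrency is controllable here: count outstanding    */
            /* STARTs and pace them as needed.                        */
            EXEC CICS START TRANSID("WRKR")
                            FROM(md.MsgId) LENGTH(24);
        }
    }

    /* Worker half: read that specific message destructively, under   */
    /* syncpoint, so the request and its data messages commit (or     */
    /* roll back) together.                                           */
    void worker_get_by_msgid(MQHCONN hConn, MQHOBJ hInputQ,
                             MQBYTE24 msgId)
    {
        MQMD   md  = {MQMD_DEFAULT};
        MQGMO  gmo = {MQGMO_DEFAULT};
        MQLONG cc, rc, dataLen;
        char   request[4096];

        memcpy(md.MsgId, msgId, sizeof(md.MsgId)); /* select this msg */
        gmo.Options = MQGMO_NO_WAIT | MQGMO_SYNCPOINT;

        MQGET(hConn, hInputQ, &md, &gmo, (MQLONG) sizeof(request),
              request, &dataLen, &cc, &rc);

        /* ...exhaust the data queue, send data/transaction replies,  */
        /* then a single commit covers everything:                    */
        EXEC CICS SYNCPOINT;
    }

Because the browse removes nothing, the request message stays recoverable
until the worker's destructive get and its data-queue work commit together.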

Some advantages of the preferred design are:
1. You don't need multiple programs contending for the request queue.
2. The CICS transaction that processes the data messages can be initiated in
   context of the request message.  For example, it could carry the userid
   of the message originator.
3. VMS-MQ can control the concurrency level.
4. Commit points can be managed independently of the concurrency mechanism.










-----Original Message-----
From: Kinzler, Raymond C [mailto:[EMAIL PROTECTED]
Sent: Wednesday, February 11, 2004 8:15 AM
To: [EMAIL PROTECTED]
Subject: Re: A novice question


Rick,

Call me Ray.  :-)

The VMS-MQ program is written in NATURAL and it is the one receiving the
2033 errors when it attempts to read the trigger queue.

I have no idea about using expiry.  I assume that is a parameter passed when
the java program originally PUT the message.  Or is it a setting on the
QLOCAL queues?

BUT...you may have something with CICS being the problem.  We have run into
that situation in the past.  Correct me if I am wrong, but if I run a
process under CICS and it is a hog, CICS may hang onto it until it is
finished, right?  Even though I am reading records from an MQ queue and
writing them to an ADABAS file.  I do not know CICS very well, but I do know
there are no EXEC CICS commands involved after VMS-MQ is invoked.

Arrrghhhhh!!!



-----Original Message-----
From: Rick Tsujimoto [mailto:[EMAIL PROTECTED]
Sent: Wednesday, February 11, 2004 10:32 AM
To: [EMAIL PROTECTED]
Subject: Re: A novice question


Raymond,

We run a similar type of application, e.g. an order system accessed by
customers on the web, which in turn sends the request to our mainframe via
MQ and processes the order on CICS.  We use IDMS for our DBMS.  If the CICS
application receives a large order, it can easily monopolize CICS because
its processing does not involve any EXEC CICS commands that allow CICS to
dispatch other tasks.  In addition, heavy DB processing can cause other CICS
tasks to hang until the processing is complete.  In effect, this sort of
CICS application tends to behave like a batch program when it has a lot of
orders to process.

But I'm confused by your statement that "we are getting 2033 errors every
time we try to read other transaction (trigger) queues".  Is it the COBOL
program that's getting the 2033, or the VMS-MQ program, or the NATURAL
program?  Also, are you using expiry for the messages you put to your
"transaction queue"?




                      "Kinzler,
                      Raymond C"               To:
[EMAIL PROTECTED]
                      <RaymondCKinzler         cc:
                      @EATON.COM>              Subject: A novice
question
                      Sent by:
                      MQSeries List
                      <[EMAIL PROTECTED]
                      en.AC.AT>


                      02/11/2004 09:26
                      AM
                      Please respond
                      to MQSeries List





Hello,

I am NOT the MQ administrator for our company, but I cannot seem to get a
good answer to my question, so I thought I would sign up for this discussion
group and hopefully get some answers.

First, here is how I understand we use MQ Series in our shop:

We re-wrote a fairly comprehensive on-line system that was written in
NATURAL/ADABAS on the mainframe and put a web front end on it.  All of the
business logic remains on the back end written in NATURAL/ADABAS.
These subsystems include things like Order Entry, Returns, etc.  Our
distributors access it over the web (doh!) and are using it quite heavily
now.  The screens are quite simple and there are never more than about 10-15
rows on a screen that need to be passed back-and-forth from the web to the
mainframe. We wrote it to mimic an on-line system, obviously.

The web creates data records for each 'row' on the screen.  It PUTs them on
what we call the DATA QUEUE which is NOT triggered.  Once all of the records
are on it, the web will PUT another record on what we call a TRANSACTION
QUEUE which IS a triggered queue.

This is where I get all weirded out.

Somehow, this information is sitting on their respective queues on a Unix
box.  This information somehow gets over to the mainframe and I am not
exactly sure of everything that happens to do this.  But when a record
arrives on the corresponding TRANSACTION queue on the mainframe, it triggers
a CICS transaction that kicks off a COBOL program to determine the queue
that has the record and then kicks off another CICS transaction to start an
instance of NATURAL and run a program (called
VMS-MQ) that will read the TRANSACTION record.  Within the TRANSACTION
record, the web includes the NATURAL program it wants to use to process the
DATA records.  VMS-MQ will do the following:

1.) Open the transaction queue

2.) Read the transaction queue

3.) Close the transaction queue (so another instance of VMS-MQ can be
initiated just in case another transaction record arrives on the same
transaction queue before the processing of the current transaction is
complete)

4.) Kick off the passed NATURAL program which will, in turn, exhaust the
data record from the data queue and, if needed, PUT data records to the DATA
REPLY queue (which passes information back to the web application)

5.) Control returns to VMS-MQ which will then PUT a record to the
TRANSACTION REPLY queue

6.) Then all open queues are closed.
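
If it helps, the flow above would look roughly like this in C against the
MQI (the real program is NATURAL; the queue names are placeholders and
error handling is left out):

    #include <cmqc.h>
    #include <string.h>

    /* Rough C/MQI rendering of steps 1-6 above.                      */
    void vms_mq_flow(MQHCONN hConn)
    {
        MQOD   od  = {MQOD_DEFAULT};
        MQMD   md  = {MQMD_DEFAULT};
        MQGMO  gmo = {MQGMO_DEFAULT};
        MQHOBJ hTranQ;
        MQLONG cc, rc, dataLen;
        char   request[4096];

        /* 1) open the transaction queue                              */
        strncpy(od.ObjectName, "TRAN.REQUEST.QUEUE", MQ_Q_NAME_LENGTH);
        MQOPEN(hConn, &od, MQOO_INPUT_SHARED | MQOO_FAIL_IF_QUIESCING,
               &hTranQ, &cc, &rc);

        /* 2) read the transaction (trigger) message                  */
        gmo.Options = MQGMO_NO_WAIT | MQGMO_SYNCPOINT;
        MQGET(hConn, hTranQ, &md, &gmo, (MQLONG) sizeof(request),
              request, &dataLen, &cc, &rc);

        /* 3) close it straight away so another VMS-MQ instance can   */
        /*    be triggered while this one is still working            */
        MQCLOSE(hConn, &hTranQ, MQCO_NONE, &cc, &rc);

        /* 4) run the NATURAL program named in the request; it        */
        /*    exhausts the data queue and PUTs any DATA REPLY         */
        /*    messages (omitted here)                                 */

        /* 5) put the TRANSACTION REPLY (MQPUT1 = open/put/close)     */
        {
            MQOD  replyOd = {MQOD_DEFAULT};
            MQMD  replyMd = {MQMD_DEFAULT};
            MQPMO pmo     = {MQPMO_DEFAULT};
            char  reply[] = "OK";

            strncpy(replyOd.ObjectName, "TRAN.REPLY.QUEUE",
                    MQ_Q_NAME_LENGTH);
            MQPUT1(hConn, &replyOd, &replyMd, &pmo,
                   (MQLONG) sizeof(reply), reply, &cc, &rc);
        }

        /* 6) all open queues are closed (the transaction queue       */
        /*    already was, back at step 3)                            */
    }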


Somehow, again, the information on the REPLY queues gets sent back to the
Unix box and the web application recognizes the TRANSACTION REPLY message
and exhausts the DATA REPLY messages.  (Again, we write all the data
messages and then the transaction message to ensure we stay in
sync.)

The web starts a timer as soon as it writes the TRANSACTION REQUEST message.
It waits up to 30 seconds for the corresponding TRANSACTION REPLY to come
back.  If it does not appear within 30 seconds, the web will abend and
display a screen that says it has timed out.  However, that does not mean
the business logic hasn't been performed or won't be performed; it just
means the mainframe didn't respond in a timely manner.

-----

I hope I explained that well enough.

Everything worked hunky-dory.  We had a problem for a while where many
messages were arriving on the same set of queues and we were processing them
in a single-thread manner.  We found that if we close the transaction
(trigger) queue as soon as we extract a message, another instance of our
extract program (VMS-MQ) will be kicked off and start another process even
while the first one is running.  Good.  Good.

Everything is working exactly like we want it to work.  However, somebody
recently came up with a new application they want to include on the web
front-end.  They want to be able to parse an Excel file that contains
roughly 8,000-10,000 rows on a normal basis using this method.
They pass 500 rows at a time which corresponds to 500 data request messages
and one transaction request (trigger) message.  They pass this through to
the mainframe and all the mainframe does is read the records and post them
to an ADABAS file--no auditing occurs during the initial load of the file.
When the mainframe is done, it passes ONE data reply message back and the
web passes the next 500 data request messages with another transaction
(trigger)
message.

Here is the problem:  When this process starts, it takes over.  It has its
own set of queues that are shared by no other application.  But when it
runs, it gets to the point where things seem like they are getting, well,
clogged.  Eventually, nothing else runs.  But the strange thing is that we
are getting 2033 errors every time we try to read other transaction
(trigger) queues.  It is almost like MQ said it sent a message to the
mainframe, but the message died on its way to the mainframe, and by the time
the mainframe reads the transaction (trigger) queue, there is nothing there.

Does this sound normal?  Did I explain it enough to make a guess?

Here is what I know about the trigger queue:

1.)  It is trigger first

2.)  Trigger depth is 1

3.)  Index type is none

4.)  Message delivery is FIFO


Again, there is probably more you need to know but all I can see are the
QLOCAL queues on the mainframe.

Any opinions welcome!


Thanks,

Raymond C. Kinzler, Jr.
ADABAS DBA
Eaton Electrical
Moon Township, PA 15108
Tel: 412-893-4463  (Adnet 227-4463)
Fax: 412-893-2156
Cell: 412-716-3368
[EMAIL PROTECTED]
www.EatonElectrical.com

This communication is confidential and may contain privileged material.
If you are not the intended recipient you must not use, disclose, copy or retain it.
If you have received it in error please immediately notify me by return email
and delete the emails.
Thank you.

Instructions for managing your mailing list subscription are provided in
the Listserv General Users Guide available at http://www.lsoft.com
Archive: http://vm.akh-wien.ac.at/MQSeries.archive
