On a Unix system, when a queue manager starts it acquires shared memory segments and semaphores, which it uses during normal operation. If you then end the queue manager with a plain "endmqm QMGRNAME", all of these resources are cleaned up and no MQ artifacts (semaphores and shared memory segments) are left in memory. If, however, you issue endmqm with the -i parameter (as I did during our test) or the -p parameter, these shared memory and semaphore resources are NOT cleaned up: the queue manager ends, but the resources it was using in memory remain. This happens whether or not my MDBs are connected, which is why it really has nothing to do with MDBs specifically.
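As a sketch of the difference from the command line (the queue manager name QM1 is illustrative, and the exact ipcs output format varies by Unix flavor):

```shell
# Controlled shutdown: wait for connected applications to disconnect,
# then release the queue manager's shared memory and semaphores.
endmqm QM1

# Immediate (-i) or preemptive (-p) shutdown: the queue manager ends
# without waiting, and its IPC resources can be left behind.
endmqm -i QM1

# After an immediate shutdown, leftover segments and semaphores owned
# by the mqm user may still show up here:
ipcs -a | grep mqm
```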
The problem with this is that when you restart the queue manager, it attempts to acquire new shared memory and semaphore resources, as it normally does. However, if OLD resources are still resident in memory (because they weren't cleaned up during the last MQ shutdown), the starting instance of the queue manager may interpret them as belonging to processes that are still running. This is why we got the error messages we saw. In most cases the queue manager will start OK even if it finds old resources in memory, but it can encounter the error we saw.

The -i switch on the endmqm command means shut down immediately: do not wait for connected programs (like MDBs) to end when they want to. Without the -i switch, we run the risk of our QM shutdown waiting forever for the MDBs to end. This is why I always use the -i switch when ending a QM.

At 5.3, there is the undocumented amqiclen command that cleans up these resources. We are testing this command at the end of our shutdown scripts and at the beginning of our startup scripts. It cleans up old memory resources if they are present after a QM has come down, eliminating the problem we saw on subsequent startups.

Back to MDBs: I still think we will research the polling interval setting to help MDBs realize that the QM is gone, as that can only help prevent problems.

-----Original Message-----
From: Potkay, Peter M (PLC, IT) [mailto:[EMAIL PROTECTED]
Sent: Wednesday, January 07, 2004 1:22 PM
To: [EMAIL PROTECTED]
Subject: Re: Message Driven Beans and FAIL_IF_QUIESCING

I don't know. I am asking the customer to help me test this (or maybe someone out there knows). Hopefully it will not write to the log every time it simply checks to see if the connection is still valid (meaning the QM is up).

-----Original Message-----
From: Wyatt, T. Rob [mailto:[EMAIL PROTECTED]
Sent: Wednesday, January 07, 2004 1:15 PM
To: [EMAIL PROTECTED]
Subject: Re: Message Driven Beans and FAIL_IF_QUIESCING

Peter,

Do unsatisfied GETs really get logged? I thought the log was driven only by message movement and rcdmqimg.

-- T.Rob

-----Original Message-----
From: Potkay, Peter M (PLC, IT) [mailto:[EMAIL PROTECTED]
Sent: Wednesday, January 07, 2004 12:03 PM
To: [EMAIL PROTECTED]
Subject: Re: Message Driven Beans and FAIL_IF_QUIESCING

WL MDBs are pure JMS implementations of MDBs and do not have IBM-specific capabilities, as WAS MDBs do. If a JMS object has an IBM-specific property (like FAIL_IF_QUIESCING), the WL MDB will simply ignore it. A WL MDB can check every X seconds whether its cached connection to the QM is still valid, and if it is not, it can end, throwing an exception to be handled by the container. The restart of that MDB can then be handled in whatever manner is suitable. Meanwhile, the QM should be able to end cleanly, with no MDB connections that would prevent a clean start-up.

If I understand it correctly, jms-polling-interval-seconds is what we need to set. This is an optional element that can be declared in the weblogic-ejb-jar.xml deployment descriptor. Typically it is not set, so my thought is that if we do set it, the container will reach out to the QM every interval and, if the Connection is lost, kill the connection and kill the MDB. The issue, if this does work as I am hoping and kills the connection, is that the container will then continually poll the QM every interval, consuming resources and writing to the log. If this occurs over an extended period of time, the log file has the potential to grow rather large. Not being a JMS expert, I am not 100% sure of this solution and would like to see it tested, but it seems to make sense, no?
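The shutdown/startup sequence described earlier (endmqm -i followed by amqiclen) might look like the script below. amqiclen is undocumented, so the -x -m flags and the /opt/mqm/bin path shown here are assumptions to verify against your own 5.3 service level; QM1 is an illustrative queue manager name:

```shell
#!/bin/sh
QMGR=QM1   # illustrative queue manager name

# --- shutdown script ---
endmqm -i $QMGR

# Clean up any shared memory/semaphores the queue manager left behind.
# amqiclen is undocumented; the flags and path here are assumptions -
# verify against your service level before relying on them.
/opt/mqm/bin/amqiclen -x -m $QMGR

# --- startup script ---
# Run the same cleanup first, in case the last shutdown was unclean,
# then start the queue manager.
/opt/mqm/bin/amqiclen -x -m $QMGR
strmqm $QMGR
```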
-----Original Message-----
From: Christopher Frank [mailto:[EMAIL PROTECTED]
Sent: Monday, January 05, 2004 2:44 PM
To: [EMAIL PROTECTED]
Subject: Re: Message Driven Beans and FAIL_IF_QUIESCING

Hi Peter,

>>> Do MDBs not allow this? Will I have to ensure that all MDBs are
>>> ended before I bring down the QM, or if the QM ends before the MDBs
>>> come down, will I have to kill them before trying to bring up the QM?

Hmmm. I don't know whether there is anything in the MDB spec that precludes this working. There is no explicit support for it, though, just as there is no explicit support for FAIL_IF_QUIESCING in the JMS spec; it's something unique to the WMQ implementation. If I had to guess, I would say it's something the MDB container would have to "honor" - perhaps WL does not? Is there a way to define "custom properties" associated with the MDB, as there is in WAS? Or perhaps WL is using an older version of the WMQ JMS libraries that does not support the FAIL_IF_QUIESCING property? You should be certain that the WMQ V5.3 JMS libraries are being used and are at least at the CSD03 level, because I know there were some fixes to the FAIL_IF_QUIESCING code in CSD03. You could run a trace and see whether the FAIL_IF_QUIESCING property is being set when the MDB starts.

I'm afraid that's about all I can tell you, as I know nothing about WL's MDB implementation. Sorry not to be of more help.

Regards,

Christopher Frank
Certified I/T Specialist - WebSphere Software
IBM Certified Solutions Expert - WebSphere MQ & MQ Integrator
--------------------------------------
Phone: 612-397-5532 (t/l 653-5532)
mobile: 612-669-3008
e-mail: [EMAIL PROTECTED]

Instructions for managing your mailing list subscription are provided in the Listserv General Users Guide available at http://www.lsoft.com
Archive: http://vm.akh-wien.ac.at/MQSeries.archive

This communication, including attachments, is for the exclusive use of addressee and may contain proprietary, confidential or privileged information.
If you are not the intended recipient, any use, copying, disclosure, dissemination or distribution is strictly prohibited. If you are not the intended recipient, please notify the sender immediately by return email and delete this communication and destroy all copies.
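For reference, the jms-polling-interval-seconds element discussed in this thread is declared per bean in the weblogic-ejb-jar.xml deployment descriptor. A sketch, with an illustrative bean name and a 10-second interval (element placement follows the WebLogic 8.1 descriptor; check the DTD for your WL version):

```xml
<weblogic-ejb-jar>
  <weblogic-enterprise-bean>
    <!-- Must match the ejb-name in ejb-jar.xml; name is illustrative. -->
    <ejb-name>MyQueueMDB</ejb-name>
    <message-driven-descriptor>
      <!-- How often the container checks whether the MDB's JMS
           connection is still alive, and retries the connection
           after it has been lost. -->
      <jms-polling-interval-seconds>10</jms-polling-interval-seconds>
    </message-driven-descriptor>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>
```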