Possible Bug in JBoss 4.0.0 and JBoss 4.0.1

Tested with:
JBoss 4.0.0 and JBoss 4.0.1
Red Hat ES 3.0 Update 3
JDK 1.4.2_06
Please be patient as you read through all of this; I've tried to provide as much information as possible so that we can work towards a solution quickly.

We have a system distributed over 3 separate clusters. During various processes, we require messages to be sent from one cluster to another, and we are using HAJNDI to send and receive the JMS messages. However, any remote-cluster HAJNDI lookup fails. No matter what provider URL we enter when creating the InitialContext for the lookup, it always does the lookup on the local cluster! We get a NameNotFoundException on the sending node, and if we hot-deploy the MDB that was waiting on the destination cluster, the previously sent message is immediately received on the sending node. There is no activity on the destination cluster. This configuration worked perfectly in JBoss 3.2.5.

If we add our own clusters on top of a shared DefaultPartition, the messages are of course sent and received successfully. However, we need completely separate clusters (DefaultPartitions and our own partitions), because we have MDBs that we only want deployed on certain clusters that we have created, and we don't want to share the same singleton DestinationManager that runs on the DefaultPartition across all of our nodes. As a result, we've set up 3 different DefaultPartitions that each use a different UDP multicast IP, configured in cluster-service.xml.

Is this working as intended? Is HAJNDI designed to work ONLY within the same cluster? (It was working in earlier versions of JBoss.) If you would like more detail, please read through the various config, code, and log file excerpts below. If you need more clarification or more info, please ask.

We set up a test environment with 2 different DefaultPartitions, Server1 and Server2, distinguished from each other by their multicast IPs. Neither node is multihomed. (If the DefaultPartitions are the same, this example works.)
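To make the setup concrete, here is a sketch of the relevant cluster-service.xml fragment as we understand it in JBoss 4.0.x (most of the JGroups protocol stack is omitted; only the mcast_addr value differs between our three clusters, e.g. 228.1.2.3, 228.1.2.4 and 228.1.2.5):

```xml
<!-- cluster-service.xml sketch: each cluster keeps the stock DefaultPartition
     name but uses its own multicast group, so the clusters never see each
     other's JGroups traffic. Protocols below UDP are omitted for brevity. -->
<mbean code="org.jboss.ha.framework.server.ClusterPartition"
       name="jboss:service=DefaultPartition">
  <attribute name="PartitionName">DefaultPartition</attribute>
  <attribute name="PartitionConfig">
    <Config>
      <!-- the only per-cluster difference is this multicast address -->
      <UDP mcast_addr="228.1.2.3" mcast_port="45566"
           ip_ttl="64" ip_mcast="true"/>
    </Config>
  </attribute>
</mbean>
```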
The only thing we change is the multicast IP for the DefaultPartition on Server2. When the two separate clusters are running, each has its own DestinationManager service running, as expected.

The sender is an MBean running on Server1, with a method that accepts a String parameter. This parameter is the provider URL (192.168.129.49:1100 in this example, where 192.168.129.49 is Server2's IP) used to create the InitialContext for the lookup of the destination Queue on Server2. The destination is an MDB bound to the JNDI name "queue/TestingMDB".

When we invoke the method in our MBean (Server1), it tries to connect to the local HAJNDI port, not Server2's HAJNDI port. We end up seeing the following error on Server1:

anonymous wrote : 
  | ERROR [testing.QueueTester] NamingException in QueueTester.sendMessage(): queue/TestingMDB
  | 

Nothing is logged on Server2 when this fails. If we change the provider URL to 192.168.129.49:1099, everything works as planned (except that no High Availability is involved on Server2's cluster).

From cluster-service.xml on the sender node:

anonymous wrote : 
  | ...
  | <UDP mcast_addr="228.1.2.3" mcast_port="45566"

From cluster-service.xml on the destination node:

anonymous wrote : 
  | ...
  | <UDP mcast_addr="228.1.2.4" mcast_port="45566"

MBean test code that sends the JMS message (providerUrl has been verified at runtime to contain the correct destination IP and correct destination port of the remote destination, i.e. 192.168.129.49:1100):

anonymous wrote : 
  | public void connect(String providerUrl) throws NamingException, JMSException
  | {
  |     Properties props = new Properties();
  |     props.put("java.naming.factory.initial", "org.jnp.interfaces.NamingContextFactory");
  |     props.put("java.naming.factory.url.pkgs", "org.jnp.interfaces");
  |     props.put(Context.PROVIDER_URL, providerUrl);
  |     Context context = new InitialContext(props);
  | 
  |     // Lookup the managed connection factory for a queue
  |     QueueConnectionFactory factory =
  |         (QueueConnectionFactory) context.lookup("UIL2XAConnectionFactory");
  | 
  |     // Create a connection to the JMS provider
  |     queueConnection = factory.createQueueConnection();
  | 
  |     // Lookup the destination you want to send to
  |     queue = (Queue) context.lookup("queue/TestingMDB");
  | }

Test MDB on the destination node, which has been verified to be bound to the JNDI name "queue/TestingMDB":

anonymous wrote : 
  | public void onMessage(Message _incMessage)
  | {
  |     log = Logger.getLogger(this.getClass());
  |     try
  |     {
  |         TextMessage message = (TextMessage) _incMessage;
  |         log.error("JMS TEXT MESSAGE RECEIVED!: " + message.getText());
  |     }
  |     catch (Exception e)
  |     {
  |         log.error("Exception caught in TestingMDBBean.onMessage(): " + e.getMessage());
  |     }
  | }

Both JBoss instances have had run.conf modified to include the following JAVA_OPTS:

anonymous wrote : 
  | JAVA_OPTS="-server -Xms128m -Xmx384m -Djava.awt.headless=true -Djboss.bind.address=[**IPofServer**]"
  | JAVA_OPTS="-Xdebug -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n $JAVA_OPTS"

Thank you for reading all of this, and I hope we can work this out, whatever the problem may be.
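For anyone trying to reproduce this, here is a minimal, self-contained sketch of how we assemble the JNDI environment for the two cases. The helper name buildEnv is ours, purely for illustration; 1100 and 1099 are the stock HAJNDI and plain-JNDI ports, and the IP is Server2's address from our test setup:

```java
import java.util.Properties;
import javax.naming.Context;

public class JndiEnvSketch {

    // Hypothetical helper: builds the environment we pass to new InitialContext(props).
    // haJndi = true  -> HAJNDI port 1100 (the lookup that fails across clusters for us)
    // haJndi = false -> plain JNDI port 1099 (works, but no HA on Server2's cluster)
    static Properties buildEnv(String host, boolean haJndi) {
        Properties props = new Properties();
        props.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        props.put(Context.URL_PKG_PREFIXES, "org.jnp.interfaces");
        props.put(Context.PROVIDER_URL, host + ":" + (haJndi ? 1100 : 1099));
        return props;
    }

    public static void main(String[] args) {
        // Server2's address in our test environment.
        System.out.println(buildEnv("192.168.129.49", true).getProperty(Context.PROVIDER_URL));
        // prints 192.168.129.49:1100
        System.out.println(buildEnv("192.168.129.49", false).getProperty(Context.PROVIDER_URL));
        // prints 192.168.129.49:1099
    }
}
```

Passing the first environment to new InitialContext(props) reproduces the failing lookup; the second reproduces the working one.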
View the original post : http://www.jboss.org/index.html?module=bb&op=viewtopic&p=3863191#3863191

Reply to the post : http://www.jboss.org/index.html?module=bb&op=posting&mode=reply&p=3863191

_______________________________________________
JBoss-user mailing list
JBoss-user@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/jboss-user