[jira] Resolved: (AMQ-1191) JDBC based Master/Slave not supported for TransactSQL based databases (SQL Server and Sybase)

2009-07-29 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/activemq/browse/AMQ-1191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-1191.
-

Resolution: Fixed

Resolved by adding the patch to a SQL Server-specific database locker implementation. 
DefaultLocker remains unchanged. A locker implementation is now resolved using 
the same adapter loader mechanism; currently only SQL Server uses an 
override. Overrides are provided via driver-specific property files in 
META-INF/services/org/apache/activemq/store/jdbc/lock/
rev #798602
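
For example, an override file might look like this (the file name and the class name below are illustrative assumptions, not taken from the commit):

```
# META-INF/services/org/apache/activemq/store/jdbc/lock/microsoft_sql_server
# (file name derived from the driver/database name -- an assumption)
class=org.apache.activemq.store.jdbc.adapter.TransactDatabaseLocker
```

The adapter loader looks up the file matching the database in use and instantiates the named locker; databases without an override keep the default locker.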

 JDBC based Master/Slave not supported for TransactSQL based databases (SQL 
 Server and Sybase)
 -

 Key: AMQ-1191
 URL: https://issues.apache.org/activemq/browse/AMQ-1191
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Reporter: James Strachan
Assignee: Gary Tully
 Fix For: 5.3.0

 Attachments: patchfile


 The main issue is figuring out the exclusive lock SQL syntax. I think the 
 following is valid...
 SELECT * FROM TABLE WITH XLOCK
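
For reference, SQL Server's documented table-hint form wraps the hint in parentheses; a sketch against ActiveMQ's default lock table (table name assumed here for illustration):

```sql
-- Transact-SQL exclusive-lock table hint (hint goes in parentheses);
-- ACTIVEMQ_LOCK is an assumed lock table name
SELECT * FROM ACTIVEMQ_LOCK WITH (XLOCK)
```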

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: AMQ-1191 for 5.3 broke master/slave locking for Oracle

2009-07-29 Thread Gary Tully
reworked the resolution of AMQ-1191 to split out the locker implementation
for ms sqlserver. The current snapshot should now work with Oracle without
modification.

2009/7/24 Gary Tully gary.tu...@gmail.com

 yea, exactly like that, where <bean id="mydblocker"> is an instance of your
 OracleDatabaseLocker.
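
Spelled out with the markup restored, the wiring would look roughly like this (the OracleDatabaseLocker class here is the user's own custom class, and the package name is assumed for illustration):

```xml
<!-- hypothetical custom locker bean; class name is illustrative -->
<bean id="mydblocker" class="com.example.OracleDatabaseLocker"/>

<persistenceAdapter>
  <jdbcPersistenceAdapter dataSource="#mydatasource"
                          databaseLocker="#mydblocker"/>
</persistenceAdapter>
```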

 2009/7/24 bwtaylor bryan_w_tay...@yahoo.com


 We have been using Oracle without issue with activemq 5.1, so previous
 versions worked with Oracle. We can try creating our own
 OracleDatabaseLocker if needed.

 I found setDatabaseLocker() on JDBCPersistenceAdapter, so in the custom
 spring dialect, can I say something like

  <persistenceAdapter>
    <jdbcPersistenceAdapter dataSource="#mydatasource"
                            databaseLocker="#mydblocker"/>
  </persistenceAdapter>

 and then define mydblocker via regular spring config?


 Gary Tully wrote:
 
  thanks for the heads up. It looks like it is time to have more than one
  database locker implementation in the box.
  A database locker implementation can be injected into the persistence
  adapter via config if that helps in the short term. Can you validate
 that
  Oracle works fine without the getMetaData call?
 
  I have reopened AMQ-1191, thanks.
 
 

 --
 View this message in context:
 http://www.nabble.com/AMQ-1191-for-5.3-broke-master-slave-locking-for-Oracle-tp24648059p24649622.html
 Sent from the ActiveMQ - Dev mailing list archive at Nabble.com.




 --
 http://blog.garytully.com

 Open Source Integration
 http://fusesource.com




-- 
http://blog.garytully.com

Open Source Integration
http://fusesource.com


[jira] Updated: (AMQ-2324) Forwarded message cannot be distributed to the original broker

2009-07-29 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/activemq/browse/AMQ-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-2324:


Fix Version/s: (was: 5.3.0)
   5.4.0

This behavior won't be changed for 5.3 as we need to finalize this release 
asap. Suppression of forwarded messages back to their origin is there to 
prevent looping in this case, but it could instead be implemented with loop 
detection and TTL, possibly with periodic backoff, and also using some of the 
logic you describe. 
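
The loop-detection-plus-TTL idea can be sketched generically (a hedged illustration, not ActiveMQ's bridge code; all names here are made up):

```java
import java.util.List;

// Hedged sketch, not ActiveMQ code: forward a message to a candidate broker
// only if the hop limit (a network TTL) has not been reached and the
// candidate does not already appear in the message's broker path.
public class LoopCheck {
    static boolean shouldForward(List<String> brokerPath, String candidate, int networkTtl) {
        if (brokerPath.size() >= networkTtl) {
            return false; // TTL exhausted: stop propagating
        }
        return !brokerPath.contains(candidate); // avoid forwarding back into a loop
    }

    public static void main(String[] args) {
        // message already travelled broker2 -> broker1; broker2 is suppressed
        System.out.println(shouldForward(List.of("broker2", "broker1"), "broker2", 4)); // false
        // a fresh broker within the hop limit is allowed
        System.out.println(shouldForward(List.of("broker2"), "broker3", 4));            // true
    }
}
```

Replaying to the origin when it has the only active consumer would then be an extra condition layered on top of this check, rather than the blanket suppression used today.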

 Forwarded message cannot be distributed to the original broker
 --

 Key: AMQ-2324
 URL: https://issues.apache.org/activemq/browse/AMQ-2324
 Project: ActiveMQ
  Issue Type: Improvement
Affects Versions: 5.2.0
Reporter: ying
 Fix For: 5.4.0


 I have a simple case which can cause a dispatch problem:
 1. set up a network of broker1 and broker2, bridged by multicast discovery
 2. have a producer send 5 messages to queueA on broker2
 3. have a consumer consume from queueA on broker1 (make it slow, so it only 
 consumes 1 message), but make sure all 5 messages from broker2 are forwarded 
 to broker1
 4. stop the consumer on broker1, then restart it to consume from queueA on 
 broker2
 5. the 4 messages originally published to broker2, forwarded to broker1, and 
 not yet consumed will be stuck on broker1 and will not be forwarded back to 
 broker2 for the consumer to consume. 
 Here is one solution: check whether the forwarded-to broker (e.g. broker1) 
 has any active consumer; when it has none, the message can be forwarded back 
 to the original broker.




[jira] Resolved: (AMQ-2075) Intermittent test failure - BrokerTest

2009-07-29 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/activemq/browse/AMQ-2075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-2075.
-

   Resolution: Fixed
Fix Version/s: 5.3.0

BrokerInfo on a new connection is dispatched asynchronously and could end up 
being dispatched after messages that were being picked up by poll(). A receive 
will ignore it, so using receiveMessage resolves this issue. The intermittent 
nature is the result of async dispatch thread scheduling. I think the other 
browser issue has also been resolved through other changes.
r798842

 Intermittent test failure - BrokerTest
 --

 Key: AMQ-2075
 URL: https://issues.apache.org/activemq/browse/AMQ-2075
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.3.0
 Environment: mac os x 10.5.6
Reporter: David Jencks
Assignee: Gary Tully
 Fix For: 5.3.0


 Only info I have is from surefire report:
   <testcase time="4.017" name="testQueueBrowserWith2Consumers {deliveryMode=2}">
 <failure type="junit.framework.AssertionFailedError" message="m1 is null for index: 0">
 junit.framework.AssertionFailedError: m1 is null for index: 0
 at junit.framework.Assert.fail(Assert.java:47)
 at junit.framework.Assert.assertTrue(Assert.java:20)
 at junit.framework.Assert.assertNotNull(Assert.java:220)
 at 
 org.apache.activemq.broker.BrokerTest.testQueueBrowserWith2Consumers(BrokerTest.java:148)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:585)
 at junit.framework.TestCase.runTest(TestCase.java:154)
 at junit.framework.TestCase.runBare(TestCase.java:127)
 at 
 org.apache.activemq.CombinationTestSupport.runBare(CombinationTestSupport.java:90)
 at junit.framework.TestResult$1.protect(TestResult.java:106)
 at junit.framework.TestResult.runProtected(TestResult.java:124)
 at junit.framework.TestResult.run(TestResult.java:109)
 at junit.framework.TestCase.run(TestCase.java:118)
 at junit.framework.TestSuite.runTest(TestSuite.java:208)
 at junit.framework.TestSuite.run(TestSuite.java:203)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:585)
 at 
 org.apache.maven.surefire.junit.JUnitTestSet.execute(JUnitTestSet.java:210)
 at 
 org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:135)
 at 
 org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:160)
 at org.apache.maven.surefire.Surefire.run(Surefire.java:81)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:585)
 at 
 org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:182)
 at 
 org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:743)
 </failure>
   </testcase>




[jira] Reopened: (AMQ-2158) Raise the logging level and improve logging for memory usage changes

2009-07-29 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/activemq/browse/AMQ-2158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully reopened AMQ-2158:
-


I think this has to be left at debug level: limiting memory usage is a typical 
use case, and this warn will flood the logs, as users typically leave 
warn-level logging enabled.

 Raise the logging level and improve logging for memory usage changes 
 -

 Key: AMQ-2158
 URL: https://issues.apache.org/activemq/browse/AMQ-2158
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.0.0, 5.1.0, 5.2.0
Reporter: Bruce Snyder
Assignee: Gary Tully
 Fix For: 5.3.0

 Attachments: AMQ-2158.patch


 Currently the logging for memory usage changes in the {{Usage.fireEvent()}} 
 method is restricted to debug output. This should be raised to info level by 
 default and warning level if the memory usage is over a certain threshold, 
 say 80%. 
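
The proposed rule amounts to picking a log level from the usage percentage; a minimal sketch (the threshold and all names are illustrative, not the actual Usage code):

```java
// Hedged sketch of the proposal, not ActiveMQ's Usage class: log routine
// usage-changed events at INFO and escalate to WARN only past a threshold.
public class UsageLogLevel {
    static String levelFor(int percentUsage, int warnThresholdPercent) {
        if (percentUsage >= warnThresholdPercent) {
            return "WARN"; // e.g. over 80% of the configured memory limit
        }
        return "INFO";     // routine usage-changed events
    }

    public static void main(String[] args) {
        System.out.println(levelFor(85, 80)); // WARN
        System.out.println(levelFor(40, 80)); // INFO
    }
}
```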




[jira] Resolved: (AMQ-2158) Raise the logging level and improve logging for memory usage changes

2009-07-29 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/activemq/browse/AMQ-2158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-2158.
-

Resolution: Won't Fix

Reverted the additional warn-level logging as it is too verbose in a production 
environment where memory limits are in force. We need an alternative to 
logging, and I guess the JMX API gives us that at the moment. It is not a warn 
event if a limit is reached when that is expected.
rev. 798931

 Raise the logging level and improve logging for memory usage changes 
 -

 Key: AMQ-2158
 URL: https://issues.apache.org/activemq/browse/AMQ-2158
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.0.0, 5.1.0, 5.2.0
Reporter: Bruce Snyder
Assignee: Gary Tully
 Fix For: 5.3.0

 Attachments: AMQ-2158.patch


 Currently the logging for memory usage changes in the {{Usage.fireEvent()}} 
 method is restricted to debug output. This should be raised to info level by 
 default and warning level if the memory usage is over a certain threshold, 
 say 80%. 




[jira] Updated: (AMQ-2334) getJMSRedelivered() incorrectly returns false after a MasterSlave failover

2009-07-29 Thread Kyle Anderson (JIRA)

 [ 
https://issues.apache.org/activemq/browse/AMQ-2334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kyle Anderson updated AMQ-2334:
---

Description: 
Shared master/slave setup, described here 
http://activemq.apache.org/shared-file-system-master-slave.html
Scenario:
1. Transacted consumer receives a message
2. Transacted consumer disconnects prior to committing
3. Transacted consumer #2 receives the same message.

Normally consumer #2 sees that message as getJMSRedelivered() = true.  However, 
if the broker fails and another takes over from the data dir between step 1 and 
3, the redelivery is set as false - even though a consumer has, in fact, seen 
the message before.  See attached unit test.

  was:
Shared master/slave setup, described here 
http://activemq.apache.org/shared-file-system-master-slave.html
Normally if one transacted consumer receives a message, then disconnects 
without committing, the message is marked as getJMSRedelivered() as true.  If 
the broker fails and another takes over before the consumer disconnect, the 
message isn't marked as redelivered to the next consumer.


In my application, I'd prefer a message wrongly marked as redelivered over a 
redelivered message that wasn't marked as such.

So as a quick fix, I modified org.apache.activemq.broker.region.Queue to 
increment the redelivery counter prior to persistent storage, then decrement it 
immediately afterwards on the in-memory copy:
message.incrementRedeliveryCounter();    // persisted copy records one extra delivery
store.addMessage(context, message);
message.setRedeliveryCounter(message.getRedeliveryCounter() - 1);  // restore in-memory copy

Aside from marking all messages as redelivered upon recovery or a page-in, are 
there any possible issues with this? Any better solutions out there?
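
The trick can be demonstrated in isolation with stand-in classes (hypothetical Message/Store types written for this sketch, not the real ActiveMQ classes):

```java
// Hypothetical stand-ins for ActiveMQ's Message and MessageStore, showing
// only the persist-with-incremented-counter trick from the report.
public class RedeliveryTrickDemo {
    static class Message {
        private int redeliveryCounter;
        int getRedeliveryCounter() { return redeliveryCounter; }
        void incrementRedeliveryCounter() { redeliveryCounter++; }
        void setRedeliveryCounter(int c) { redeliveryCounter = c; }
    }
    static class Store {
        int persistedCounter;
        void addMessage(Message m) { persistedCounter = m.getRedeliveryCounter(); }
    }

    public static void main(String[] args) {
        Message message = new Message();
        Store store = new Store();
        message.incrementRedeliveryCounter();   // persisted copy carries count + 1
        store.addMessage(message);
        message.setRedeliveryCounter(message.getRedeliveryCounter() - 1); // in-memory copy restored
        // A broker recovering after failover reads the persisted copy, whose
        // counter of 1 makes the message appear redelivered; normal in-memory
        // dispatch still sees a counter of 0.
        System.out.println(store.persistedCounter + " " + message.getRedeliveryCounter()); // 1 0
    }
}
```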

 getJMSRedelivered() incorrectly returns false after a MasterSlave failover
 --

 Key: AMQ-2334
 URL: https://issues.apache.org/activemq/browse/AMQ-2334
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.2.0
Reporter: Kyle Anderson
 Attachments: SanRedeliver.java

   Original Estimate: 3 hours
  Remaining Estimate: 3 hours

 Shared master/slave setup, described here 
 http://activemq.apache.org/shared-file-system-master-slave.html
 Scenario:
 1. Transacted consumer receives a message
 2. Transacted consumer disconnects prior to committing
 3. Transacted consumer #2 receives the same message.
 Normally consumer #2 sees that message as getJMSRedelivered() = true.  
 However, if the broker fails and another takes over from the data dir between 
 step 1 and 3, the redelivery is set as false - even though a consumer has, in 
 fact, seen the message before.  See attached unit test.




[jira] Commented: (AMQ-2334) getJMSRedelivered() incorrectly returns false after a MasterSlave failover

2009-07-29 Thread Gary Tully (JIRA)

[ 
https://issues.apache.org/activemq/browse/AMQ-2334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=53022#action_53022
 ] 

Gary Tully commented on AMQ-2334:
-

That seems like a reasonable (and smart) solution, but it may make sense as part 
of a broker plugin and/or destination filter so that it can be easily enabled 
or disabled. If redelivery semantics are vital, it can then be enabled through 
config, etc.
From what I can see, any other solution would require storing the message twice, 
which would kill performance.

 getJMSRedelivered() incorrectly returns false after a MasterSlave failover
 --

 Key: AMQ-2334
 URL: https://issues.apache.org/activemq/browse/AMQ-2334
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.2.0
Reporter: Kyle Anderson
 Attachments: SanRedeliver.java

   Original Estimate: 3 hours
  Remaining Estimate: 3 hours

 Shared master/slave setup, described here 
 http://activemq.apache.org/shared-file-system-master-slave.html
 Scenario:
 1. Transacted consumer receives a message
 2. Transacted consumer disconnects prior to committing
 3. Transacted consumer #2 receives the same message.
 Normally consumer #2 sees that message as getJMSRedelivered() = true.  
 However, if the broker fails and another takes over from the data dir between 
 step 1 and 3, the redelivery is set as false - even though a consumer has, in 
 fact, seen the message before.  See attached unit test.




[jira] Resolved: (AMQ-1846) Provide tags to set defaultPrefetchSize in activemq.xml

2009-07-29 Thread Rob Davies (JIRA)

 [ 
https://issues.apache.org/activemq/browse/AMQ-1846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rob Davies resolved AMQ-1846.
-

   Resolution: Fixed
Fix Version/s: (was: 5.4.0)
   5.3.0

Fixed by SVN revision 799090

 Provide tags to set defaultPrefetchSize in activemq.xml
 ---

 Key: AMQ-1846
 URL: https://issues.apache.org/activemq/browse/AMQ-1846
 Project: ActiveMQ
  Issue Type: Task
  Components: Broker
Reporter: Badri
Assignee: Rob Davies
Priority: Minor
 Fix For: 5.3.0


 Hi
 If we could have a facility to set the defaultPrefetchSize in activemq.xml, it 
 would be a great feature.
 Thanks
 Badri




[jira] Commented: (AMQ-2333) Active MQ performance issues when there are more than a 1000 queue'd up messages

2009-07-29 Thread Marcus Malcom (JIRA)

[ 
https://issues.apache.org/activemq/browse/AMQ-2333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=53024#action_53024
 ] 

Marcus Malcom commented on AMQ-2333:


This just happened w/ only 142 messages in a queue

 Active MQ performance issues when there are more than a 1000 queue'd up 
 messages
 

 Key: AMQ-2333
 URL: https://issues.apache.org/activemq/browse/AMQ-2333
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, JMS client
Affects Versions: 5.2.0
Reporter: Marcus Malcom

 Over the past couple of days some of our queues get rather full because of 
 downstream problems. The messages start numbering in the 1000's. When that 
 happens ActiveMQ slows way down. I believe it slows down because we are 
 trying to produce a message to the overloaded queue and it's taking a long 
 time (minutes instead of seconds). Once the overloaded queue is emptied the 
 problems go away.
 Our system pretty much has all the defaults.
 Note: this was not a problem before upgrading to 5.2.0
 Any ideas on what should be done?




[jira] Updated: (AMQ-2333) Active MQ performance issues when there are more than a 100 queue'd up messages

2009-07-29 Thread Marcus Malcom (JIRA)

 [ 
https://issues.apache.org/activemq/browse/AMQ-2333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Malcom updated AMQ-2333:
---

Summary: Active MQ performance issues when there are more than a 100 
queue'd up messages  (was: Active MQ performance issues when there are more 
than a 1000 queue'd up messages)

 Active MQ performance issues when there are more than a 100 queue'd up 
 messages
 ---

 Key: AMQ-2333
 URL: https://issues.apache.org/activemq/browse/AMQ-2333
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, JMS client
Affects Versions: 5.2.0
Reporter: Marcus Malcom

 Over the past couple of days some of our queues get rather full because of 
 downstream problems. The messages start numbering in the 1000's. When that 
 happens ActiveMQ slows way down. I believe it slows down because we are 
 trying to produce a message to the overloaded queue and it's taking a long 
 time (minutes instead of seconds). Once the overloaded queue is emptied the 
 problems go away.
 Our system pretty much has all the defaults.
 Note: this was not a problem before upgrading to 5.2.0
 Any ideas on what should be done?




[jira] Updated: (AMQ-2333) Active MQ performance issues when there are more than 100 queue'd up messages

2009-07-29 Thread Marcus Malcom (JIRA)

 [ 
https://issues.apache.org/activemq/browse/AMQ-2333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Malcom updated AMQ-2333:
---

Summary: Active MQ performance issues when there are more than 100 queue'd 
up messages  (was: Active MQ performance issues when there are more than a 100 
queue'd up messages)

 Active MQ performance issues when there are more than 100 queue'd up messages
 -

 Key: AMQ-2333
 URL: https://issues.apache.org/activemq/browse/AMQ-2333
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, JMS client
Affects Versions: 5.2.0
Reporter: Marcus Malcom

 Over the past couple of days some of our queues get rather full because of 
 downstream problems. The messages start numbering in the 1000's. When that 
 happens ActiveMQ slows way down. I believe it slows down because we are 
 trying to produce a message to the overloaded queue and it's taking a long 
 time (minutes instead of seconds). Once the overloaded queue is emptied the 
 problems go away.
 Our system pretty much has all the defaults.
 Note: this was not a problem before upgrading to 5.2.0
 Any ideas on what should be done?




[jira] Updated: (AMQ-2333) Active MQ performance issues when there are more than 100 queue'd up messages

2009-07-29 Thread Marcus Malcom (JIRA)

 [ 
https://issues.apache.org/activemq/browse/AMQ-2333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Malcom updated AMQ-2333:
---

Priority: Critical  (was: Major)

 Active MQ performance issues when there are more than 100 queue'd up messages
 -

 Key: AMQ-2333
 URL: https://issues.apache.org/activemq/browse/AMQ-2333
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, JMS client
Affects Versions: 5.2.0
Reporter: Marcus Malcom
Priority: Critical

 Over the past couple of days some of our queues get rather full because of 
 downstream problems. The messages start numbering in the 1000's. When that 
 happens ActiveMQ slows way down. I believe it slows down because we are 
 trying to produce a message to the overloaded queue and it's taking a long 
 time (minutes instead of seconds). Once the overloaded queue is emptied the 
 problems go away.
 Our system pretty much has all the defaults.
 Note: this was not a problem before upgrading to 5.2.0
 Any ideas on what should be done?
