Cody Burleson created JCR-3588:
----------------------------------

             Summary: Response time higher on Node1 with load when Node2 has no load
                 Key: JCR-3588
                 URL: https://issues.apache.org/jira/browse/JCR-3588
             Project: Jackrabbit Content Repository
          Issue Type: Bug
          Components: clustering
    Affects Versions: 2.4.3
         Environment: CentOS 6.4 running WebSphere Application Server 7.0.0.19. Jackrabbit cluster configuration with 2 WAS servers. Repository on DB2 9.7.
            Reporter: Cody Burleson
             Fix For: 2.4.3
         Attachments: JackrabbitCluster-ResponseTime.png, Node1repository.xml, Node2repository.xml

In our performance analysis, we are seeing a strange effect that does not make sense to us. It may or may not be a defect, but we need to understand why it occurs. In a 2-node cluster, we run a certain load (reading and writing) directly on Node1 and an equivalent load (reading and writing) on Node2. We measure the response time on both nodes, and it is less than 2 seconds. If we stop the load to one of the servers, the response time on the other server triples (with no additional load).

See the attached image "JackrabbitCluster-ResponseTime.png". The left side of the report shows the period when only Node1 has load and Node2 has none; in this case, the response times on Node1 are about 6 seconds. On the right side of the report, we add an equivalent load to Node2, and the response times on Node1 drop to 2 seconds. The load on Node1 was consistent throughout, yet ADDING load to Node2 actually improves the response time on Node1. Logically, this doesn't make much sense. Could someone please at least help us understand why this may be happening?
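
For context, each load driver talks to its own node through the standard JCR API. The following is only a simplified sketch of the kind of timed read/write round trip we mean by "load" (the repository URL, credentials, node names, and payload are placeholders, not our actual test harness):

    import javax.jcr.Node;
    import javax.jcr.Repository;
    import javax.jcr.Session;
    import javax.jcr.SimpleCredentials;
    import org.apache.jackrabbit.commons.JcrUtils;

    public class PerfProbe {
        public static void main(String[] args) throws Exception {
            // Placeholder URL; each load driver points at its own node (Node1 or Node2).
            Repository repo = JcrUtils.getRepository("http://node1.example.com:9080/jackrabbit/server");
            Session session = repo.login(new SimpleCredentials("admin", "admin".toCharArray()));
            try {
                long start = System.currentTimeMillis();

                // Write: add a small node under a test folder and persist it.
                Node root = session.getRootNode();
                Node folder = root.hasNode("perf-test")
                        ? root.getNode("perf-test")
                        : root.addNode("perf-test", "nt:unstructured");
                Node entry = folder.addNode("entry-" + System.nanoTime(), "nt:unstructured");
                entry.setProperty("payload", "sample content");
                session.save();

                // Read: fetch the folder back and iterate its children.
                session.refresh(false);
                session.getNode("/perf-test").getNodes();

                System.out.println("Round trip took " + (System.currentTimeMillis() - start) + " ms");
            } finally {
                session.logout();
            }
        }
    }

The response times quoted above are for round trips of roughly this shape, measured on each node.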

