[jira] [Commented] (GEODE-837) Jenkins is not picking up test results

2016-06-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321804#comment-15321804
 ] 

ASF subversion and git services commented on GEODE-837:
---

Commit cfe3b65166685b8c4b300fe4a2adddaf5f25f877 in incubator-geode's branch 
refs/heads/feature/GEODE-837 from [~apa...@the9muses.net]
[ https://git-wip-us.apache.org/repos/asf?p=incubator-geode.git;h=cfe3b65 ]

Merge remote-tracking branch 'origin/develop' into feature/GEODE-837


> Jenkins is not picking up test results
> --
>
> Key: GEODE-837
> URL: https://issues.apache.org/jira/browse/GEODE-837
> Project: Geode
>  Issue Type: Bug
>  Components: build
>Reporter: Dan Smith
>Assignee: Kirk Lund
>
> After c5efb80518abc2a2c7390af1d46e7c5892801e55, where we stopped searching 
> for specific test names, Jenkins is no longer reporting dunit test results.
> The tests are still being run, but the XML reports that Jenkins uses are 
> empty.
> I tracked the issue down partially. It looks like what is happening is the 
> dunit tests are running and reporting results, but then when the integration 
> tests run, they generate new XML files that overwrite the dunit results in 
> gemfire-core/build/test-results with files that look like this (note that 
> no test results are reported):
> {noformat}
> <testsuite name="..." tests="0" skipped="0" failures="0" errors="0" 
> timestamp="1970-01-01T00:00:00" hostname="dsmith-virtual" time="0.0">
> {noformat}
> It looks like this has something to do with the junit category stuff. Unit 
> test files aren't getting stomped on like this, but dunit test files are. 
> Perhaps it is something to do with the DistributedTestCase hierarchy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (GEODE-986) CI Failure: MultiuserAPIDUnitTest.testMultiUserUnsupportedAPIs failed with SocketException

2016-06-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321802#comment-15321802
 ] 

ASF subversion and git services commented on GEODE-986:
---

Commit afa7cc815f0dcb47baaa5d81146cf50cdecce83a in incubator-geode's branch 
refs/heads/feature/GEODE-837 from [~bschuchardt]
[ https://git-wip-us.apache.org/repos/asf?p=incubator-geode.git;h=afa7cc8 ]

GEODE-986 CI Failure: MultiuserAPIDUnitTest.testMultiUserUnsupportedAPIs failed 
with SocketException

I'm reverting the change in TcpClient that defaults to v9.0 if a locator
does not respond to a RequestVersion message.  That change wasn't necessary
to fix GEODE_986 and caused some other problems.


> CI Failure: MultiuserAPIDUnitTest.testMultiUserUnsupportedAPIs failed with 
> SocketException
> --
>
> Key: GEODE-986
> URL: https://issues.apache.org/jira/browse/GEODE-986
> Project: Geode
>  Issue Type: Bug
>  Components: membership
>Reporter: Barry Oglesby
>Assignee: Bruce Schuchardt
>  Labels: ci
> Fix For: 1.0.0-incubating.M2
>
>
> Geode_develop_DistributedTests
> Private Build #1662
> Revision: e685fd85ac7e2607f70b47bfb448b1d91a56b103
> {noformat}
> [vm_1][error 2016/02/18 16:24:59.642 PST  Connection(18)-10.118.32.91> tid=0x12] Unexpected problem starting up 
> membership services
> [vm_1]com.gemstone.gemfire.ToDataException: toData failed on DataSerializable 
> class 
> com.gemstone.gemfire.distributed.internal.membership.InternalDistributedMember
> [vm_1]at 
> com.gemstone.gemfire.internal.InternalDataSerializer.invokeToData(InternalDataSerializer.java:2453)
> [vm_1]at 
> com.gemstone.gemfire.internal.InternalDataSerializer.writeDSFID(InternalDataSerializer.java:1412)
> [vm_1]at 
> com.gemstone.gemfire.internal.InternalDataSerializer.basicWriteObject(InternalDataSerializer.java:2150)
> [vm_1]at 
> com.gemstone.gemfire.DataSerializer.writeObject(DataSerializer.java:3241)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.membership.gms.locator.FindCoordinatorRequest.toData(FindCoordinatorRequest.java:87)
> [vm_1]at 
> com.gemstone.gemfire.internal.InternalDataSerializer.invokeToData(InternalDataSerializer.java:2419)
> [vm_1]at 
> com.gemstone.gemfire.internal.InternalDataSerializer.writeDSFID(InternalDataSerializer.java:1412)
> [vm_1]at 
> com.gemstone.gemfire.internal.InternalDataSerializer.basicWriteObject(InternalDataSerializer.java:2150)
> [vm_1]at 
> com.gemstone.gemfire.DataSerializer.writeObject(DataSerializer.java:3241)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.tcpserver.TcpClient.requestToServer(TcpClient.java:145)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.membership.gms.membership.GMSJoinLeave$TcpClientWrapper.sendCoordinatorFindRequest(GMSJoinLeave.java:988)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.membership.gms.membership.GMSJoinLeave.findCoordinator(GMSJoinLeave.java:910)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.membership.gms.membership.GMSJoinLeave.join(GMSJoinLeave.java:242)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.membership.gms.mgr.GMSMembershipManager.join(GMSMembershipManager.java:676)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.membership.gms.mgr.GMSMembershipManager.joinDistributedSystem(GMSMembershipManager.java:765)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.membership.gms.Services.start(Services.java:174)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.membership.gms.GMSMemberFactory.newMembershipManager(GMSMemberFactory.java:105)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.membership.MemberFactory.newMembershipManager(MemberFactory.java:93)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.DistributionManager.(DistributionManager.java:1159)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.DistributionManager.(DistributionManager.java:1211)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.DistributionManager.create(DistributionManager.java:573)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.InternalDistributedSystem.initialize(InternalDistributedSystem.java:652)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.InternalDistributedSystem.newInstance(InternalDistributedSystem.java:277)
> [vm_1]at 
> com.gemstone.gemfire.distributed.DistributedSystem.connect(DistributedSystem.java:1641)
> [vm_1]at 
> com.gemstone.gemfire.test.dunit.DistributedTestCase.getSystem(DistributedTestCase.java:145)
> [vm_1]at 
> 

[jira] [Commented] (GEODE-744) Incorrect use of APP_FETCH_SIZE in GFSH

2016-06-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321800#comment-15321800
 ] 

ASF subversion and git services commented on GEODE-744:
---

Commit ec466fc38e8cc5fc65450362b8d313c35e6ea14f in incubator-geode's branch 
refs/heads/feature/GEODE-837 from [~kduling]
[ https://git-wip-us.apache.org/repos/asf?p=incubator-geode.git;h=ec466fc ]

GEODE-744: Incorrect use of APP_FETCH_SIZE in GFSH

This closes #151


> Incorrect use of APP_FETCH_SIZE in GFSH
> ---
>
> Key: GEODE-744
> URL: https://issues.apache.org/jira/browse/GEODE-744
> Project: Geode
>  Issue Type: Bug
>  Components: gfsh
>Reporter: Jens Deppe
>Assignee: Kevin Duling
> Attachments: workspace (1).zip
>
>
> A customer is facing an easily reproducible issue when executing queries from 
> GFSH. It appears that APP_FETCH_SIZE is being set only when parts of the 
> query are in lower case. It happens in 7.0.X, 8.0.X and 8.1.X.
> Attached to the TRAC is the reproducible scenario. Steps to reproduce:
> Uncompress the file.
> Modify the variables "GEMFIRE" and "JAVA_HOME" in the file setenv.txt.
> Execute "./start_cluster.sh".
> Execute "./run.sh". This script inserts 1500 entries in the region and, 
> afterwards, executes two queries, one using lower case and the other using 
> upper case. You can see from the console that the output is different: one 
> returns the actual size (1500) and the other returns the default 
> APP_FETCH_SIZE (1000).
> Execute "./stop_cluster.sh".
> The fix seems pretty easy to implement: the method "addLimit" of the inner 
> class "SelectExecStep" in the "DataCommandFunction" class should be modified 
> to compare strings case-insensitively. It is not enough to add more "or" 
> clauses to the comparison as we are currently doing, since keywords like 
> "Count" or "coUnt" will still break the functionality. We should compare 
> everything using lower case or upper case (it doesn't matter which), or at 
> least make sure that gfsh converts the query to upper/lower case before 
> actually executing it.
> The actual code with the problem is below:
> {noformat}
> private String addLimit(String query) {
>   boolean containsLimitOrAggregate = query.contains(" limit")
>       || query.contains(" LIMIT") || query.contains("count(*)");
>   if (!containsLimitOrAggregate) {
>     String limitQuery = query + " limit " + getFetchSize();
>     return limitQuery;
>   } else {
>     return query;
>   }
> }
> {noformat}
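The case-insensitive comparison suggested above could look like the following sketch. The enclosing class and the getFetchSize() stub are illustrative stand-ins, not Geode's actual code; only the addLimit logic mirrors the method quoted in the issue.

```java
// Illustrative sketch of a case-insensitive addLimit; the class name and
// the getFetchSize() stub are stand-ins, not Geode's actual code.
class AddLimitSketch {

  // Stand-in for the real fetch-size lookup (APP_FETCH_SIZE defaults to 1000).
  static int getFetchSize() {
    return 1000;
  }

  static String addLimit(String query) {
    // Lower-case once, then compare, so "LIMIT", "Limit" and "coUnt(*)" all match.
    String lowered = query.toLowerCase();
    boolean containsLimitOrAggregate =
        lowered.contains(" limit") || lowered.contains("count(*)");
    return containsLimitOrAggregate ? query : query + " limit " + getFetchSize();
  }

  public static void main(String[] args) {
    System.out.println(addLimit("select * from /region"));         // limit appended
    System.out.println(addLimit("select * from /region LIMIT 5")); // left untouched
  }
}
```

Lower-casing a copy of the query keeps the original text intact for execution while making the keyword check insensitive to case.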





[jira] [Commented] (GEODE-1463) Legacy OperationContexts do not set the appropriate Shiro permission tuple

2016-06-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-1463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321801#comment-15321801
 ] 

ASF subversion and git services commented on GEODE-1463:


Commit 4af707237ad1095f09a27a60a5813c4280d93f4d in incubator-geode's branch 
refs/heads/feature/GEODE-837 from [~jens.deppe]
[ https://git-wip-us.apache.org/repos/asf?p=incubator-geode.git;h=4af7072 ]

Revert "GEODE-1463: Legacy OperationContexts do not set the appropriate Shiro"

This reverts commit 670fae4b3950fa1ce302461312dd1251d8ea2d8a.


> Legacy OperationContexts do not set the appropriate Shiro permission tuple
> --
>
> Key: GEODE-1463
> URL: https://issues.apache.org/jira/browse/GEODE-1463
> Project: Geode
>  Issue Type: Bug
>  Components: security
>Reporter: Jens Deppe
>Assignee: Jens Deppe
>
> Also need to move ResourceOperationContext out of 'internal' as it is a 
> user-visible class.





[jira] [Commented] (GEODE-1495) CQEvent is not properly getting generated after a destroy operation on Partitioned Region.

2016-06-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-1495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321803#comment-15321803
 ] 

ASF subversion and git services commented on GEODE-1495:


Commit e4994c7b3fd42b6804b909796b8589729e9861ea in incubator-geode's branch 
refs/heads/feature/GEODE-837 from [~agingade]
[ https://git-wip-us.apache.org/repos/asf?p=incubator-geode.git;h=e4994c7 ]

GEODE-1495: Changes are made to remove the cached destroyed token/events from 
the CQ.

The CQEvents as seen by CQs are cached in order to avoid applying CQ queries on 
old values.

In the case of a destroy CQEvent, the CQEvents are marked with destroy tokens 
and removed from the cache after the CQEvent is added to the HAQueue.
This works fine for CQs registered locally, but for CQs registered on a peer 
server the events weren't removed from the cache, which resulted in generating 
the wrong CQEvent for a subsequent operation.
This change removes the destroy CQEvent from the cache after the CQEvent is 
distributed to the peer server.


> CQEvent is not properly getting generated after a destroy operation on 
> Partitioned Region.
> ---
>
> Key: GEODE-1495
> URL: https://issues.apache.org/jira/browse/GEODE-1495
> Project: Geode
>  Issue Type: Bug
>  Components: cq
>Reporter: Anilkumar Gingade
>Assignee: Anilkumar Gingade
>
> In a setup with multiple server groups, when a CQ is registered on one server 
> group and gets processed/evaluated on another server group (on which the data 
> buckets are present), the wrong CQ event is generated after a destroy 
> operation.
> Configuration:
> -- A Geode cluster with two server groups:
> Server Group1
> Server Group2 
> -- PR region created on both server groups, with accessor buckets/regions on 
> Group1 and data buckets/regions on Group2.
> -- CQ is registered on Server Group1
> For the following cache operations on Server Group2, the CQEvents are 
> generated as:
> Cache op on same Key - CQEvent
> 
> Create - Create CQEvent (as expected)
> Update - The CQ is no longer satisfied; a Destroy CQEvent is generated (as 
> expected)
> Update - The CQ is satisfied again, but an Update CQEvent is generated 
> instead of a Create CQEvent.





[jira] [Commented] (GEODE-1422) CI Failure: ParallelGatewaySenderOperationsOffHeapDUnitTest.testParallelPropagationSenderStartAfterStop_Scenario2

2016-06-08 Thread Dan Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-1422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321701#comment-15321701
 ] 

Dan Smith commented on GEODE-1422:
--

I did a little digging into this failure. When I look at the state with a 
debugger, it looks like the extra event is sitting in the 
AbstractGatewaySender.tmpQueuedEvents data structure. The size of that 
structure is added to the size of the queue.

It looks like tmpQueuedEvents is only added to while the queue is stopped - 
once it is started, this map is drained. There is some synchronization between 
adding events and draining the queue. But I see this code in 
ParallelAsyncEventQueueImpl.start, which appears to check the tmpQueuedEvents 
size outside of the synchronization:

{code}
  if (!tmpQueuedEvents.isEmpty()) {
enqueueTempEvents();
  }
{code}
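The check-then-act pattern being questioned can be reproduced in isolation. This is a sketch with invented names (the lock field, the String event type, the drain/queueSize helpers) rather than Geode's actual classes; it only illustrates why an isEmpty() check outside the lock that guards adds can race with a concurrent add.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of the suspected race: temporarily-held events are added under a
// lock, but the quoted start() code peeks at the structure outside that lock.
// All names here are illustrative, not Geode's actual fields.
class TmpQueueSketch {
  private final ConcurrentLinkedQueue<String> tmpQueuedEvents = new ConcurrentLinkedQueue<>();
  private final ConcurrentLinkedQueue<String> realQueue = new ConcurrentLinkedQueue<>();
  private final Object lock = new Object();

  void tmpQueue(String event) {
    synchronized (lock) {
      tmpQueuedEvents.add(event);
    }
  }

  // Mirrors the quoted snippet: the isEmpty() check happens outside the lock,
  // so an event added between the check and the drain is missed until some
  // later drain, leaving a transient "extra" event in tmpQueuedEvents.
  void startUnsynchronized() {
    if (!tmpQueuedEvents.isEmpty()) {
      drain();
    }
  }

  // Race-free variant: check and drain under the same lock that guards adds.
  void startSynchronized() {
    synchronized (lock) {
      if (!tmpQueuedEvents.isEmpty()) {
        drain();
      }
    }
  }

  private void drain() {
    String e;
    while ((e = tmpQueuedEvents.poll()) != null) {
      realQueue.add(e);
    }
  }

  int queueSize() {
    // As in the report: the temp structure's size is added to the queue's size.
    return realQueue.size() + tmpQueuedEvents.size();
  }
}
```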

> CI Failure: 
> ParallelGatewaySenderOperationsOffHeapDUnitTest.testParallelPropagationSenderStartAfterStop_Scenario2
> -
>
> Key: GEODE-1422
> URL: https://issues.apache.org/jira/browse/GEODE-1422
> Project: Geode
>  Issue Type: Bug
>  Components: wan
>Reporter: Sai Boorlagadda
>  Labels: CI
>
> {noformat}
> Error Message
> com.gemstone.gemfire.test.dunit.RMIException: While invoking 
> com.gemstone.gemfire.internal.cache.wan.parallel.ParallelGatewaySenderOperationsDUnitTest$$Lambda$998/238210599.run
>  in VM 4 running on Host kuwait.gemstone.com with 8 VMs
> Stacktrace
> com.gemstone.gemfire.test.dunit.RMIException: While invoking 
> com.gemstone.gemfire.internal.cache.wan.parallel.ParallelGatewaySenderOperationsDUnitTest$$Lambda$998/238210599.run
>  in VM 4 running on Host kuwait.gemstone.com with 8 VMs
>   at com.gemstone.gemfire.test.dunit.VM.invoke(VM.java:389)
>   at com.gemstone.gemfire.test.dunit.VM.invoke(VM.java:355)
>   at com.gemstone.gemfire.test.dunit.VM.invoke(VM.java:293)
>   at 
> com.gemstone.gemfire.internal.cache.wan.parallel.ParallelGatewaySenderOperationsDUnitTest.testParallelPropagationSenderStartAfterStop_Scenario2(ParallelGatewaySenderOperationsDUnitTest.java:358)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at junit.framework.TestCase.runBare(TestCase.java:141)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at junit.framework.TestCase.run(TestCase.java:129)
>   at junit.framework.TestSuite.runTest(TestSuite.java:252)
>   at junit.framework.TestSuite.run(TestSuite.java:247)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:86)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:112)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor197.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor196.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> 

[jira] [Created] (GEODE-1515) Using --bind-address when starting a locator causes problems

2016-06-08 Thread Jens Deppe (JIRA)
Jens Deppe created GEODE-1515:
-

 Summary: Using --bind-address when starting a locator causes 
problems
 Key: GEODE-1515
 URL: https://issues.apache.org/jira/browse/GEODE-1515
 Project: Geode
  Issue Type: Bug
  Components: gfsh
Reporter: Jens Deppe


From a slack conversation:
{noformat}
start locator --name=locator --J=-Dgemfire.http-service-port=7575 
--bind-address=192.168.11.1[10334]
start server --name=server1 
--cache-xml-file=src/main/resources/server-cache.xml 
--J=-Dgemfire.start-dev-rest-api=true 
--J=-Dgemfire.http-service-bind-address=192.168.11.1 
--J=-Dgemfire.http-service-port= --locators=192.168.11.1[10334] 
--hostname-for-clients=192.168.11.1 --server-bind-address=192.168.11.1

fails to connect and gives up

removing the --bind-address from locator works (but then it binds to all 
addresses)

got what I needed with --hostname-for-clients on the locator (so changed 
bind-address to hostname-for-clients)
{noformat}

Original can possibly be found here: 
https://pivotal.slack.com/archives/gemfire/p1465422522000117





[jira] [Commented] (GEODE-745) include-locators in shutdown command is ignored

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321519#comment-15321519
 ] 

ASF GitHub Bot commented on GEODE-745:
--

Github user asfgit closed the pull request at:

https://github.com/apache/incubator-geode/pull/153


> include-locators in shutdown command is ignored
> ---
>
> Key: GEODE-745
> URL: https://issues.apache.org/jira/browse/GEODE-745
> Project: Geode
>  Issue Type: Bug
>  Components: rest (admin)
>Reporter: Jens Deppe
>
> The management REST API endpoint for shutdown does not accept the 
> include-locators parameter, and hence does not shut down the locators.
> To reproduce, connect to the cluster using HTTP:
> {noformat}
> gfsh
> connect --use-http --url=...
> shutdown --include-locators=true
> {noformat}
> Observe that the locators are not shut down.





[jira] [Created] (GEODE-1514) CI failure: RolePerformanceDUnitTest.testRolePerformance

2016-06-08 Thread Eric Shu (JIRA)
Eric Shu created GEODE-1514:
---

 Summary: CI failure: RolePerformanceDUnitTest.testRolePerformance
 Key: GEODE-1514
 URL: https://issues.apache.org/jira/browse/GEODE-1514
 Project: Geode
  Issue Type: Bug
  Components: regions
Reporter: Eric Shu








[jira] [Updated] (GEODE-1514) CI failure: RolePerformanceDUnitTest.testRolePerformance

2016-06-08 Thread Eric Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-1514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-1514:

Labels: ci flaky-test  (was: )

> CI failure: RolePerformanceDUnitTest.testRolePerformance
> 
>
> Key: GEODE-1514
> URL: https://issues.apache.org/jira/browse/GEODE-1514
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Eric Shu
>  Labels: ci, flaky-test
>






[jira] [Created] (GEODE-1513) geode-web-api war contains duplicate jars

2016-06-08 Thread Jens Deppe (JIRA)
Jens Deppe created GEODE-1513:
-

 Summary: geode-web-api war contains duplicate jars
 Key: GEODE-1513
 URL: https://issues.apache.org/jira/browse/GEODE-1513
 Project: Geode
  Issue Type: Bug
  Components: build, rest (dev)
Reporter: Jens Deppe


The war file produced by geode-web-api appears to have all of the third-party 
jars duplicated.





[jira] [Commented] (GEODE-1495) CQEvent is not properly getting generated after a destroy operation on Partitioned Region.

2016-06-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-1495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321381#comment-15321381
 ] 

ASF subversion and git services commented on GEODE-1495:


Commit e4994c7b3fd42b6804b909796b8589729e9861ea in incubator-geode's branch 
refs/heads/develop from [~agingade]
[ https://git-wip-us.apache.org/repos/asf?p=incubator-geode.git;h=e4994c7 ]

GEODE-1495: Changes are made to remove the cached destroyed token/events from 
the CQ.

The CQEvents as seen by CQs are cached in order to avoid applying CQ queries on 
old values.

In the case of a destroy CQEvent, the CQEvents are marked with destroy tokens 
and removed from the cache after the CQEvent is added to the HAQueue.
This works fine for CQs registered locally, but for CQs registered on a peer 
server the events weren't removed from the cache, which resulted in generating 
the wrong CQEvent for a subsequent operation.
This change removes the destroy CQEvent from the cache after the CQEvent is 
distributed to the peer server.


> CQEvent is not properly getting generated after a destroy operation on 
> Partitioned Region.
> ---
>
> Key: GEODE-1495
> URL: https://issues.apache.org/jira/browse/GEODE-1495
> Project: Geode
>  Issue Type: Bug
>  Components: cq
>Reporter: Anilkumar Gingade
>Assignee: Anilkumar Gingade
>
> In a setup with multiple server groups, when a CQ is registered on one server 
> group and gets processed/evaluated on another server group (on which the data 
> buckets are present), the wrong CQ event is generated after a destroy 
> operation.
> Configuration:
> -- A Geode cluster with two server groups:
> Server Group1
> Server Group2 
> -- PR region created on both server groups, with accessor buckets/regions on 
> Group1 and data buckets/regions on Group2.
> -- CQ is registered on Server Group1
> For the following cache operations on Server Group2, the CQEvents are 
> generated as:
> Cache op on same Key - CQEvent
> 
> Create - Create CQEvent (as expected)
> Update - The CQ is no longer satisfied; a Destroy CQEvent is generated (as 
> expected)
> Update - The CQ is satisfied again, but an Update CQEvent is generated 
> instead of a Create CQEvent.





[jira] [Commented] (GEODE-745) include-locators in shutdown command is ignored

2016-06-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321129#comment-15321129
 ] 

ASF GitHub Bot commented on GEODE-745:
--

GitHub user gracemeilen opened a pull request:

https://github.com/apache/incubator-geode/pull/153

GEODE-745: add include-locator parameter in the command



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gracemeilen/incubator-geode feature/GEODE-745

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-geode/pull/153.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #153


commit da438bef8a9c95561e653dd461b518355ad76b5e
Author: gmeilen 
Date:   2016-06-08T18:14:30Z

GEODE-745: add include-locator parameter in the command




> include-locators in shutdown command is ignored
> ---
>
> Key: GEODE-745
> URL: https://issues.apache.org/jira/browse/GEODE-745
> Project: Geode
>  Issue Type: Bug
>  Components: rest (admin)
>Reporter: Jens Deppe
>
> The management REST API endpoint for shutdown does not accept the 
> include-locators parameter, and hence does not shut down the locators.
> To reproduce, connect to the cluster using HTTP:
> {noformat}
> gfsh
> connect --use-http --url=...
> shutdown --include-locators=true
> {noformat}
> Observe that the locators are not shut down.





[jira] [Assigned] (GEODE-1470) Upgrade log4j to 2.6

2016-06-08 Thread Kevin Duling (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-1470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Duling reassigned GEODE-1470:
---

Assignee: Kevin Duling

> Upgrade log4j to 2.6
> 
>
> Key: GEODE-1470
> URL: https://issues.apache.org/jira/browse/GEODE-1470
> Project: Geode
>  Issue Type: Improvement
>  Components: logging
>Reporter: Swapnil Bawaskar
>Assignee: Kevin Duling
>
> The new version of log4j (2.6) has made improvements to make it "garbage 
> free" (source: https://www.infoq.com/news/2016/05/log4j-garbage-free). We 
> should upgrade to this version to reap the benefits.





[jira] [Commented] (GEODE-986) CI Failure: MultiuserAPIDUnitTest.testMultiUserUnsupportedAPIs failed with SocketException

2016-06-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321018#comment-15321018
 ] 

ASF subversion and git services commented on GEODE-986:
---

Commit afa7cc815f0dcb47baaa5d81146cf50cdecce83a in incubator-geode's branch 
refs/heads/develop from [~bschuchardt]
[ https://git-wip-us.apache.org/repos/asf?p=incubator-geode.git;h=afa7cc8 ]

GEODE-986 CI Failure: MultiuserAPIDUnitTest.testMultiUserUnsupportedAPIs failed 
with SocketException

I'm reverting the change in TcpClient that defaults to v9.0 if a locator
does not respond to a RequestVersion message.  That change wasn't necessary
to fix GEODE_986 and caused some other problems.


> CI Failure: MultiuserAPIDUnitTest.testMultiUserUnsupportedAPIs failed with 
> SocketException
> --
>
> Key: GEODE-986
> URL: https://issues.apache.org/jira/browse/GEODE-986
> Project: Geode
>  Issue Type: Bug
>  Components: membership
>Reporter: Barry Oglesby
>Assignee: Bruce Schuchardt
>  Labels: ci
> Fix For: 1.0.0-incubating.M2
>
>
> Geode_develop_DistributedTests
> Private Build #1662
> Revision: e685fd85ac7e2607f70b47bfb448b1d91a56b103
> {noformat}
> [vm_1][error 2016/02/18 16:24:59.642 PST  Connection(18)-10.118.32.91> tid=0x12] Unexpected problem starting up 
> membership services
> [vm_1]com.gemstone.gemfire.ToDataException: toData failed on DataSerializable 
> class 
> com.gemstone.gemfire.distributed.internal.membership.InternalDistributedMember
> [vm_1]at 
> com.gemstone.gemfire.internal.InternalDataSerializer.invokeToData(InternalDataSerializer.java:2453)
> [vm_1]at 
> com.gemstone.gemfire.internal.InternalDataSerializer.writeDSFID(InternalDataSerializer.java:1412)
> [vm_1]at 
> com.gemstone.gemfire.internal.InternalDataSerializer.basicWriteObject(InternalDataSerializer.java:2150)
> [vm_1]at 
> com.gemstone.gemfire.DataSerializer.writeObject(DataSerializer.java:3241)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.membership.gms.locator.FindCoordinatorRequest.toData(FindCoordinatorRequest.java:87)
> [vm_1]at 
> com.gemstone.gemfire.internal.InternalDataSerializer.invokeToData(InternalDataSerializer.java:2419)
> [vm_1]at 
> com.gemstone.gemfire.internal.InternalDataSerializer.writeDSFID(InternalDataSerializer.java:1412)
> [vm_1]at 
> com.gemstone.gemfire.internal.InternalDataSerializer.basicWriteObject(InternalDataSerializer.java:2150)
> [vm_1]at 
> com.gemstone.gemfire.DataSerializer.writeObject(DataSerializer.java:3241)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.tcpserver.TcpClient.requestToServer(TcpClient.java:145)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.membership.gms.membership.GMSJoinLeave$TcpClientWrapper.sendCoordinatorFindRequest(GMSJoinLeave.java:988)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.membership.gms.membership.GMSJoinLeave.findCoordinator(GMSJoinLeave.java:910)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.membership.gms.membership.GMSJoinLeave.join(GMSJoinLeave.java:242)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.membership.gms.mgr.GMSMembershipManager.join(GMSMembershipManager.java:676)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.membership.gms.mgr.GMSMembershipManager.joinDistributedSystem(GMSMembershipManager.java:765)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.membership.gms.Services.start(Services.java:174)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.membership.gms.GMSMemberFactory.newMembershipManager(GMSMemberFactory.java:105)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.membership.MemberFactory.newMembershipManager(MemberFactory.java:93)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.DistributionManager.(DistributionManager.java:1159)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.DistributionManager.(DistributionManager.java:1211)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.DistributionManager.create(DistributionManager.java:573)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.InternalDistributedSystem.initialize(InternalDistributedSystem.java:652)
> [vm_1]at 
> com.gemstone.gemfire.distributed.internal.InternalDistributedSystem.newInstance(InternalDistributedSystem.java:277)
> [vm_1]at 
> com.gemstone.gemfire.distributed.DistributedSystem.connect(DistributedSystem.java:1641)
> [vm_1]at 
> com.gemstone.gemfire.test.dunit.DistributedTestCase.getSystem(DistributedTestCase.java:145)
> [vm_1]at 
> 

[jira] [Created] (GEODE-1512) JUnit4CacheTestCase leaves DiskDirs around after tests

2016-06-08 Thread Kirk Lund (JIRA)
Kirk Lund created GEODE-1512:


 Summary: JUnit4CacheTestCase leaves DiskDirs around after tests
 Key: GEODE-1512
 URL: https://issues.apache.org/jira/browse/GEODE-1512
 Project: Geode
  Issue Type: Wish
  Components: tests
Reporter: Kirk Lund


JUnit4CacheTestCase should be changed to use JUnit4 TemporaryFolder for 
DiskDirs to ensure they get cleaned up during tearDown.
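JUnit 4's TemporaryFolder rule automates exactly this create-in-setUp / delete-in-tearDown pairing. A stdlib-only sketch of the same idea follows; the class and method names are illustrative, not JUnit4CacheTestCase's actual API.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

// Stdlib sketch of what TemporaryFolder provides for a test: disk dirs are
// created under a throwaway root in setUp and removed, depth-first, in
// tearDown. Names here are illustrative, not JUnit4CacheTestCase's API.
class DiskDirCleanupSketch {
  private Path diskDirRoot;

  void setUp() throws IOException {
    diskDirRoot = Files.createTempDirectory("diskDirs");
  }

  Path newDiskDir(String name) throws IOException {
    return Files.createDirectory(diskDirRoot.resolve(name));
  }

  void tearDown() throws IOException {
    // Walk deepest-first so files and subdirectories go before their parents.
    try (Stream<Path> paths = Files.walk(diskDirRoot)) {
      paths.sorted(Comparator.reverseOrder())
           .forEach(p -> p.toFile().delete());
    }
  }

  Path root() {
    return diskDirRoot;
  }
}
```

With TemporaryFolder, the setUp/tearDown bookkeeping above collapses to a single @Rule field that JUnit cleans up after each test.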






[jira] [Assigned] (GEODE-1512) JUnit4CacheTestCase leaves DiskDirs around after tests

2016-06-08 Thread Kirk Lund (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk Lund reassigned GEODE-1512:


Assignee: Kirk Lund

> JUnit4CacheTestCase leaves DiskDirs around after tests
> --
>
> Key: GEODE-1512
> URL: https://issues.apache.org/jira/browse/GEODE-1512
> Project: Geode
>  Issue Type: Wish
>  Components: tests
>Reporter: Kirk Lund
>Assignee: Kirk Lund
>
> JUnit4CacheTestCase should be changed to use JUnit4 TemporaryFolder for 
> DiskDirs to ensure they get cleaned up during tearDown.





[jira] [Commented] (GEODE-1372) Geode UDP communications are not secure when SSL is configured

2016-06-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-1372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320864#comment-15320864
 ] 

ASF subversion and git services commented on GEODE-1372:


Commit 49e86cd6e6874a8e33aabe7df590bc0687c3f11e in incubator-geode's branch 
refs/heads/feature/GEODE-1372 from [~hitesh.khamesra]
[ https://git-wip-us.apache.org/repos/asf?p=incubator-geode.git;h=49e86cd ]

GEODE-1372 Added security-udp-dhalgo property.

Added this property in test and code. Fixed issue with InternalDistributedMember
where it was using viewId for equal method.
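For reference, the new property would be set in gemfire.properties alongside the existing SSL settings. The property name comes from the commit message above; the algorithm value shown is an assumed example, not a documented default.

```properties
# Enables Diffie-Hellman based encryption of UDP membership traffic.
# The property name is from the commit above; the value is illustrative.
security-udp-dhalgo=AES:128
```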


> Geode UDP communications are not secure when SSL is configured
> --
>
> Key: GEODE-1372
> URL: https://issues.apache.org/jira/browse/GEODE-1372
> Project: Geode
>  Issue Type: New Feature
>  Components: membership
>Reporter: Bruce Schuchardt
>Assignee: Hitesh Khamesra
>
> GemFire servers use UDP requests to communicate membership views, suspect 
> processing and other information. When GemFire SSL is enabled, only the TCP 
> requests are encrypted; UDP requests are not.





[jira] [Commented] (GEODE-1463) Legacy OperationContexts do not set the appropriate Shiro permission tuple

2016-06-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-1463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320717#comment-15320717
 ] 

ASF subversion and git services commented on GEODE-1463:


Commit 4af707237ad1095f09a27a60a5813c4280d93f4d in incubator-geode's branch 
refs/heads/develop from [~jens.deppe]
[ https://git-wip-us.apache.org/repos/asf?p=incubator-geode.git;h=4af7072 ]

Revert "GEODE-1463: Legacy OperationContexts do not set the appropriate Shiro"

This reverts commit 670fae4b3950fa1ce302461312dd1251d8ea2d8a.


> Legacy OperationContexts do not set the appropriate Shiro permission tuple
> --
>
> Key: GEODE-1463
> URL: https://issues.apache.org/jira/browse/GEODE-1463
> Project: Geode
>  Issue Type: Bug
>  Components: security
>Reporter: Jens Deppe
>Assignee: Jens Deppe
>
> Also need to move ResourceOperationContext out of 'internal' as it is a 
> user-visible class.


