[jira] [Resolved] (GEODE-7414) SSL ClientHello server_name extension

2020-05-06 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac resolved GEODE-7414.
-
Resolution: Fixed

> SSL ClientHello server_name extension
> -
>
> Key: GEODE-7414
> URL: https://issues.apache.org/jira/browse/GEODE-7414
> Project: Geode
>  Issue Type: Improvement
>  Components: security
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We propose to add the *server_name extension to the ClientHello message*. The 
> extension would hold the distributed system ID of the site where the 
> connection originated. This will be used to distinguish internal Geode 
> communication from communication between Geode sites.
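As background on the mechanism, a client attaches the server_name (SNI) value to the TLS handshake through SSLParameters before the handshake starts. The following is a minimal JSSE sketch only, not Geode's implementation; the host, port, and the use of the distributed system ID as the SNI value are illustrative assumptions.

{noformat}
import java.util.Collections;
import javax.net.ssl.SNIHostName;
import javax.net.ssl.SNIServerName;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class SniClientSketch {
  public static void main(String[] args) throws Exception {
    // Illustrative: the originating site's distributed system ID, carried as the SNI value.
    String distributedSystemId = "ds-2";

    SSLSocketFactory factory = SSLContext.getDefault().getSocketFactory();
    try (SSLSocket socket = (SSLSocket) factory.createSocket("receiver.example.com", 5431)) {
      SSLParameters params = socket.getSSLParameters();
      // Puts a server_name entry into the ClientHello of the next handshake.
      params.setServerNames(
          Collections.<SNIServerName>singletonList(new SNIHostName(distributedSystemId)));
      socket.setSSLParameters(params);
      socket.startHandshake();
      // The receiving side can read the value (e.g. via an SNIMatcher) to tell
      // inter-site connections apart from intra-site ones.
    }
  }
}
{noformat}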



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7414) SSL ClientHello server_name extension

2020-05-06 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac updated GEODE-7414:

Labels: pull-request-available  (was: needs-review pull-request-available)

> SSL ClientHello server_name extension
> -
>
> Key: GEODE-7414
> URL: https://issues.apache.org/jira/browse/GEODE-7414
> Project: Geode
>  Issue Type: Improvement
>  Components: security
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We propose to add the *server_name extension to the ClientHello message*. The 
> extension would hold the distributed system ID of the site where the 
> connection originated. This will be used to distinguish internal Geode 
> communication from communication between Geode sites.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8083) Add API check job to CI

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101282#comment-17101282
 ] 

ASF GitHub Bot commented on GEODE-8083:
---

onichols-pivotal commented on pull request #5066:
URL: https://github.com/apache/geode/pull/5066#issuecomment-624959652


   Is this job likely to be red or green initially?  Since our last release 
version is 1.12.0, if there have been any inadvertent breaking API changes 
since then, do they need to be addressed first?
   
   Is there any mechanism to add an exclude/override if this tool flags 
something that we decide we are ok with?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add API check job to CI
> ---
>
> Key: GEODE-8083
> URL: https://issues.apache.org/jira/browse/GEODE-8083
> Project: Geode
>  Issue Type: Improvement
>  Components: ci
>Reporter: Sean Goller
>Priority: Major
>
> In order to combat API breaking changes, we need a CI job that compares the 
> current API against the last release.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8083) Add API check job to CI

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101281#comment-17101281
 ] 

ASF GitHub Bot commented on GEODE-8083:
---

onichols-pivotal commented on a change in pull request #5066:
URL: https://github.com/apache/geode/pull/5066#discussion_r421165685



##
File path: ci/pipelines/shared/jinja.variables.yml
##
@@ -202,3 +202,15 @@ tests:
   PLATFORM: windows
   RAM: '64'
   name: WindowsUnit
+- ARTIFACT_SLUG: apicheck
+  CALL_STACK_TIMEOUT: '20700'

Review comment:
   Maybe not applicable to this job, but in general CALL_STACK_TIMEOUT 
should be 10-15 minutes less than EXECUTE_TEST_TIMEOUT, which you've set to 1h 
(3600 seconds). So `2700` might be a better choice than `20700`.

##
File path: ci/pipelines/shared/jinja.variables.yml
##
@@ -202,3 +202,15 @@ tests:
   PLATFORM: windows
   RAM: '64'
   name: WindowsUnit
+- ARTIFACT_SLUG: apicheck
+  CALL_STACK_TIMEOUT: '20700'
+  CPUS: '4'
+  DUNIT_PARALLEL_FORKS: '0'
+  EXECUTE_TEST_TIMEOUT: 1h
+  GRADLE_TASK: geode-assembly:japicmp
+  MAX_IN_FLIGHT: 1
+  PARALLEL_DUNIT: 'false'
+  PARALLEL_GRADLE: 'false'
+  PLATFORM: linux
+  RAM: '16'
+  name: ApiCheck

Review comment:
   will this result in both ApiCheckOpenJdk8 and ApiCheckOpenJdk11 jobs?  
Is there any value to having both?  Ideally we would add a new flag in 
jinja.variables to allow each job to specify JDK version(s) desired per job.  
But that can be a future enhancement.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add API check job to CI
> ---
>
> Key: GEODE-8083
> URL: https://issues.apache.org/jira/browse/GEODE-8083
> Project: Geode
>  Issue Type: Improvement
>  Components: ci
>Reporter: Sean Goller
>Priority: Major
>
> In order to combat API breaking changes, we need a CI job that compares the 
> current API against the last release.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (GEODE-8073) NullPointerException thrown in PartitionedRegion.handleOldNodes

2020-05-06 Thread Nabarun Nag (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nabarun Nag closed GEODE-8073.
--

> NullPointerException thrown in PartitionedRegion.handleOldNodes
> ---
>
> Key: GEODE-8073
> URL: https://issues.apache.org/jira/browse/GEODE-8073
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: caching-applications
> Fix For: 1.12.1, 1.13.0, 1.14.0
>
>
> The NPE can be thrown when a remote node is gone unexpectedly.
> {noformat}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.geode.internal.cache.PartitionedRegion.handleOldNodes(PartitionedRegion.java:4610)
> at 
> org.apache.geode.internal.cache.PartitionedRegion.fetchEntries(PartitionedRegion.java:4689)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVKeysPR(BaseCommand.java:1191)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVAllKeys(BaseCommand.java:1124)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKeysValuesPolicy(BaseCommand.java:973)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.fillAndSendRegisterInterestResponseChunks(BaseCommand.java:905)
> at 
> org.apache.geode.internal.cache.tier.sockets.command.RegisterInterest61.cmdExecute(RegisterInterest61.java:260)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:183)
> at 
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMessage(ServerConnection.java:848)
> at 
> org.apache.geode.internal.cache.tier.sockets.OriginalServerConnection.doOneMessage(OriginalServerConnection.java:72)
> at 
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1212)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at 
> org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.lambda$initializeServerConnectionThreadPool$3(AcceptorImpl.java:686)
> at 
> org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8063) Access violation during Cache destruction caused by ClientMetadataService

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101231#comment-17101231
 ] 

ASF GitHub Bot commented on GEODE-8063:
---

pdxcodemonkey commented on a change in pull request #597:
URL: https://github.com/apache/geode-native/pull/597#discussion_r421125677



##
File path: cppcache/src/CacheImpl.cpp
##
@@ -640,13 +640,16 @@ void CacheImpl::readyForEvents() {
   }
 }
 
-bool CacheImpl::isPoolInMultiuserMode(std::shared_ptr regionPtr) {
-  const auto& poolName = regionPtr->getAttributes().getPoolName();
+bool CacheImpl::isPoolInMultiuserMode(std::shared_ptr region) {
+  const auto& poolName = region->getAttributes().getPoolName();
 
   if (!poolName.empty()) {
-auto poolPtr = regionPtr->getCache().getPoolManager().find(poolName);
-if (poolPtr != nullptr && !poolPtr->isDestroyed()) {
-  return poolPtr->getMultiuserAuthentication();
+auto pool = static_cast(*region)
+.getCacheImpl()
+->getPoolManager()
+.find(poolName);
+if (pool && !pool->isDestroyed()) {
+  return pool->getMultiuserAuthentication();

Review comment:
   Also is there a reasonable way to write a test to force this race 
condition or something?
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Access violation during Cache destruction caused by ClientMetadataService
> --
>
> Key: GEODE-8063
> URL: https://issues.apache.org/jira/browse/GEODE-8063
> Project: Geode
>  Issue Type: Bug
>  Components: native client
>Reporter: Jacob Barrett
>Assignee: Jacob Barrett
>Priority: Major
>
> During the destruction of {{Cache}}, the pointer to {{CacheImpl}} is invalid. 
> Calls from {{ClientMetadataService}} attempt to access {{Pool}} instances 
> through the {{Cache}} instance rather than {{CacheImpl}}, resulting in an 
> access violation. This issue is timing-dependent and isn't always reproducible. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8060) GemFireCacheImplCloseTest.close_blocksUntilFirstCallToCloseCompletes fails intermittently

2020-05-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101226#comment-17101226
 ] 

ASF subversion and git services commented on GEODE-8060:


Commit 9e957f1db7760aa2bdc767c9c469e926cd490cad in geode's branch 
refs/heads/develop from Kirk Lund
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=9e957f1 ]

GEODE-8060: Fix flakiness in GemFireCacheImplCloseTest (#5041)



> GemFireCacheImplCloseTest.close_blocksUntilFirstCallToCloseCompletes fails 
> intermittently
> -
>
> Key: GEODE-8060
> URL: https://issues.apache.org/jira/browse/GEODE-8060
> Project: Geode
>  Issue Type: Bug
>  Components: tests
>Reporter: Kirk Lund
>Assignee: Kirk Lund
>Priority: Major
>  Labels: GeodeOperationAPI
>
> {noformat}
> org.apache.geode.internal.cache.GemFireCacheImplCloseTest > 
> close_blocksUntilFirstCallToCloseCompletes FAILED
> org.junit.ComparisonFailure: [ThreadId1=47 and threadId2=49] 
> expected:<4[7]L> but was:<4[9]L>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> org.apache.geode.internal.cache.GemFireCacheImplCloseTest.close_blocksUntilFirstCallToCloseCompletes(GemFireCacheImplCloseTest.java:225)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8083) Add API check job to CI

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101224#comment-17101224
 ] 

ASF GitHub Bot commented on GEODE-8083:
---

smgoller opened a new pull request #5066:
URL: https://github.com/apache/geode/pull/5066


   * Add API check Gradle task.
   * Add CI job that calls the API check Gradle task.
   * Fix spotless rules so they don't mistakenly mess up our Gradle files.
   
   Co-authored-by: Sean Goller 
   Co-authored-by: Robert Houghton 
   
   Thank you for submitting a contribution to Apache Geode.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in 
the commit message?
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `develop`)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   - [ ] Does `gradlew build` run cleanly?
   
   - [ ] Have you written or updated unit tests to verify your changes?
   
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   
   ### Note:
   Please ensure that once the PR is submitted, you check Concourse for build 
issues and submit an update to your PR as soon as possible. If you need help, 
please send an email to d...@geode.apache.org.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add API check job to CI
> ---
>
> Key: GEODE-8083
> URL: https://issues.apache.org/jira/browse/GEODE-8083
> Project: Geode
>  Issue Type: Improvement
>  Components: ci
>Reporter: Sean Goller
>Priority: Major
>
> In order to combat API breaking changes, we need a CI job that compares the 
> current API against the last release.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-8083) Add API check job to CI

2020-05-06 Thread Sean Goller (Jira)
Sean Goller created GEODE-8083:
--

 Summary: Add API check job to CI
 Key: GEODE-8083
 URL: https://issues.apache.org/jira/browse/GEODE-8083
 Project: Geode
  Issue Type: Improvement
  Components: ci
Reporter: Sean Goller


In order to combat API breaking changes, we need a CI job that compares the 
current API against the last release.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-7864) Code improvement refactoring

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101217#comment-17101217
 ] 

ASF GitHub Bot commented on GEODE-7864:
---

DonalEvans commented on pull request #5049:
URL: https://github.com/apache/geode/pull/5049#issuecomment-624910507


   > easy way to test is try changing Region.SEPARATOR to something else, right?
   
   This is just a first pass, fixing uses like `getRegion("/" + regionName)`. 
It doesn't fix uses like `getRegion("/regionName")`. If you think it's better 
to take an all-or-nothing approach rather than incremental improvements, I'm 
okay with that too, but it might make proper review of the changes difficult. 
The last thing I want to do is replace a file path separator with 
Region.SEPARATOR and have it slip through unnoticed.
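For illustration, a minimal sketch of the call-site patterns described above; the region names are made up, while Region.SEPARATOR and Cache.getRegion are existing Geode APIs.

{noformat}
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.Region;

class SeparatorCallSites {
  // Covered by this pass: the literal "/" is concatenated with a variable.
  Region<String, Object> before(Cache cache, String regionName) {
    return cache.getRegion("/" + regionName);
  }

  Region<String, Object> after(Cache cache, String regionName) {
    return cache.getRegion(Region.SEPARATOR + regionName);
  }

  // Not covered by this pass: the separator is baked into a single string literal.
  Region<String, Object> untouched(Cache cache) {
    return cache.getRegion("/customers");
  }
}
{noformat}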



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Code improvement refactoring
> 
>
> Key: GEODE-7864
> URL: https://issues.apache.org/jira/browse/GEODE-7864
> Project: Geode
>  Issue Type: Improvement
>Reporter: Nabarun Nag
>Priority: Major
>  Time Spent: 13h 10m
>  Remaining Estimate: 0h
>
> This is a placeholder ticket.
>  * It is used for refactoring work.
>  * Its number is used in the commit messages for such changes.
>  * This ticket will never be closed.
>  * It is used to mark improvements such as correcting spelling mistakes, 
> writing more efficient Java code, etc.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8041) Create ManagementService Interface

2020-05-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101216#comment-17101216
 ] 

ASF subversion and git services commented on GEODE-8041:


Commit 353192ee8600a4ee94c661d4ac22fd2c8bf34702 in geode's branch 
refs/heads/feature/GEODE-8067 from Patrick Johnson
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=353192e ]

GEODE-8041 - Create ManagementService interface. (#5062)



> Create ManagementService Interface
> --
>
> Key: GEODE-8041
> URL: https://issues.apache.org/jira/browse/GEODE-8041
> Project: Geode
>  Issue Type: Sub-task
>Reporter: Patrick Johnsn
>Assignee: Patrick Johnsn
>Priority: Major
>
> Create a ManagementService interface. The ManagementService will configure 
> and start Geode (create a Cache given some configuration properties) after 
> the BootstrappingService has bootstrapped the environment and loaded the 
> relevant modules.
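A hypothetical sketch of what such an interface might look like follows; every name and signature in it is an illustrative assumption, not the interface merged under GEODE-8041.

{noformat}
import java.util.Properties;

// Hypothetical sketch only; the actual GEODE-8041 interface may differ.
public interface ManagementService {

  /** Configure and start Geode, i.e. create a Cache from the given configuration properties. */
  void start(Properties configurationProperties);

  /** Stop the Cache that start() created. */
  void stop();

  /** True once start() has completed and a Cache is running. */
  boolean isRunning();
}
{noformat}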



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8068) Revert GEODE-8033 and GEODE-8044

2020-05-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101210#comment-17101210
 ] 

ASF subversion and git services commented on GEODE-8068:


Commit 0ef8c5f3da9b648f056fb16252ae941449d01177 in geode's branch 
refs/heads/develop from Patrick Johnson
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=0ef8c5f ]

GEODE-8068 - Revert GEODE-8044 and GEODE-8033. (#5045)



> Revert GEODE-8033 and GEODE-8044
> 
>
> Key: GEODE-8068
> URL: https://issues.apache.org/jira/browse/GEODE-8068
> Project: Geode
>  Issue Type: New Feature
>Reporter: Patrick Johnsn
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8044) Move services related to classloader-isolation to geode-common-services.

2020-05-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101211#comment-17101211
 ] 

ASF subversion and git services commented on GEODE-8044:


Commit 0ef8c5f3da9b648f056fb16252ae941449d01177 in geode's branch 
refs/heads/develop from Patrick Johnson
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=0ef8c5f ]

GEODE-8068 - Revert GEODE-8044 and GEODE-8033. (#5045)



> Move services related to classloader-isolation to geode-common-services.
> 
>
> Key: GEODE-8044
> URL: https://issues.apache.org/jira/browse/GEODE-8044
> Project: Geode
>  Issue Type: Sub-task
>Reporter: Patrick Johnsn
>Assignee: Patrick Johnsn
>Priority: Major
> Fix For: 1.13.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8033) Create ModuleService Interface

2020-05-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101212#comment-17101212
 ] 

ASF subversion and git services commented on GEODE-8033:


Commit 0ef8c5f3da9b648f056fb16252ae941449d01177 in geode's branch 
refs/heads/develop from Patrick Johnson
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=0ef8c5f ]

GEODE-8068 - Revert GEODE-8044 and GEODE-8033. (#5045)



> Create ModuleService Interface
> --
>
> Key: GEODE-8033
> URL: https://issues.apache.org/jira/browse/GEODE-8033
> Project: Geode
>  Issue Type: Sub-task
>Reporter: Patrick Johnsn
>Assignee: Patrick Johnsn
>Priority: Major
> Fix For: 1.13.0
>
>
> Introduce a new Gradle sub-project called `geode-module` and create a 
> ModuleService interface within it. This interface will be used to load/unload 
> modules and services in a classloader-isolated way.
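A hypothetical sketch of such an interface follows; all names and signatures are illustrative assumptions rather than the actual geode-module API.

{noformat}
import java.util.List;

// Hypothetical sketch only; the real geode-module interface may differ.
public interface ModuleService {

  /** Load the named module (e.g. a jar) into its own isolated ClassLoader. */
  boolean loadModule(String moduleName);

  /** Unload a previously loaded module and discard its ClassLoader. */
  boolean unloadModule(String moduleName);

  /** Find implementations of the given service type across all loaded modules. */
  <T> List<T> loadService(Class<T> serviceType);
}
{noformat}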



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8078) Exceptions in locator logs when hitting members REST endpoint

2020-05-06 Thread Geode Integration (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101209#comment-17101209
 ] 

Geode Integration commented on GEODE-8078:
--

A Pivotal Tracker story has been created for this Issue: 
https://www.pivotaltracker.com/story/show/172708465

> Exceptions in locator logs when hitting members REST endpoint
> -
>
> Key: GEODE-8078
> URL: https://issues.apache.org/jira/browse/GEODE-8078
> Project: Geode
>  Issue Type: Bug
>  Components: management
>Reporter: Aaron Lindsey
>Priority: Major
>  Labels: GeodeOperationAPI
>
> I'm seeing the following exceptions in locator logs when I try to hit the 
> REST endpoint /management/v1/members/{id} before the member has finished 
> starting up. I need to do this because I have a program that polls that 
> endpoint to wait until the member is online. Ideally these errors would not 
> show up in the logs, but would instead be reflected in the status code of the 
> REST response.
> {quote}[error 2020/04/06 22:05:59.086 UTC  tid=0x31] class 
> org.apache.geode.cache.CacheClosedException cannot be cast to class 
> org.apache.geode.management.runtime.RuntimeInfo 
> (org.apache.geode.cache.CacheClosedException and 
> org.apache.geode.management.runtime.RuntimeInfo are in unnamed module of 
> loader 'app')
> java.lang.ClassCastException: class 
> org.apache.geode.cache.CacheClosedException cannot be cast to class 
> org.apache.geode.management.runtime.RuntimeInfo 
> (org.apache.geode.cache.CacheClosedException and 
> org.apache.geode.management.runtime.RuntimeInfo are in unnamed module of 
> loader 'app')
> at 
> org.apache.geode.management.internal.api.LocatorClusterManagementService.list(LocatorClusterManagementService.java:417)
> at 
> org.apache.geode.management.internal.api.LocatorClusterManagementService.get(LocatorClusterManagementService.java:434)
> at 
> org.apache.geode.management.internal.rest.controllers.MemberManagementController.getMember(MemberManagementController.java:50)
> at 
> org.apache.geode.management.internal.rest.controllers.MemberManagementController$$FastClassBySpringCGLIB$$3634e452.invoke()
> at 
> org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
> at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:769)
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
> at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:747)
> at 
> org.springframework.security.access.intercept.aopalliance.MethodSecurityInterceptor.invoke(MethodSecurityInterceptor.java:69)
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
> at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:747)
> at 
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689)
> at 
> org.apache.geode.management.internal.rest.controllers.MemberManagementController$$EnhancerBySpringCGLIB$$2893b195.getMember()
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> at 
> org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:190)
> at 
> org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:138)
> at 
> org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:106)
> at 
> org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:888)
> at 
> org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:793)
> at 
> org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
> at 
> org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1040)
> at 
> org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:943)
> at 
> 

[jira] [Commented] (GEODE-7864) Code improvement refactoring

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101208#comment-17101208
 ] 

ASF GitHub Bot commented on GEODE-7864:
---

onichols-pivotal commented on pull request #5049:
URL: https://github.com/apache/geode/pull/5049#issuecomment-624908788


   easy way to test is try changing Region.SEPARATOR to something else, right?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Code improvement refactoring
> 
>
> Key: GEODE-7864
> URL: https://issues.apache.org/jira/browse/GEODE-7864
> Project: Geode
>  Issue Type: Improvement
>Reporter: Nabarun Nag
>Priority: Major
>  Time Spent: 13h 10m
>  Remaining Estimate: 0h
>
> This is a placeholder ticket.
>  * It is used for refactoring work.
>  * Its number is used in the commit messages for such changes.
>  * This ticket will never be closed.
>  * It is used to mark improvements such as correcting spelling mistakes, 
> writing more efficient Java code, etc.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-8078) Exceptions in locator logs when hitting members REST endpoint

2020-05-06 Thread Jinmei Liao (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinmei Liao updated GEODE-8078:
---
Labels: GeodeOperationAPI  (was: )

> Exceptions in locator logs when hitting members REST endpoint
> -
>
> Key: GEODE-8078
> URL: https://issues.apache.org/jira/browse/GEODE-8078
> Project: Geode
>  Issue Type: Bug
>  Components: management
>Reporter: Aaron Lindsey
>Priority: Major
>  Labels: GeodeOperationAPI
>
> I'm seeing the following exceptions in locator logs when I try to hit the 
> REST endpoint /management/v1/members/{id} before the member has finished 
> starting up. I need to do this because I have a program that polls that 
> endpoint to wait until the member is online. Ideally these errors would not 
> show up in the logs, but would instead be reflected in the status code of the 
> REST response.
> {quote}[error 2020/04/06 22:05:59.086 UTC  tid=0x31] class 
> org.apache.geode.cache.CacheClosedException cannot be cast to class 
> org.apache.geode.management.runtime.RuntimeInfo 
> (org.apache.geode.cache.CacheClosedException and 
> org.apache.geode.management.runtime.RuntimeInfo are in unnamed module of 
> loader 'app')
> java.lang.ClassCastException: class 
> org.apache.geode.cache.CacheClosedException cannot be cast to class 
> org.apache.geode.management.runtime.RuntimeInfo 
> (org.apache.geode.cache.CacheClosedException and 
> org.apache.geode.management.runtime.RuntimeInfo are in unnamed module of 
> loader 'app')
> at 
> org.apache.geode.management.internal.api.LocatorClusterManagementService.list(LocatorClusterManagementService.java:417)
> at 
> org.apache.geode.management.internal.api.LocatorClusterManagementService.get(LocatorClusterManagementService.java:434)
> at 
> org.apache.geode.management.internal.rest.controllers.MemberManagementController.getMember(MemberManagementController.java:50)
> at 
> org.apache.geode.management.internal.rest.controllers.MemberManagementController$$FastClassBySpringCGLIB$$3634e452.invoke()
> at 
> org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
> at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:769)
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
> at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:747)
> at 
> org.springframework.security.access.intercept.aopalliance.MethodSecurityInterceptor.invoke(MethodSecurityInterceptor.java:69)
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
> at 
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:747)
> at 
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689)
> at 
> org.apache.geode.management.internal.rest.controllers.MemberManagementController$$EnhancerBySpringCGLIB$$2893b195.getMember()
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> at 
> org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:190)
> at 
> org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:138)
> at 
> org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:106)
> at 
> org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:888)
> at 
> org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:793)
> at 
> org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
> at 
> org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1040)
> at 
> org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:943)
> at 
> org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006)
> at 
> 

[jira] [Updated] (GEODE-8055) can not create index on sub regions

2020-05-06 Thread Jinmei Liao (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinmei Liao updated GEODE-8055:
---
Labels: GeodeOperationAPI  (was: )

> can not create index on sub regions
> ---
>
> Key: GEODE-8055
> URL: https://issues.apache.org/jira/browse/GEODE-8055
> Project: Geode
>  Issue Type: Bug
>  Components: gfsh
>Affects Versions: 1.7.0, 1.8.0, 1.10.0, 1.9.2, 1.11.0, 1.12.0
>Reporter: Jinmei Liao
>Priority: Major
>  Labels: GeodeOperationAPI
> Fix For: 1.12.1, 1.13.0, 1.14.0
>
>
> When trying to use the "create index" command in gfsh to create an index on 
> sub-regions, we get the following message:
> "Sub-regions are unsupported"
> Prior to 1.6, this was possible.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8035) Parallel Disk Store Recovery when Cluster Restarts

2020-05-06 Thread Geode Integration (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101200#comment-17101200
 ] 

Geode Integration commented on GEODE-8035:
--

A Pivotal Tracker story has been created for this Issue: 
https://www.pivotaltracker.com/story/show/172708308

> Parallel Disk Store Recovery when Cluster Restarts
> --
>
> Key: GEODE-8035
> URL: https://issues.apache.org/jira/browse/GEODE-8035
> Project: Geode
>  Issue Type: Improvement
>Reporter: Jianxia Chen
>Assignee: Jianxia Chen
>Priority: Major
>  Labels: GeodeCommons, GeodeOperationAPI
>
> Currently, when the Geode cluster restarts, disk store recovery is 
> serialized. When all regions share the same disk store, the restart process 
> is time consuming. To improve performance, different regions can use 
> different disk stores on different disk controllers, and disk store recovery 
> can be done in parallel. This is expected to significantly reduce the time 
> needed to restart the Geode cluster.
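For illustration, a minimal sketch of the region-per-disk-store layout the description refers to, using existing Geode APIs (DiskStoreFactory and RegionFactory.setDiskStoreName). The directory paths and region names are made up, and the parallel recovery itself, which is the subject of this ticket, is not shown.

{noformat}
import java.io.File;
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.RegionShortcut;

public class SeparateDiskStores {
  public static void main(String[] args) {
    Cache cache = new CacheFactory().create();

    // Illustrative paths; ideally each directory sits on a different disk controller.
    cache.createDiskStoreFactory()
        .setDiskDirs(new File[] {new File("/data/controller-a/storeA")})
        .create("storeA");
    cache.createDiskStoreFactory()
        .setDiskDirs(new File[] {new File("/data/controller-b/storeB")})
        .create("storeB");

    // Each persistent region is tied to its own disk store, so recovery work is
    // spread across stores instead of funnelled through a single one.
    cache.<String, Object>createRegionFactory(RegionShortcut.PARTITION_PERSISTENT)
        .setDiskStoreName("storeA")
        .create("orders");
    cache.<String, Object>createRegionFactory(RegionShortcut.PARTITION_PERSISTENT)
        .setDiskStoreName("storeB")
        .create("customers");

    cache.close();
  }
}
{noformat}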



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-8035) Parallel Disk Store Recovery when Cluster Restarts

2020-05-06 Thread Anilkumar Gingade (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anilkumar Gingade updated GEODE-8035:
-
Labels: GeodeCommons GeodeOperationAPI  (was: GeodeCommons GeodeOpertionApi)

> Parallel Disk Store Recovery when Cluster Restarts
> --
>
> Key: GEODE-8035
> URL: https://issues.apache.org/jira/browse/GEODE-8035
> Project: Geode
>  Issue Type: Improvement
>Reporter: Jianxia Chen
>Assignee: Jianxia Chen
>Priority: Major
>  Labels: GeodeCommons, GeodeOperationAPI
>
> Currently, when the Geode cluster restarts, disk store recovery is 
> serialized. When all regions share the same disk store, the restart process 
> is time consuming. To improve performance, different regions can use 
> different disk stores on different disk controllers, and disk store recovery 
> can be done in parallel. This is expected to significantly reduce the time 
> needed to restart the Geode cluster.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-8035) Parallel Disk Store Recovery when Cluster Restarts

2020-05-06 Thread Anilkumar Gingade (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anilkumar Gingade updated GEODE-8035:
-
Labels: GeodeCommons GeodeOpertionApi  (was: GeodeCommons)

> Parallel Disk Store Recovery when Cluster Restarts
> --
>
> Key: GEODE-8035
> URL: https://issues.apache.org/jira/browse/GEODE-8035
> Project: Geode
>  Issue Type: Improvement
>Reporter: Jianxia Chen
>Assignee: Jianxia Chen
>Priority: Major
>  Labels: GeodeCommons, GeodeOpertionApi
>
> Currently, when the Geode cluster restarts, disk store recovery is 
> serialized. When all regions share the same disk store, the restart process 
> is time consuming. To improve performance, different regions can use 
> different disk stores on different disk controllers, and disk store recovery 
> can be done in parallel. This is expected to significantly reduce the time 
> needed to restart the Geode cluster.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-7678) Partitioned Region clear operations must invoke cache level listeners

2020-05-06 Thread Geode Integration (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101197#comment-17101197
 ] 

Geode Integration commented on GEODE-7678:
--

Anilkumar Gingade deleted the linked story in Pivotal Tracker

> Partitioned Region clear operations must invoke cache level listeners
> -
>
> Key: GEODE-7678
> URL: https://issues.apache.org/jira/browse/GEODE-7678
> Project: Geode
>  Issue Type: Sub-task
>  Components: regions
>Reporter: Nabarun Nag
>Assignee: Anilkumar Gingade
>Priority: Major
>  Labels: GeodeCommons, GeodeOperationAPI
>
> Clear operations are successful, and CacheListener.afterRegionClear() and 
> CacheWriter.beforeRegionClear() are invoked.
>  
> Acceptance:
>  * DUnit tests validating the above behavior.
>  * Test coverage for when a member departs in this scenario.
>  * Test coverage for when a member restarts in this scenario.
>  * Unit tests with complete code coverage for the newly written code.
>  
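For illustration, a minimal sketch of the two callbacks a region clear is expected to reach, based on the existing CacheListenerAdapter and CacheWriterAdapter base classes; the class names and log messages are made up.

{noformat}
import org.apache.geode.cache.CacheWriterException;
import org.apache.geode.cache.RegionEvent;
import org.apache.geode.cache.util.CacheListenerAdapter;
import org.apache.geode.cache.util.CacheWriterAdapter;

// Invoked after a clear completes.
class ClearAuditingListener<K, V> extends CacheListenerAdapter<K, V> {
  @Override
  public void afterRegionClear(RegionEvent<K, V> event) {
    System.out.println("cleared region " + event.getRegion().getFullPath());
  }
}

// Invoked before a clear proceeds; throwing CacheWriterException vetoes the operation.
class ClearGuardingWriter<K, V> extends CacheWriterAdapter<K, V> {
  @Override
  public void beforeRegionClear(RegionEvent<K, V> event) throws CacheWriterException {
    System.out.println("about to clear region " + event.getRegion().getFullPath());
  }
}
{noformat}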



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-8073) NullPointerException thrown in PartitionedRegion.handleOldNodes

2020-05-06 Thread Eric Shu (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Shu updated GEODE-8073:

Fix Version/s: 1.14.0
   1.13.0
   1.12.1

> NullPointerException thrown in PartitionedRegion.handleOldNodes
> ---
>
> Key: GEODE-8073
> URL: https://issues.apache.org/jira/browse/GEODE-8073
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: caching-applications
> Fix For: 1.12.1, 1.13.0, 1.14.0
>
>
> The NPE can be thrown when a remote node is gone unexpectedly.
> {noformat}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.geode.internal.cache.PartitionedRegion.handleOldNodes(PartitionedRegion.java:4610)
> at 
> org.apache.geode.internal.cache.PartitionedRegion.fetchEntries(PartitionedRegion.java:4689)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVKeysPR(BaseCommand.java:1191)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVAllKeys(BaseCommand.java:1124)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKeysValuesPolicy(BaseCommand.java:973)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.fillAndSendRegisterInterestResponseChunks(BaseCommand.java:905)
> at 
> org.apache.geode.internal.cache.tier.sockets.command.RegisterInterest61.cmdExecute(RegisterInterest61.java:260)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:183)
> at 
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMessage(ServerConnection.java:848)
> at 
> org.apache.geode.internal.cache.tier.sockets.OriginalServerConnection.doOneMessage(OriginalServerConnection.java:72)
> at 
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1212)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at 
> org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.lambda$initializeServerConnectionThreadPool$3(AcceptorImpl.java:686)
> at 
> org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8073) NullPointerException thrown in PartitionedRegion.handleOldNodes

2020-05-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101189#comment-17101189
 ] 

ASF subversion and git services commented on GEODE-8073:


Commit 302cf0c9c823b04f8872618fd5c706bd5e3ccac2 in geode's branch 
refs/heads/support/1.12 from Eric Shu
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=302cf0c ]

GEODE-8073: Fix NPE after FetchKeysMessage failed. (#5055)


(cherry picked from commit 643c617ec681918db3508030bd22922c76b87b25)


> NullPointerException thrown in PartitionedRegion.handleOldNodes
> ---
>
> Key: GEODE-8073
> URL: https://issues.apache.org/jira/browse/GEODE-8073
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: caching-applications
>
> The NPE can be thrown when a remote node is gone unexpectedly.
> {noformat}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.geode.internal.cache.PartitionedRegion.handleOldNodes(PartitionedRegion.java:4610)
> at 
> org.apache.geode.internal.cache.PartitionedRegion.fetchEntries(PartitionedRegion.java:4689)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVKeysPR(BaseCommand.java:1191)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVAllKeys(BaseCommand.java:1124)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKeysValuesPolicy(BaseCommand.java:973)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.fillAndSendRegisterInterestResponseChunks(BaseCommand.java:905)
> at 
> org.apache.geode.internal.cache.tier.sockets.command.RegisterInterest61.cmdExecute(RegisterInterest61.java:260)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:183)
> at 
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMessage(ServerConnection.java:848)
> at 
> org.apache.geode.internal.cache.tier.sockets.OriginalServerConnection.doOneMessage(OriginalServerConnection.java:72)
> at 
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1212)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at 
> org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.lambda$initializeServerConnectionThreadPool$3(AcceptorImpl.java:686)
> at 
> org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8073) NullPointerException thrown in PartitionedRegion.handleOldNodes

2020-05-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101180#comment-17101180
 ] 

ASF subversion and git services commented on GEODE-8073:


Commit 37345287327a5b5cd993ee266d23ffc83befff1a in geode's branch 
refs/heads/support/1.13 from Eric Shu
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=3734528 ]

GEODE-8073: Fix NPE after FetchKeysMessage failed. (#5055)


(cherry picked from commit 643c617ec681918db3508030bd22922c76b87b25)


> NullPointerException thrown in PartitionedRegion.handleOldNodes
> ---
>
> Key: GEODE-8073
> URL: https://issues.apache.org/jira/browse/GEODE-8073
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: caching-applications
>
> The NPE can be thrown when a remote node is gone unexpectedly.
> {noformat}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.geode.internal.cache.PartitionedRegion.handleOldNodes(PartitionedRegion.java:4610)
> at 
> org.apache.geode.internal.cache.PartitionedRegion.fetchEntries(PartitionedRegion.java:4689)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVKeysPR(BaseCommand.java:1191)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVAllKeys(BaseCommand.java:1124)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKeysValuesPolicy(BaseCommand.java:973)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.fillAndSendRegisterInterestResponseChunks(BaseCommand.java:905)
> at 
> org.apache.geode.internal.cache.tier.sockets.command.RegisterInterest61.cmdExecute(RegisterInterest61.java:260)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:183)
> at 
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMessage(ServerConnection.java:848)
> at 
> org.apache.geode.internal.cache.tier.sockets.OriginalServerConnection.doOneMessage(OriginalServerConnection.java:72)
> at 
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1212)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at 
> org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.lambda$initializeServerConnectionThreadPool$3(AcceptorImpl.java:686)
> at 
> org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8040) BootstrappingFunctionIntegrationTest unintentionally creates a full Cache stack

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101156#comment-17101156
 ] 

ASF GitHub Bot commented on GEODE-8040:
---

kirklund edited a comment on pull request #5033:
URL: https://github.com/apache/geode/pull/5033#issuecomment-624869797


   @jdeppe-pivotal do you have any ideas why any changes to 
BootstrappingFunction cause the Tomcat session tests in UpgradeTest to fail? I 
only made changes to BootstrappingFunction to enable it to be well unit tested, 
but after a week I still can't get it to pass. I'm reluctant to throw away the 
changes but I'm out of time. I moved BootstrappingFunctionTest to 
BootstrappingFunctionIntegrationTest because it creates a full Cache despite 
the use of Mockito. The when/thenReturn stubbing for the BootstrappingFunction 
spy is incorrect -- the test executes the real code and THEN returns the 
mockCache -- but the real code creates a real Cache that then isn't used.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> BootstrappingFunctionIntegrationTest unintentionally creates a full Cache 
> stack
> ---
>
> Key: GEODE-8040
> URL: https://issues.apache.org/jira/browse/GEODE-8040
> Project: Geode
>  Issue Type: Bug
>  Components: tests
>Reporter: Kirk Lund
>Assignee: Kirk Lund
>Priority: Major
>
> BootstrappingFunctionIntegrationTest unintentionally creates a full Cache 
> stack. I have renamed and moved BootstrappingFunctionTest from src/test to 
> src/integrationTest because it creates a full Cache/DistributedSystem and 
> leaves the SocketCreatorFactory in an initialized state which causes other 
> unit tests such as SocketCreatorFactoryJUnitTest to fail:
> {noformat}
> org.apache.geode.internal.net.SocketCreatorFactoryJUnitTest > 
> testNewSSLConfigSSLComponentLocator FAILED
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.geode.internal.net.SocketCreatorFactoryJUnitTest.testNewSSLConfigSSLComponentLocator(SocketCreatorFactoryJUnitTest.java:106)
> {noformat}
> The cause is improper syntax in the test for partially mocking a spy. This 
> when/thenReturn:
> {noformat}
> when(bootstrappingFunction.verifyCacheExists()).thenReturn(mockCache);
> {noformat}
> ...first invokes the actual verifyCacheExists() method which creates a real 
> Cache and then returns mockCache. This should instead be:
> {noformat}
> doReturn(mockCache).when(bootstrappingFunction).verifyCacheExists();
> {noformat}
> Unfortunately, the class BootstrappingFunction requires additional changes to 
> make it unit testable. The next test failure after fixing the Mockito syntax 
> is:
> {noformat}
> org.apache.geode.cache.CacheClosedException: A cache has not yet been created.
>   at 
> org.apache.geode.internal.cache.CacheFactoryStatics.getAnyInstance(CacheFactoryStatics.java:87)
>   at 
> org.apache.geode.modules.util.CreateRegionFunction.(CreateRegionFunction.java:59)
>   at 
> org.apache.geode.modules.util.BootstrappingFunction.registerFunctions(BootstrappingFunction.java:124)
>   at 
> org.apache.geode.modules.util.BootstrappingFunction.execute(BootstrappingFunction.java:67)
>   at 
> org.apache.geode.modules.util.BootstrappingFunctionIntegrationTest.registerFunctionGetsCalledOnNonLocators(BootstrappingFunctionIntegrationTest.java:101)
> {noformat}
> When I make the changes necessary to make BootstrappingFunction unit 
> testable, the tomcat session backwards compatibility tests all start failing 
> in UpgradeTest:
> {noformat}
> org.apache.geode.session.tests.TomcatSessionBackwardsCompatibilityTomcat8WithOldModuleCanDoPutsTest
>  > test[0] FAILED
> java.lang.RuntimeException: Something very bad happened when trying to 
> start container 
> TOMCAT8_client-server_test0_0_6b9ba51c-9690-47aa-8e56-8e5b6cb22af4_
> at 
> org.apache.geode.session.tests.ContainerManager.startContainer(ContainerManager.java:80)
> at 
> org.apache.geode.session.tests.ContainerManager.startContainers(ContainerManager.java:91)
> at 
> org.apache.geode.session.tests.ContainerManager.startAllInactiveContainers(ContainerManager.java:98)
> at 
> org.apache.geode.session.tests.TomcatSessionBackwardsCompatibilityTestBase.doPutAndGetSessionOnAllClients(TomcatSessionBackwardsCompatibilityTestBase.java:187)
> at 
> 

[jira] [Created] (GEODE-8082) refactor (reformat and renames) in GeodeRedisServer file Geode-Redis-Module

2020-05-06 Thread John Hutchison (Jira)
John Hutchison created GEODE-8082:
-

 Summary: refactor (reformat and renames) in GeodeRedisServer file  
Geode-Redis-Module
 Key: GEODE-8082
 URL: https://issues.apache.org/jira/browse/GEODE-8082
 Project: Geode
  Issue Type: Test
Reporter: John Hutchison






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8040) BootstrappingFunctionIntegrationTest unintentionally creates a full Cache stack

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101154#comment-17101154
 ] 

ASF GitHub Bot commented on GEODE-8040:
---

kirklund edited a comment on pull request #5033:
URL: https://github.com/apache/geode/pull/5033#issuecomment-624869797


   @jdeppe-pivotal do you have any ideas why any changes to 
BootstrappingFunction cause the Tomcat session tests in UpgradeTest to fail? I 
only made changes to BootstrappingFunction to enable it to be well unit tested, 
but after a week I still can't get it to pass. I'm reluctant to throw away the 
changes but I can't get it to pass.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> BootstrappingFunctionIntegrationTest unintentionally creates a full Cache 
> stack
> ---
>
> Key: GEODE-8040
> URL: https://issues.apache.org/jira/browse/GEODE-8040
> Project: Geode
>  Issue Type: Bug
>  Components: tests
>Reporter: Kirk Lund
>Assignee: Kirk Lund
>Priority: Major
>
> BootstrappingFunctionIntegrationTest unintentionally creates a full Cache 
> stack. I have renamed and moved BootstrappingFunctionTest from src/test to 
> src/integrationTest because it creates a full Cache/DistributedSystem and 
> leaves the SocketCreatorFactory in an initialized state which causes other 
> unit tests such as SocketCreatorFactoryJUnitTest to fail:
> {noformat}
> org.apache.geode.internal.net.SocketCreatorFactoryJUnitTest > 
> testNewSSLConfigSSLComponentLocator FAILED
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.geode.internal.net.SocketCreatorFactoryJUnitTest.testNewSSLConfigSSLComponentLocator(SocketCreatorFactoryJUnitTest.java:106)
> {noformat}
> The cause is improper syntax in the test for partially mocking a spy. This 
> when/thenReturn:
> {noformat}
> when(bootstrappingFunction.verifyCacheExists()).thenReturn(mockCache);
> {noformat}
> ...first invokes the actual verifyCacheExists() method which creates a real 
> Cache and then returns mockCache. This should instead be:
> {noformat}
> doReturn(mockCache).when(bootstrappingFunction).verifyCacheExists();
> {noformat}
> Unfortunately, the class BootstrappingFunction requires additional changes to 
> make it unit testable. The next test failure after fixing the Mockito syntax 
> is:
> {noformat}
> org.apache.geode.cache.CacheClosedException: A cache has not yet been created.
>   at 
> org.apache.geode.internal.cache.CacheFactoryStatics.getAnyInstance(CacheFactoryStatics.java:87)
>   at 
> org.apache.geode.modules.util.CreateRegionFunction.(CreateRegionFunction.java:59)
>   at 
> org.apache.geode.modules.util.BootstrappingFunction.registerFunctions(BootstrappingFunction.java:124)
>   at 
> org.apache.geode.modules.util.BootstrappingFunction.execute(BootstrappingFunction.java:67)
>   at 
> org.apache.geode.modules.util.BootstrappingFunctionIntegrationTest.registerFunctionGetsCalledOnNonLocators(BootstrappingFunctionIntegrationTest.java:101)
> {noformat}
> When I make the changes necessary to make BootstrappingFunction unit 
> testable, the tomcat session backwards compatibility tests all start failing 
> in UpgradeTest:
> {noformat}
> org.apache.geode.session.tests.TomcatSessionBackwardsCompatibilityTomcat8WithOldModuleCanDoPutsTest
>  > test[0] FAILED
> java.lang.RuntimeException: Something very bad happened when trying to 
> start container 
> TOMCAT8_client-server_test0_0_6b9ba51c-9690-47aa-8e56-8e5b6cb22af4_
> at 
> org.apache.geode.session.tests.ContainerManager.startContainer(ContainerManager.java:80)
> at 
> org.apache.geode.session.tests.ContainerManager.startContainers(ContainerManager.java:91)
> at 
> org.apache.geode.session.tests.ContainerManager.startAllInactiveContainers(ContainerManager.java:98)
> at 
> org.apache.geode.session.tests.TomcatSessionBackwardsCompatibilityTestBase.doPutAndGetSessionOnAllClients(TomcatSessionBackwardsCompatibilityTestBase.java:187)
> at 
> org.apache.geode.session.tests.TomcatSessionBackwardsCompatibilityTomcat8WithOldModuleCanDoPutsTest.test(TomcatSessionBackwardsCompatibilityTomcat8WithOldModuleCanDoPutsTest.java:35)
> Caused by:
> java.lang.RuntimeException: Something very bad happened to this 
> container when starting. Check the cargo_logs folder for container logs.
> at 
> 

[jira] [Commented] (GEODE-8063) Access violation during Cache destruction caused by ClientMetadataService

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101155#comment-17101155
 ] 

ASF GitHub Bot commented on GEODE-8063:
---

pdxcodemonkey commented on a change in pull request #597:
URL: https://github.com/apache/geode-native/pull/597#discussion_r421069121



##
File path: cppcache/src/CacheImpl.cpp
##
@@ -640,13 +640,16 @@ void CacheImpl::readyForEvents() {
   }
 }
 
-bool CacheImpl::isPoolInMultiuserMode(std::shared_ptr regionPtr) {
-  const auto& poolName = regionPtr->getAttributes().getPoolName();
+bool CacheImpl::isPoolInMultiuserMode(std::shared_ptr region) {
+  const auto& poolName = region->getAttributes().getPoolName();
 
   if (!poolName.empty()) {
-auto poolPtr = regionPtr->getCache().getPoolManager().find(poolName);
-if (poolPtr != nullptr && !poolPtr->isDestroyed()) {
-  return poolPtr->getMultiuserAuthentication();
+auto pool = static_cast(*region)
+.getCacheImpl()
+->getPoolManager()
+.find(poolName);
+if (pool && !pool->isDestroyed()) {
+  return pool->getMultiuserAuthentication();

Review comment:
   Before I approve and merge this, can you explain why a method on 
CacheImpl is checking whether its pool is in multi-user mode by getting hold of 
_another_ CacheImpl pointer and extracting the pool from there?  Why can't 
CacheImpl call getPoolManager() on itself?  This code is super weird without 
any context...





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Access violation during Cache destruction caused by ClientMetadataService
> --
>
> Key: GEODE-8063
> URL: https://issues.apache.org/jira/browse/GEODE-8063
> Project: Geode
>  Issue Type: Bug
>  Components: native client
>Reporter: Jacob Barrett
>Assignee: Jacob Barrett
>Priority: Major
>
> During the destruction of {{Cache}}, the pointer to {{CacheImpl}} is invalid. 
> Calls from {{ClientMetadataService}} attempt to access {{Pool}} instances 
> through the {{Cache}} instance rather than {{CacheImpl}}, resulting in an 
> access violation. This issue is timing-dependent and isn't always reproducible. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-8081) create sub-directories within distributedTest in GeodeRedisModule

2020-05-06 Thread John Hutchison (Jira)
John Hutchison created GEODE-8081:
-

 Summary: create sub-directories within distributedTest in 
GeodeRedisModule
 Key: GEODE-8081
 URL: https://issues.apache.org/jira/browse/GEODE-8081
 Project: Geode
  Issue Type: Test
Reporter: John Hutchison






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8040) BootstrappingFunctionIntegrationTest unintentionally creates a full Cache stack

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101151#comment-17101151
 ] 

ASF GitHub Bot commented on GEODE-8040:
---

kirklund commented on pull request #5033:
URL: https://github.com/apache/geode/pull/5033#issuecomment-624869797


   @jdeppe-pivotal do you have any ideas why any changes to 
BootstrappingFunction cause the Tomcat session tests in UpgradeTest to fail?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> BootstrappingFunctionIntegrationTest unintentionally creates a full Cache 
> stack
> ---
>
> Key: GEODE-8040
> URL: https://issues.apache.org/jira/browse/GEODE-8040
> Project: Geode
>  Issue Type: Bug
>  Components: tests
>Reporter: Kirk Lund
>Assignee: Kirk Lund
>Priority: Major
>
> BootstrappingFunctionIntegrationTest unintentionally creates a full Cache 
> stack. I have renamed and moved BootstrappingFunctionTest from src/test to 
> src/integrationTest because it creates a full Cache/DistributedSystem and 
> leaves the SocketCreatorFactory in an initialized state which causes other 
> unit tests such as SocketCreatorFactoryJUnitTest to fail:
> {noformat}
> org.apache.geode.internal.net.SocketCreatorFactoryJUnitTest > 
> testNewSSLConfigSSLComponentLocator FAILED
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.geode.internal.net.SocketCreatorFactoryJUnitTest.testNewSSLConfigSSLComponentLocator(SocketCreatorFactoryJUnitTest.java:106)
> {noformat}
> The cause is improper syntax in the test for partially mocking a Spy. This 
> when/thenReturn:
> {noformat}
> when(bootstrappingFunction.verifyCacheExists()).thenReturn(mockCache);
> {noformat}
> ...first invokes the actual verifyCacheExists() method, which creates a real 
> Cache, and then returns mockCache. This should instead be:
> {noformat}
> doReturn(mockCache).when(bootstrappingFunction).verifyCacheExists();
> {noformat}
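> For illustration, a minimal sketch of the spy behavior (the setup lines here are 
> assumed, not taken from the actual test, and the usual Mockito static imports are implied):
> {noformat}
> BootstrappingFunction bootstrappingFunction = spy(new BootstrappingFunction());
> Cache mockCache = mock(Cache.class);
> // when(..).thenReturn(..) evaluates its argument first, so the real
> // verifyCacheExists() runs (and creates a real Cache) before the stub is registered:
> when(bootstrappingFunction.verifyCacheExists()).thenReturn(mockCache);
> // doReturn(..).when(..) registers the stub without invoking the real method:
> doReturn(mockCache).when(bootstrappingFunction).verifyCacheExists();
> {noformat}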
> Unfortunately, the class BootstrappingFunction requires additional changes to 
> make it unit testable. The next test failure after fixing the Mockito syntax 
> is:
> {noformat}
> org.apache.geode.cache.CacheClosedException: A cache has not yet been created.
>   at 
> org.apache.geode.internal.cache.CacheFactoryStatics.getAnyInstance(CacheFactoryStatics.java:87)
>   at 
> org.apache.geode.modules.util.CreateRegionFunction.(CreateRegionFunction.java:59)
>   at 
> org.apache.geode.modules.util.BootstrappingFunction.registerFunctions(BootstrappingFunction.java:124)
>   at 
> org.apache.geode.modules.util.BootstrappingFunction.execute(BootstrappingFunction.java:67)
>   at 
> org.apache.geode.modules.util.BootstrappingFunctionIntegrationTest.registerFunctionGetsCalledOnNonLocators(BootstrappingFunctionIntegrationTest.java:101)
> {noformat}
> When I make the changes necessary to make BootstrappingFunction unit 
> testable, the Tomcat session backwards-compatibility tests all start failing 
> in UpgradeTest:
> {noformat}
> org.apache.geode.session.tests.TomcatSessionBackwardsCompatibilityTomcat8WithOldModuleCanDoPutsTest
>  > test[0] FAILED
> java.lang.RuntimeException: Something very bad happened when trying to 
> start container 
> TOMCAT8_client-server_test0_0_6b9ba51c-9690-47aa-8e56-8e5b6cb22af4_
> at 
> org.apache.geode.session.tests.ContainerManager.startContainer(ContainerManager.java:80)
> at 
> org.apache.geode.session.tests.ContainerManager.startContainers(ContainerManager.java:91)
> at 
> org.apache.geode.session.tests.ContainerManager.startAllInactiveContainers(ContainerManager.java:98)
> at 
> org.apache.geode.session.tests.TomcatSessionBackwardsCompatibilityTestBase.doPutAndGetSessionOnAllClients(TomcatSessionBackwardsCompatibilityTestBase.java:187)
> at 
> org.apache.geode.session.tests.TomcatSessionBackwardsCompatibilityTomcat8WithOldModuleCanDoPutsTest.test(TomcatSessionBackwardsCompatibilityTomcat8WithOldModuleCanDoPutsTest.java:35)
> Caused by:
> java.lang.RuntimeException: Something very bad happened to this 
> container when starting. Check the cargo_logs folder for container logs.
> at 
> org.apache.geode.session.tests.ServerContainer.start(ServerContainer.java:218)
> at 
> org.apache.geode.session.tests.ContainerManager.startContainer(ContainerManager.java:77)
> ... 4 

[jira] [Assigned] (GEODE-7678) Partitioned Region clear operations must invoke cache level listeners

2020-05-06 Thread Anilkumar Gingade (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anilkumar Gingade reassigned GEODE-7678:


Assignee: Anilkumar Gingade

> Partitioned Region clear operations must invoke cache level listeners
> -
>
> Key: GEODE-7678
> URL: https://issues.apache.org/jira/browse/GEODE-7678
> Project: Geode
>  Issue Type: Sub-task
>  Components: regions
>Reporter: Nabarun Nag
>Assignee: Anilkumar Gingade
>Priority: Major
>  Labels: GeodeCommons, GeodeOperationAPI
>
> Clear operations are successful, and CacheListener.afterRegionClear() and 
> CacheWriter.beforeRegionClear() are invoked.
>  
> Acceptance :
>  * DUnit tests validating the above behavior.
>  * Test coverage for when a member departs in this scenario
>  * Test coverage for when a member restarts in this scenario
>  * Unit tests with complete code coverage for the newly written code.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8012) JMX managers may fail to broadcast notifications for other members

2020-05-06 Thread Geode Integration (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101096#comment-17101096
 ] 

Geode Integration commented on GEODE-8012:
--

A Pivotal Tracker story has been created for this Issue: 
https://www.pivotaltracker.com/story/show/172705195

> JMX managers may fail to broadcast notifications for other members
> --
>
> Key: GEODE-8012
> URL: https://issues.apache.org/jira/browse/GEODE-8012
> Project: Geode
>  Issue Type: Bug
>  Components: jmx
>Reporter: Kirk Lund
>Assignee: Kirk Lund
>Priority: Major
>  Labels: GeodeOperationAPI
>
> This is related to *GEODE-7739*.
> JMX Manager may fail to broadcast notifications for other members because of 
> a race condition during startup. When NotificationCacheListener is first 
> constructed, it is in a state that will ignore all callbacks because the 
> field readyForEvents is false.
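> A schematic sketch of that startup window (illustrative only; not the actual listener code):
> {noformat}
> // Callbacks delivered before markReady() is invoked are silently dropped.
> class NotificationListenerSketch {
>   private volatile boolean readyForEvents; // false when first constructed
>   void markReady() { readyForEvents = true; }
>   void handleEvent(Object notification) {
>     if (!readyForEvents) {
>       return; // the notification is never broadcast to JMX clients
>     }
>     // ... broadcast the notification ...
>   }
> }
> {noformat}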



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-7739) JMX managers may fail to federate mbeans for other members

2020-05-06 Thread Geode Integration (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101097#comment-17101097
 ] 

Geode Integration commented on GEODE-7739:
--

A Pivotal Tracker story has been created for this Issue: 
https://www.pivotaltracker.com/story/show/172705197

> JMX managers may fail to federate mbeans for other members
> --
>
> Key: GEODE-7739
> URL: https://issues.apache.org/jira/browse/GEODE-7739
> Project: Geode
>  Issue Type: Bug
>  Components: jmx
>Reporter: Kirk Lund
>Assignee: Kirk Lund
>Priority: Major
>  Labels: GeodeOperationAPI
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> JMX Manager may fail to federate one or more MXBeans for other members 
> because of a race condition during startup. When ManagementCacheListener is 
> first constructed, it is in a state that will ignore all callbacks because 
> the field readyForEvents is false.
> 
> Debugging with JMXMBeanReconnectDUnitTest revealed this bug.
> The test starts two locators with jmx manager configured and started. 
> Locator1 always has all of locator2's mbeans, but locator2 is intermittently 
> missing the personal mbeans of locator1. 
> I think this is caused by some sort of race condition in the code that 
> creates the monitoring regions for other members in locator2.
> It's possible that the jmx manager that hits this bug might fail to have 
> mbeans for servers as well as other locators but I haven't seen a test case 
> for this scenario.
> The exposure of this bug means that a user running more than one locator 
> might have a locator that is missing one or more mbeans for the cluster.
> 
> Studying the JMX code also reveals the existence of *GEODE-8012*.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8060) GemFireCacheImplCloseTest.close_blocksUntilFirstCallToCloseCompletes fails intermittently

2020-05-06 Thread Geode Integration (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101095#comment-17101095
 ] 

Geode Integration commented on GEODE-8060:
--

A Pivotal Tracker story has been created for this Issue: 
https://www.pivotaltracker.com/story/show/172705193

> GemFireCacheImplCloseTest.close_blocksUntilFirstCallToCloseCompletes fails 
> intermittently
> -
>
> Key: GEODE-8060
> URL: https://issues.apache.org/jira/browse/GEODE-8060
> Project: Geode
>  Issue Type: Bug
>  Components: tests
>Reporter: Kirk Lund
>Assignee: Kirk Lund
>Priority: Major
>  Labels: GeodeOperationAPI
>
> {noformat}
> org.apache.geode.internal.cache.GemFireCacheImplCloseTest > 
> close_blocksUntilFirstCallToCloseCompletes FAILED
> org.junit.ComparisonFailure: [ThreadId1=47 and threadId2=49] 
> expected:<4[7]L> but was:<4[9]L>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> org.apache.geode.internal.cache.GemFireCacheImplCloseTest.close_blocksUntilFirstCallToCloseCompletes(GemFireCacheImplCloseTest.java:225)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-8055) can not create index on sub regions

2020-05-06 Thread Jinmei Liao (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinmei Liao updated GEODE-8055:
---
Fix Version/s: 1.12.1

> can not create index on sub regions
> ---
>
> Key: GEODE-8055
> URL: https://issues.apache.org/jira/browse/GEODE-8055
> Project: Geode
>  Issue Type: Bug
>  Components: gfsh
>Affects Versions: 1.7.0, 1.8.0, 1.10.0, 1.9.2, 1.11.0, 1.12.0
>Reporter: Jinmei Liao
>Priority: Major
> Fix For: 1.12.1, 1.13.0, 1.14.0
>
>
> When trying to use "create index" command in gfsh to create index on sub 
> regions, we get the following message:
> "Sub-regions are unsupported"
> Pre-1.6, we were able to do that.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-8012) JMX managers may fail to broadcast notifications for other members

2020-05-06 Thread Kirk Lund (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk Lund updated GEODE-8012:
-
Labels: GeodeOperationAPI  (was: )

> JMX managers may fail to broadcast notifications for other members
> --
>
> Key: GEODE-8012
> URL: https://issues.apache.org/jira/browse/GEODE-8012
> Project: Geode
>  Issue Type: Bug
>  Components: jmx
>Reporter: Kirk Lund
>Assignee: Kirk Lund
>Priority: Major
>  Labels: GeodeOperationAPI
>
> This is related to *GEODE-7739*.
> JMX Manager may fail to broadcast notifications for other members because of 
> a race condition during startup. When NotificationCacheListener is first 
> constructed, it is in a state that will ignore all callbacks because the 
> field readyForEvents is false.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7739) JMX managers may fail to federate mbeans for other members

2020-05-06 Thread Kirk Lund (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk Lund updated GEODE-7739:
-
Labels: GeodeOperationAPI  (was: )

> JMX managers may fail to federate mbeans for other members
> --
>
> Key: GEODE-7739
> URL: https://issues.apache.org/jira/browse/GEODE-7739
> Project: Geode
>  Issue Type: Bug
>  Components: jmx
>Reporter: Kirk Lund
>Assignee: Kirk Lund
>Priority: Major
>  Labels: GeodeOperationAPI
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> JMX Manager may fail to federate one or more MXBeans for other members 
> because of a race condition during startup. When ManagementCacheListener is 
> first constructed, it is in a state that will ignore all callbacks because 
> the field readyForEvents is false.
> 
> Debugging with JMXMBeanReconnectDUnitTest revealed this bug.
> The test starts two locators with jmx manager configured and started. 
> Locator1 always has all of locator2's mbeans, but locator2 is intermittently 
> missing the personal mbeans of locator1. 
> I think this is caused by some sort of race condition in the code that 
> creates the monitoring regions for other members in locator2.
> It's possible that the jmx manager that hits this bug might fail to have 
> mbeans for servers as well as other locators but I haven't seen a test case 
> for this scenario.
> The exposure of this bug means that a user running more than one locator 
> might have a locator that is missing one or more mbeans for the cluster.
> 
> Studying the JMX code also reveals the existence of *GEODE-8012*.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-7678) Partitioned Region clear operations must invoke cache level listeners

2020-05-06 Thread Geode Integration (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101083#comment-17101083
 ] 

Geode Integration commented on GEODE-7678:
--

A Pivotal Tracker story has been created for this Issue: 
https://www.pivotaltracker.com/story/show/172704534

> Partitioned Region clear operations must invoke cache level listeners
> -
>
> Key: GEODE-7678
> URL: https://issues.apache.org/jira/browse/GEODE-7678
> Project: Geode
>  Issue Type: Sub-task
>  Components: regions
>Reporter: Nabarun Nag
>Priority: Major
>  Labels: GeodeCommons, GeodeOperationAPI
>
> Clear operations are successful, and CacheListener.afterRegionClear() and 
> CacheWriter.beforeRegionClear() are invoked.
>  
> Acceptance :
>  * DUnit tests validating the above behavior.
>  * Test coverage for when a member departs in this scenario
>  * Test coverage for when a member restarts in this scenario
>  * Unit tests with complete code coverage for the newly written code.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-8060) GemFireCacheImplCloseTest.close_blocksUntilFirstCallToCloseCompletes fails intermittently

2020-05-06 Thread Kirk Lund (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirk Lund updated GEODE-8060:
-
Labels: GeodeOperationAPI  (was: )

> GemFireCacheImplCloseTest.close_blocksUntilFirstCallToCloseCompletes fails 
> intermittently
> -
>
> Key: GEODE-8060
> URL: https://issues.apache.org/jira/browse/GEODE-8060
> Project: Geode
>  Issue Type: Bug
>  Components: tests
>Reporter: Kirk Lund
>Assignee: Kirk Lund
>Priority: Major
>  Labels: GeodeOperationAPI
>
> {noformat}
> org.apache.geode.internal.cache.GemFireCacheImplCloseTest > 
> close_blocksUntilFirstCallToCloseCompletes FAILED
> org.junit.ComparisonFailure: [ThreadId1=47 and threadId2=49] 
> expected:<4[7]L> but was:<4[9]L>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> org.apache.geode.internal.cache.GemFireCacheImplCloseTest.close_blocksUntilFirstCallToCloseCompletes(GemFireCacheImplCloseTest.java:225)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8035) Parallel Disk Store Recovery when Cluster Restarts

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101080#comment-17101080
 ] 

ASF GitHub Bot commented on GEODE-8035:
---

jchen21 commented on pull request #5014:
URL: https://github.com/apache/geode/pull/5014#issuecomment-624817108


   > Changes look good to me, but I'd really prefer to see one or more new 
tests or an old one modified if possible.
   > 
   > I think it's good to add the new methods to InternalCache in order to 
avoid casting to concrete impl (GemFireCacheImpl), but it would be best to 
avoid making them default empty methods unless they're part of some sort of 
deprecated-replacement if that makes sense. I would just remove `default` from 
them and modify all the code (mostly tests) that implements InternalCache to 
provide empty implementations of the new methods.
   
   @kirklund I am not sure about the disadvantage of default empty methods in 
`InternalCache`. I can remove the `default` keyword from `InternalCache` and 
let `CacheCreation` and `InternalCacheForClientAccess` implement the empty 
methods. No test change is needed. I don't see a significant difference between 
these two options. 
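   For reference, a rough sketch of the two shapes being discussed (the interface names below are placeholders for `InternalCache`; the method signature follows the PR):
   ```
   // Option A (as in the PR): default no-op method on the interface
   interface InternalCacheOptionA {
     default boolean doLockDiskStore(String diskStoreName) {
       return false; // no-op; only GemFireCacheImpl needs a real implementation
     }
   }

   // Option B (suggested): plain abstract method; CacheCreation,
   // InternalCacheForClientAccess, and test fakes each add an empty implementation
   interface InternalCacheOptionB {
     boolean doLockDiskStore(String diskStoreName);
   }
   ```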



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Parallel Disk Store Recovery when Cluster Restarts
> --
>
> Key: GEODE-8035
> URL: https://issues.apache.org/jira/browse/GEODE-8035
> Project: Geode
>  Issue Type: Improvement
>Reporter: Jianxia Chen
>Assignee: Jianxia Chen
>Priority: Major
>  Labels: GeodeCommons
>
> Currently, when a Geode cluster restarts, the disk store recovery is 
> serialized. When all regions share the same disk store, the restart process 
> is time-consuming. To improve performance, different regions can use 
> different disk stores with different disk controllers, and parallel disk 
> store recovery can be added. This is expected to significantly reduce the 
> time it takes to restart the Geode cluster.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-8041) Create ManagementService Interface

2020-05-06 Thread Patrick Johnsn (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Johnsn updated GEODE-8041:
--
Description: Create a ManagementService interface. The ManagementService 
will configure and start Geode (create a Cache given some configuration 
properties) after the BootstrappingService has bootstrapped the environment and 
loaded the relevant modules.  (was: Create a ManagementService interface, which 
will be used to create a cache given some configuration.)

> Create ManagementService Interface
> --
>
> Key: GEODE-8041
> URL: https://issues.apache.org/jira/browse/GEODE-8041
> Project: Geode
>  Issue Type: Sub-task
>Reporter: Patrick Johnsn
>Assignee: Patrick Johnsn
>Priority: Major
>
> Create a ManagementService interface. The ManagementService will configure 
> and start Geode (create a Cache given some configuration properties) after 
> the BootstrappingService has bootstrapped the environment and loaded the 
> relevant modules.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8035) Parallel Disk Store Recovery when Cluster Restarts

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101069#comment-17101069
 ] 

ASF GitHub Bot commented on GEODE-8035:
---

jchen21 commented on a change in pull request #5014:
URL: https://github.com/apache/geode/pull/5014#discussion_r420998299



##
File path: 
geode-core/src/test/java/org/apache/geode/internal/cache/GemFireCacheImplTest.java
##
@@ -620,6 +625,40 @@ public void getCacheServers_isCanonical() {
 .isSameAs(gemFireCacheImpl.getCacheServers());
   }
 
+  @Test
+  public void testLockDiskStore() throws InterruptedException {
+int nThread = 10;
+String diskStoreName = "MyDiskStore";
+AtomicInteger nTrue = new AtomicInteger();
+AtomicInteger nFalse = new AtomicInteger();
+ExecutorService executorService = Executors.newFixedThreadPool(nThread);

Review comment:
   I had thought about using `ExecutorServiceRule`. However, the return 
value of `doLockDiskStore()` is non-deterministic. I can't assert the return 
value of an individual thread; I can only count and assert the number of 
returned values once all threads are done. I am not sure how 
`ExecutorServiceRule` would serve this test case.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Parallel Disk Store Recovery when Cluster Restarts
> --
>
> Key: GEODE-8035
> URL: https://issues.apache.org/jira/browse/GEODE-8035
> Project: Geode
>  Issue Type: Improvement
>Reporter: Jianxia Chen
>Assignee: Jianxia Chen
>Priority: Major
>  Labels: GeodeCommons
>
> Currently, when a Geode cluster restarts, the disk store recovery is 
> serialized. When all regions share the same disk store, the restart process 
> is time-consuming. To improve performance, different regions can use 
> different disk stores with different disk controllers, and parallel disk 
> store recovery can be added. This is expected to significantly reduce the 
> time it takes to restart the Geode cluster.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-8037) Create BootstrappingService Interface

2020-05-06 Thread Patrick Johnsn (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Johnsn resolved GEODE-8037.
---
Resolution: Fixed

> Create BootstrappingService Interface
> -
>
> Key: GEODE-8037
> URL: https://issues.apache.org/jira/browse/GEODE-8037
> Project: Geode
>  Issue Type: Sub-task
>Reporter: Patrick Johnsn
>Assignee: Patrick Johnsn
>Priority: Major
>
> Create a BootstrappingService interface. The BootstrappingService will use the 
> ModuleService to bootstrap Geode in a classloader-isolated way, i.e. loading 
> the necessary modules to run Geode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-8037) Create BootstrappingService Interface

2020-05-06 Thread Patrick Johnsn (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Johnsn updated GEODE-8037:
--
Description: Create a BootstrappingService interface. The 
BootstrappingService will use the ModuleService to bootstrap Geode in a 
classloader-isolated way, i.e. loading the necessary modules to run Geode.  
(was: Create a BootstrapingService interface.)

> Create BootstrappingService Interface
> -
>
> Key: GEODE-8037
> URL: https://issues.apache.org/jira/browse/GEODE-8037
> Project: Geode
>  Issue Type: Sub-task
>Reporter: Patrick Johnsn
>Assignee: Patrick Johnsn
>Priority: Major
>
> Create a BootstrappingService interface. The BootstrappingService will use the 
> ModuleService to bootstrap Geode in a classloader-isolated way, i.e. loading 
> the necessary modules to run Geode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8041) Create ManagementService Interface

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101061#comment-17101061
 ] 

ASF GitHub Bot commented on GEODE-8041:
---

yozaner1324 opened a new pull request #5062:
URL: https://github.com/apache/geode/pull/5062


   Thank you for submitting a contribution to Apache Geode.
   
   In order to streamline the review of the contribution we ask you
   to ensure the following steps have been taken:
   
   ### For all changes:
   - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in 
the commit message?
   
   - [ ] Has your PR been rebased against the latest commit within the target 
branch (typically `develop`)?
   
   - [ ] Is your initial contribution a single, squashed commit?
   
   - [ ] Does `gradlew build` run cleanly?
   
   - [ ] Have you written or updated unit tests to verify your changes?
   
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   
   ### Note:
   Please ensure that once the PR is submitted, check Concourse for build 
issues and
   submit an update to your PR as soon as possible. If you need help, please 
send an
   email to d...@geode.apache.org.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Create ManagementService Interface
> --
>
> Key: GEODE-8041
> URL: https://issues.apache.org/jira/browse/GEODE-8041
> Project: Geode
>  Issue Type: Sub-task
>Reporter: Patrick Johnsn
>Assignee: Patrick Johnsn
>Priority: Major
>
> Create a ManagementService interface, which will be used to create a cache 
> given some configuration.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8035) Parallel Disk Store Recovery when Cluster Restarts

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101060#comment-17101060
 ] 

ASF GitHub Bot commented on GEODE-8035:
---

jchen21 commented on a change in pull request #5014:
URL: https://github.com/apache/geode/pull/5014#discussion_r420992069



##
File path: 
geode-core/src/test/java/org/apache/geode/internal/cache/GemFireCacheImplTest.java
##
@@ -620,6 +625,40 @@ public void getCacheServers_isCanonical() {
 .isSameAs(gemFireCacheImpl.getCacheServers());
   }
 
+  @Test
+  public void testLockDiskStore() throws InterruptedException {
+int nThread = 10;
+String diskStoreName = "MyDiskStore";
+AtomicInteger nTrue = new AtomicInteger();
+AtomicInteger nFalse = new AtomicInteger();
+ExecutorService executorService = Executors.newFixedThreadPool(nThread);
+IntStream.range(0, nThread).forEach(tid -> {
+  executorService.submit(() -> {
+try {
+  boolean lockResult = gemFireCacheImpl.doLockDiskStore(diskStoreName);
+  if (lockResult) {
+nTrue.incrementAndGet();
+  } else {
+nFalse.incrementAndGet();
+  }
+} finally {
+  boolean unlockResult = 
gemFireCacheImpl.doUnlockDiskStore(diskStoreName);
+  if (unlockResult) {
+nTrue.incrementAndGet();
+  } else {
+nFalse.incrementAndGet();
+  }
+}
+  });
+});
+executorService.shutdown();
+executorService.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);

Review comment:
   Will change it.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Parallel Disk Store Recovery when Cluster Restarts
> --
>
> Key: GEODE-8035
> URL: https://issues.apache.org/jira/browse/GEODE-8035
> Project: Geode
>  Issue Type: Improvement
>Reporter: Jianxia Chen
>Assignee: Jianxia Chen
>Priority: Major
>  Labels: GeodeCommons
>
> Currently, when a Geode cluster restarts, the disk store recovery is 
> serialized. When all regions share the same disk store, the restart process 
> is time-consuming. To improve performance, different regions can use 
> different disk stores with different disk controllers, and parallel disk 
> store recovery can be added. This is expected to significantly reduce the 
> time it takes to restart the Geode cluster.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8037) Create BootstrappingService Interface

2020-05-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101037#comment-17101037
 ] 

ASF subversion and git services commented on GEODE-8037:


Commit ddf9bdae54db5aa03ccbd67e96e6f9f08fc229d3 in geode's branch 
refs/heads/feature/GEODE-8067 from Patrick Johnson
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=ddf9bda ]

GEODE-8037 - Create BootstrappingService interface. (#5046)



> Create BootstrappingService Interface
> -
>
> Key: GEODE-8037
> URL: https://issues.apache.org/jira/browse/GEODE-8037
> Project: Geode
>  Issue Type: Sub-task
>Reporter: Patrick Johnsn
>Assignee: Patrick Johnsn
>Priority: Major
>
> Create a BootstrappingService interface.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8060) GemFireCacheImplCloseTest.close_blocksUntilFirstCallToCloseCompletes fails intermittently

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101020#comment-17101020
 ] 

ASF GitHub Bot commented on GEODE-8060:
---

kirklund commented on a change in pull request #5041:
URL: https://github.com/apache/geode/pull/5041#discussion_r420955598



##
File path: 
geode-core/src/main/java/org/apache/geode/internal/cache/GemFireCacheImpl.java
##
@@ -2377,15 +2381,25 @@ public void close(String reason, Throwable 
systemFailureCause, boolean keepAlive
   } finally {
 CLOSING_THREAD.remove();
   }
+  return true;
 }
   }
 
-  private void waitUntilClosed() {
-try {
-  isClosedLatch.await();
-} catch (InterruptedException ignore) {
-  // ignored
+  /**
+   * Returns true if caller waited on the {@code isClosedLatch}.
+   */
+  private boolean waitIfClosing(boolean skipAwait) {
+if (isClosing) {
+  if (!skipAwait && !Thread.currentThread().equals(CLOSING_THREAD.get())) {
+try {
+  isClosedLatch.await();
+} catch (InterruptedException ignore) {
+  // ignored

Review comment:
   The interrupt flag should be reset after catching `InterruptedException`
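   For example, a minimal sketch of the usual idiom applied to the catch block above:
   ```
   try {
     isClosedLatch.await();
   } catch (InterruptedException e) {
     // restore the interrupt status so callers can still observe the interruption
     Thread.currentThread().interrupt();
   }
   ```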





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> GemFireCacheImplCloseTest.close_blocksUntilFirstCallToCloseCompletes fails 
> intermittently
> -
>
> Key: GEODE-8060
> URL: https://issues.apache.org/jira/browse/GEODE-8060
> Project: Geode
>  Issue Type: Bug
>  Components: tests
>Reporter: Kirk Lund
>Assignee: Kirk Lund
>Priority: Major
>
> {noformat}
> org.apache.geode.internal.cache.GemFireCacheImplCloseTest > 
> close_blocksUntilFirstCallToCloseCompletes FAILED
> org.junit.ComparisonFailure: [ThreadId1=47 and threadId2=49] 
> expected:<4[7]L> but was:<4[9]L>
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at 
> org.apache.geode.internal.cache.GemFireCacheImplCloseTest.close_blocksUntilFirstCallToCloseCompletes(GemFireCacheImplCloseTest.java:225)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8035) Parallel Disk Store Recovery when Cluster Restarts

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101017#comment-17101017
 ] 

ASF GitHub Bot commented on GEODE-8035:
---

kirklund commented on a change in pull request #5014:
URL: https://github.com/apache/geode/pull/5014#discussion_r420945761



##
File path: 
geode-core/src/test/java/org/apache/geode/internal/cache/GemFireCacheImplTest.java
##
@@ -620,6 +625,40 @@ public void getCacheServers_isCanonical() {
 .isSameAs(gemFireCacheImpl.getCacheServers());
   }
 
+  @Test
+  public void testLockDiskStore() throws InterruptedException {
+int nThread = 10;
+String diskStoreName = "MyDiskStore";
+AtomicInteger nTrue = new AtomicInteger();
+AtomicInteger nFalse = new AtomicInteger();
+ExecutorService executorService = Executors.newFixedThreadPool(nThread);

Review comment:
   I recommend replacing ExecutorService with ExecutorServiceRule. You can 
limit the rule to a specific number of threads if you need to; otherwise it 
defaults to as many threads as the number of tasks you submit:
   ```
   @Rule
   public ExecutorServiceRule executorServiceRule = new ExecutorServiceRule();
   ```
   The Rule will automatically do shutdown etc during tearDown().
   
   If you await on the Futures, then any assertion failures will be thrown 
causing the test to fail:
   ```
   Future doLockUnlock = executorService.submit(() -> {
 try {
   assertThat(gemFireCacheImpl.doLockDiskStore(diskStoreName)).isTrue();
 } finally {
    assertThat(gemFireCacheImpl.doUnlockDiskStore(diskStoreName)).isTrue();
  }
   });
   
   doLockUnlock.get(GeodeAwaitility.getTimeout().toMillis(), 
TimeUnit.MILLISECONDS);
   ```
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Parallel Disk Store Recovery when Cluster Restarts
> --
>
> Key: GEODE-8035
> URL: https://issues.apache.org/jira/browse/GEODE-8035
> Project: Geode
>  Issue Type: Improvement
>Reporter: Jianxia Chen
>Assignee: Jianxia Chen
>Priority: Major
>  Labels: GeodeCommons
>
> Currently, when a Geode cluster restarts, the disk store recovery is 
> serialized. When all regions share the same disk store, the restart process 
> is time-consuming. To improve performance, different regions can use 
> different disk stores with different disk controllers, and parallel disk 
> store recovery can be added. This is expected to significantly reduce the 
> time it takes to restart the Geode cluster.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8035) Parallel Disk Store Recovery when Cluster Restarts

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101016#comment-17101016
 ] 

ASF GitHub Bot commented on GEODE-8035:
---

kirklund commented on a change in pull request #5014:
URL: https://github.com/apache/geode/pull/5014#discussion_r420945761



##
File path: 
geode-core/src/test/java/org/apache/geode/internal/cache/GemFireCacheImplTest.java
##
@@ -620,6 +625,40 @@ public void getCacheServers_isCanonical() {
 .isSameAs(gemFireCacheImpl.getCacheServers());
   }
 
+  @Test
+  public void testLockDiskStore() throws InterruptedException {
+int nThread = 10;
+String diskStoreName = "MyDiskStore";
+AtomicInteger nTrue = new AtomicInteger();
+AtomicInteger nFalse = new AtomicInteger();
+ExecutorService executorService = Executors.newFixedThreadPool(nThread);

Review comment:
   I recommend replacing ExecutorService with ExecutorServiceRule. You can 
limit the rule to a specific number of threads if you need to; otherwise it 
defaults to as many threads as the number of tasks you submit:
   ```
   @Rule
   public ExecutorServiceRule executorServiceRule = new ExecutorServiceRule();
   ```
   The Rule will automatically do shutdown etc during tearDown().
   
   Also, you might want to capture the `Future` return value from 
`executorService/executorServiceRule.submit` to await on. You could even have a 
`Collection<Future>` if you wanted. 
   
   If you await on the Futures, then any assertion failures will be thrown 
causing the test to fail:
   ```
   Future doLockUnlock = executorService.submit(() -> {
 try {
   assertThat(gemFireCacheImpl.doLockDiskStore(diskStoreName)).isTrue();
 } finally {
    assertThat(gemFireCacheImpl.doUnlockDiskStore(diskStoreName)).isTrue();
  }
   });
   
   doLockUnlock.get(GeodeAwaitility.getTimeout().toMillis(), 
TimeUnit.MILLISECONDS);
   ```
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Parallel Disk Store Recovery when Cluster Restarts
> --
>
> Key: GEODE-8035
> URL: https://issues.apache.org/jira/browse/GEODE-8035
> Project: Geode
>  Issue Type: Improvement
>Reporter: Jianxia Chen
>Assignee: Jianxia Chen
>Priority: Major
>  Labels: GeodeCommons
>
> Currently, when a Geode cluster restarts, the disk store recovery is 
> serialized. When all regions share the same disk store, the restart process 
> is time-consuming. To improve performance, different regions can use 
> different disk stores with different disk controllers, and parallel disk 
> store recovery can be added. This is expected to significantly reduce the 
> time it takes to restart the Geode cluster.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8035) Parallel Disk Store Recovery when Cluster Restarts

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101015#comment-17101015
 ] 

ASF GitHub Bot commented on GEODE-8035:
---

kirklund commented on a change in pull request #5014:
URL: https://github.com/apache/geode/pull/5014#discussion_r420945761



##
File path: 
geode-core/src/test/java/org/apache/geode/internal/cache/GemFireCacheImplTest.java
##
@@ -620,6 +625,40 @@ public void getCacheServers_isCanonical() {
 .isSameAs(gemFireCacheImpl.getCacheServers());
   }
 
+  @Test
+  public void testLockDiskStore() throws InterruptedException {
+int nThread = 10;
+String diskStoreName = "MyDiskStore";
+AtomicInteger nTrue = new AtomicInteger();
+AtomicInteger nFalse = new AtomicInteger();
+ExecutorService executorService = Executors.newFixedThreadPool(nThread);

Review comment:
   I recommend replacing ExecutorService with ExecutorServiceRule. You can 
limit the rule to a specific number of threads if you need to; otherwise it 
defaults to as many threads as the number of tasks you submit:
   ```
   @Rule
   public ExecutorServiceRule executorServiceRule = new ExecutorServiceRule();
   ```
   The Rule will automatically do shutdown etc during tearDown().
   
   Also, you might want to capture the `Future` return value from 
`executorService/executorServiceRule.submit` to await on. You could even have a 
`Collection<Future>` if you wanted. 
   
   You could even move the assertions for locking/unlocking into the thread 
task (ie what you're submitting). If you await on the Futures, then any 
assertion failures will be thrown causing the test to fail:
   ```
   Future doLockUnlock = executorService.submit(() -> {
 try {
   assertThat(gemFireCacheImpl.doLockDiskStore(diskStoreName)).isTrue();
 } finally {
    assertThat(gemFireCacheImpl.doUnlockDiskStore(diskStoreName)).isTrue();
  }
   });
   
   doLockUnlock.get(GeodeAwaitility.getTimeout().toMillis(), 
TimeUnit.MILLISECONDS);
   ```
   

##
File path: 
geode-core/src/test/java/org/apache/geode/internal/cache/GemFireCacheImplTest.java
##
@@ -620,6 +625,40 @@ public void getCacheServers_isCanonical() {
 .isSameAs(gemFireCacheImpl.getCacheServers());
   }
 
+  @Test
+  public void testLockDiskStore() throws InterruptedException {
+int nThread = 10;
+String diskStoreName = "MyDiskStore";
+AtomicInteger nTrue = new AtomicInteger();
+AtomicInteger nFalse = new AtomicInteger();
+ExecutorService executorService = Executors.newFixedThreadPool(nThread);
+IntStream.range(0, nThread).forEach(tid -> {
+  executorService.submit(() -> {
+try {
+  boolean lockResult = gemFireCacheImpl.doLockDiskStore(diskStoreName);
+  if (lockResult) {
+nTrue.incrementAndGet();
+  } else {
+nFalse.incrementAndGet();
+  }
+} finally {
+  boolean unlockResult = 
gemFireCacheImpl.doUnlockDiskStore(diskStoreName);
+  if (unlockResult) {
+nTrue.incrementAndGet();
+  } else {
+nFalse.incrementAndGet();
+  }
+}
+  });
+});
+executorService.shutdown();
+executorService.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);

Review comment:
   If you use the Rule, you shouldn't need this or the shutdown. In 
general, you should however use `GeodeAwaitility.getTimeout()` instead of 
`Long.MAX_VALUE`.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Parallel Disk Store Recovery when Cluster Restarts
> --
>
> Key: GEODE-8035
> URL: https://issues.apache.org/jira/browse/GEODE-8035
> Project: Geode
>  Issue Type: Improvement
>Reporter: Jianxia Chen
>Assignee: Jianxia Chen
>Priority: Major
>  Labels: GeodeCommons
>
> Currently, when a Geode cluster restarts, the disk store recovery is 
> serialized. When all regions share the same disk store, the restart process 
> is time-consuming. To improve performance, different regions can use 
> different disk stores with different disk controllers, and parallel disk 
> store recovery can be added. This is expected to significantly reduce the 
> time it takes to restart the Geode cluster.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8035) Parallel Disk Store Recovery when Cluster Restarts

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17101014#comment-17101014
 ] 

ASF GitHub Bot commented on GEODE-8035:
---

agingade commented on a change in pull request #5014:
URL: https://github.com/apache/geode/pull/5014#discussion_r419807590



##
File path: 
geode-core/src/main/java/org/apache/geode/internal/cache/xmlcache/CacheCreation.java
##
@@ -521,12 +521,12 @@ void create(InternalCache cache)
 
 cache.initializePdxRegistry();
 
-for (DiskStore diskStore : diskStores.values()) {
+diskStores.values().parallelStream().forEach(diskStore -> {
   DiskStoreAttributesCreation creation = (DiskStoreAttributesCreation) 
diskStore;
   if (creation != pdxRegDSC) {
 createDiskStore(creation, cache);
   }
-}
+});

Review comment:
   If all the disk stores are on the same disk mount (or the same disk 
controller), doing this with multiple threads may be slower (due to read seeks 
jumping from one location to another). In that case, would it be better for the 
default behavior to be single-threaded, with a system property to enable 
multiple threads, or vice versa? 
   Users commonly configure different disk stores on the same disk controller 
to isolate the region persistence files.
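   One possible shape for that (a sketch only; the property name below is made up for illustration):
   ```
   // Hypothetical flag; create/recover disk stores in parallel only when enabled.
   boolean parallel = Boolean.getBoolean("geode.parallel-disk-store-recovery");
   java.util.stream.Stream<DiskStore> diskStoreStream =
       parallel ? diskStores.values().parallelStream() : diskStores.values().stream();
   diskStoreStream.forEach(diskStore -> {
     DiskStoreAttributesCreation creation = (DiskStoreAttributesCreation) diskStore;
     if (creation != pdxRegDSC) {
       createDiskStore(creation, cache);
     }
   });
   ```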





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Parallel Disk Store Recovery when Cluster Restarts
> --
>
> Key: GEODE-8035
> URL: https://issues.apache.org/jira/browse/GEODE-8035
> Project: Geode
>  Issue Type: Improvement
>Reporter: Jianxia Chen
>Assignee: Jianxia Chen
>Priority: Major
>  Labels: GeodeCommons
>
> Currently, when a Geode cluster restarts, the disk store recovery is 
> serialized. When all regions share the same disk store, the restart process 
> is time-consuming. To improve performance, different regions can use 
> different disk stores with different disk controllers, and parallel disk 
> store recovery can be added. This is expected to significantly reduce the 
> time it takes to restart the Geode cluster.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8055) can not create index on sub regions

2020-05-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100979#comment-17100979
 ] 

ASF subversion and git services commented on GEODE-8055:


Commit c6213f59b83636f3d852406e444825fd19dc96b3 in geode's branch 
refs/heads/support/1.12 from Jinmei Liao
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=c6213f5 ]

GEODE-8055: create index command should work on sub regions (#5034) - spotless


> can not create index on sub regions
> ---
>
> Key: GEODE-8055
> URL: https://issues.apache.org/jira/browse/GEODE-8055
> Project: Geode
>  Issue Type: Bug
>  Components: gfsh
>Affects Versions: 1.7.0, 1.8.0, 1.10.0, 1.9.2, 1.11.0, 1.12.0
>Reporter: Jinmei Liao
>Priority: Major
> Fix For: 1.13.0, 1.14.0
>
>
> When trying to use "create index" command in gfsh to create index on sub 
> regions, we get the following message:
> "Sub-regions are unsupported"
> Pre-1.6, we were able to do that.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-7852) Add client side configuration option to support a SNI proxy

2020-05-06 Thread Bill Burcham (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Burcham resolved GEODE-7852.
-
Fix Version/s: 1.13.0
   Resolution: Fixed

The SNI feature was in the develop branch when the 1.13 branch was cut.

> Add client side configuration option to support a SNI proxy
> ---
>
> Key: GEODE-7852
> URL: https://issues.apache.org/jira/browse/GEODE-7852
> Project: Geode
>  Issue Type: Improvement
>  Components: client/server, membership
>Reporter: Dan Smith
>Assignee: Bruce J Schuchardt
>Priority: Major
> Fix For: 1.13.0
>
>  Time Spent: 13h
>  Remaining Estimate: 0h
>
> Add an option to the client-side configuration to support the use of an [SNI 
> proxy|https://www.bamsoftware.com/computers/sniproxy/].
> See also GEODE-7837, which adds a system property to support a SNI proxy. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7680) Partitioned region clear operations must be successful while interacting with rebalance

2020-05-06 Thread Donal Evans (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donal Evans updated GEODE-7680:
---
Labels: GeodeCommons caching-applications  (was: GeodeCommons)

> Partitioned region clear operations must be successful while interacting with 
> rebalance 
> 
>
> Key: GEODE-7680
> URL: https://issues.apache.org/jira/browse/GEODE-7680
> Project: Geode
>  Issue Type: Sub-task
>  Components: regions
>Reporter: Nabarun Nag
>Assignee: Donal Evans
>Priority: Major
>  Labels: GeodeCommons, caching-applications
>
> Clear operations are successful while rebalance operations are ongoing.
> Acceptance :
>  * DUnit tests validating the above behavior.
>  * Test coverage for when a member departs in this scenario
>  * Test coverage for when a member restarts in this scenario
>  * Unit tests with complete code coverage for the newly written code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-7680) Partitioned region clear operations must be successful while interacting with rebalance

2020-05-06 Thread Donal Evans (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Donal Evans reassigned GEODE-7680:
--

Assignee: Donal Evans

> Partitioned region clear operations must be successful while interacting with 
> rebalance 
> 
>
> Key: GEODE-7680
> URL: https://issues.apache.org/jira/browse/GEODE-7680
> Project: Geode
>  Issue Type: Sub-task
>  Components: regions
>Reporter: Nabarun Nag
>Assignee: Donal Evans
>Priority: Major
>  Labels: GeodeCommons
>
> Clear operations are successful while rebalance operations are ongoing.
> Acceptance :
>  * DUnit tests validating the above behavior.
>  * Test coverage for when a member departs in this scenario
>  * Test coverage for when a member restarts in this scenario
>  * Unit tests with complete code coverage for the newly written code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (GEODE-8075) Geek squad tech support

2020-05-06 Thread Aaron Lindsey (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Lindsey closed GEODE-8075.


> Geek squad tech support
> ---
>
> Key: GEODE-8075
> URL: https://issues.apache.org/jira/browse/GEODE-8075
> Project: Geode
>  Issue Type: Test
>Reporter: Jacks martin
>Priority: Major
>
> [Geek Squad Tech Support|https://igeektechs.org/] gives you on-demand 
> solutions, with highly accurate results. Best Buy offers repair services for 
> most major home appliances including refrigerators, freezers, washers, 
> dryers, dishwashers, stoves, and more.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-8075) Geek squad tech support

2020-05-06 Thread Aaron Lindsey (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Lindsey resolved GEODE-8075.
--
Resolution: Won't Do

This looks like spam to me. If I misunderstood, please re-open the ticket.

> Geek squad tech support
> ---
>
> Key: GEODE-8075
> URL: https://issues.apache.org/jira/browse/GEODE-8075
> Project: Geode
>  Issue Type: Test
>Reporter: Jacks martin
>Priority: Major
>
> [Geek Squad Tech Support|https://igeektechs.org/] gives you on-demand 
> solutions, with highly accurate results. Best Buy offers repair services for 
> most major home appliances including refrigerators, freezers, washers, 
> dryers, dishwashers, stoves, and more.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8055) can not create index on sub regions

2020-05-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100930#comment-17100930
 ] 

ASF subversion and git services commented on GEODE-8055:


Commit a35ef0b3182082bd50ecf99132c69b0ea1840881 in geode's branch 
refs/heads/support/1.12 from Jinmei Liao
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=a35ef0b ]

GEODE-8055: create index command should work on sub regions (#5034)


> can not create index on sub regions
> ---
>
> Key: GEODE-8055
> URL: https://issues.apache.org/jira/browse/GEODE-8055
> Project: Geode
>  Issue Type: Bug
>  Components: gfsh
>Affects Versions: 1.7.0, 1.8.0, 1.10.0, 1.9.2, 1.11.0, 1.12.0
>Reporter: Jinmei Liao
>Priority: Major
> Fix For: 1.13.0, 1.14.0
>
>
> When trying to use "create index" command in gfsh to create index on sub 
> regions, we get the following message:
> "Sub-regions are unsupported"
> Pre-1.6, we were able to do that.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-7707) Tab completing `--url` on `connect` gives two default values

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100925#comment-17100925
 ] 

ASF GitHub Bot commented on GEODE-7707:
---

alb3rtobr opened a new pull request #5061:
URL: https://github.com/apache/geode/pull/5061


   This PR aligns the command help and documentation with the code; there is no 
default value for the `--url` parameter.
   Command help will look as follows:
   ```
   gfsh>connect --url
   
   optional --url: Indicates the base URL to the Manager's HTTP service.  For 
example: 'http://:/geode-mgmt/v1'; no default value
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Tab completing `--url` on `connect` gives two default values 
> -
>
> Key: GEODE-7707
> URL: https://issues.apache.org/jira/browse/GEODE-7707
> Project: Geode
>  Issue Type: Bug
>  Components: management
>Reporter: Michael Oleske
>Assignee: Alberto Bustamante Reyes
>Priority: Major
>  Labels: pull-request-available
>
> Expected result
> To see a string indicating one default value such as "Default is 
> 'http://localhost:7070/geode-mgmt/v1'"
> Actual result
> This string "optional --url: Indicates the base URL to the Manager's HTTP 
> service.  For example: 'http://:/gemfire/v1' Default is 
> 'http://localhost:7070/geode-mgmt/v1'; no default value"  Note the `Default 
> is` and the `no default value`
> steps to reproduce
> execute `gfsh`
> execute `start locator`
> execute `disconnect`
> type `connect --url`
> press tab



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8073) NullPointerException thrown in PartitionedRegion.handleOldNodes

2020-05-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100923#comment-17100923
 ] 

ASF subversion and git services commented on GEODE-8073:


Commit 643c617ec681918db3508030bd22922c76b87b25 in geode's branch 
refs/heads/develop from Eric Shu
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=643c617 ]

GEODE-8073: Fix NPE after FetchKeysMessage failed. (#5055)



> NullPointerException thrown in PartitionedRegion.handleOldNodes
> ---
>
> Key: GEODE-8073
> URL: https://issues.apache.org/jira/browse/GEODE-8073
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Eric Shu
>Assignee: Eric Shu
>Priority: Major
>  Labels: caching-applications
>
> The NPE can be thrown when a remote node is gone unexpectedly.
> {noformat}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.geode.internal.cache.PartitionedRegion.handleOldNodes(PartitionedRegion.java:4610)
> at 
> org.apache.geode.internal.cache.PartitionedRegion.fetchEntries(PartitionedRegion.java:4689)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVKeysPR(BaseCommand.java:1191)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVAllKeys(BaseCommand.java:1124)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKeysValuesPolicy(BaseCommand.java:973)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.fillAndSendRegisterInterestResponseChunks(BaseCommand.java:905)
> at 
> org.apache.geode.internal.cache.tier.sockets.command.RegisterInterest61.cmdExecute(RegisterInterest61.java:260)
> at 
> org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:183)
> at 
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMessage(ServerConnection.java:848)
> at 
> org.apache.geode.internal.cache.tier.sockets.OriginalServerConnection.doOneMessage(OriginalServerConnection.java:72)
> at 
> org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1212)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at 
> org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.lambda$initializeServerConnectionThreadPool$3(AcceptorImpl.java:686)
> at 
> org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8020) buffer corruption in SSL communications

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100893#comment-17100893
 ] 

ASF GitHub Bot commented on GEODE-8020:
---

bschuchardt commented on pull request #5048:
URL: https://github.com/apache/geode/pull/5048#issuecomment-624719148


   > Can we make the new Geode property positive by dropping the "no" and
flipping the default?
   
   Good point, and the new property's name hints that nothing will use direct
buffers.  How about this?
   GeodeGlossary.GEMFIRE_PREFIX + "BufferPool.useHeapBuffers"
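
   A quick sketch of the two naming styles side by side (constants and defaults here are
illustrative only, not the final BufferPool code):

   ```java
   // Illustrative only; property names and defaults are not the final BufferPool code.
   class BufferFlagSketch {
     // Negative flag: the name reads as a double negative, and leaving it unset
     // means "use direct buffers".
     static final boolean useDirectBuffers =
         !Boolean.getBoolean("gemfire.BufferPool.noDirectBuffers");

     // Positive flag: same behavior expressed directly; heap buffers are used only
     // when the property is explicitly set to true.
     static final boolean useHeapBuffers =
         Boolean.getBoolean("gemfire.BufferPool.useHeapBuffers");
   }
   ```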



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> buffer corruption in SSL communications
> ---
>
> Key: GEODE-8020
> URL: https://issues.apache.org/jira/browse/GEODE-8020
> Project: Geode
>  Issue Type: Bug
>  Components: membership, messaging
>Reporter: Bruce J Schuchardt
>Assignee: Bruce J Schuchardt
>Priority: Major
>
> When running an application with SSL enabled I ran into a hang with a lost 
> message.  The sender had a 15 second ack-wait warning pointing to another 
> server in the cluster.  That server had this in its log file at the time the 
> message would have been processed:
> {noformat}
> [info 2020/04/21 11:22:39.437 PDT  rs-bschuchardt-1053-hydra-client-1(bridgegemfire4_host1_12599:12599):41003
>  unshared ordered uid=354 dom #2 port=55262> tid=0xad] P2P message 
> reader@2580db5f io exception for 
> rs-bschuchardt-1053-hydra-client-1(bridgegemfire4_host1_12599:12599):41003@354(GEODE
>  1.10.0)
> javax.net.ssl.SSLException: bad record MAC
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:214)
>   at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1728)
>   at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:986)
>   at sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:912)
>   at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:782)
>   at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:626)
>   at 
> org.apache.geode.internal.net.NioSslEngine.unwrap(NioSslEngine.java:275)
>   at 
> org.apache.geode.internal.tcp.Connection.processInputBuffer(Connection.java:2894)
>   at 
> org.apache.geode.internal.tcp.Connection.readMessages(Connection.java:1745)
>   at org.apache.geode.internal.tcp.Connection.run(Connection.java:1577)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: javax.crypto.BadPaddingException: bad record MAC
>   at sun.security.ssl.InputRecord.decrypt(InputRecord.java:219)
>   at 
> sun.security.ssl.EngineInputRecord.decrypt(EngineInputRecord.java:177)
>   at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:979)
>   ... 10 more
> {noformat}
> I bisected to see when this problem was introduced and found it was this 
> commit:
> {noformat}
> commit 418d929e3e03185cd6330c828c9b9ed395a76d4b
> Author: Mario Ivanac <48509724+miva...@users.noreply.github.com>
> Date:   Fri Nov 1 20:28:57 2019 +0100
> GEODE-6661: Fixed use of Direct and Non-Direct buffers (#4267)
> - Fixed use of Direct and Non-Direct buffers
> {noformat}
> That commit modified the NioSSLEngine to use a "direct" byte buffer instead 
> of a heap byte buffer.  If I revert that one part of the PR the test works 
> okay.
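
For background, the distinction at issue is between heap-backed and direct NIO buffers; a
JDK-only sketch (not Geode code):

{code:java}
import java.nio.ByteBuffer;

class BufferKindsSketch {
  static void allocateBoth() {
    // Heap buffer: backed by a plain byte[] inside the Java heap.
    ByteBuffer heap = ByteBuffer.allocate(32 * 1024);

    // Direct buffer: native memory outside the heap, typically pooled and reused;
    // releasing one back to a pool while another reference still uses it can
    // corrupt in-flight data, which is the kind of failure suspected here.
    ByteBuffer direct = ByteBuffer.allocateDirect(32 * 1024);
  }
}
{code}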



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8020) buffer corruption in SSL communications

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100879#comment-17100879
 ] 

ASF GitHub Bot commented on GEODE-8020:
---

pivotal-jbarrett commented on pull request #5048:
URL: https://github.com/apache/geode/pull/5048#issuecomment-624710775


   Can we make the new Geode property positive by dropping the "no" and flipping
the default?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> buffer corruption in SSL communications
> ---
>
> Key: GEODE-8020
> URL: https://issues.apache.org/jira/browse/GEODE-8020
> Project: Geode
>  Issue Type: Bug
>  Components: membership, messaging
>Reporter: Bruce J Schuchardt
>Assignee: Bruce J Schuchardt
>Priority: Major
>
> When running an application with SSL enabled I ran into a hang with a lost 
> message.  The sender had a 15 second ack-wait warning pointing to another 
> server in the cluster.  That server had this in its log file at the time the 
> message would have been processed:
> {noformat}
> [info 2020/04/21 11:22:39.437 PDT  rs-bschuchardt-1053-hydra-client-1(bridgegemfire4_host1_12599:12599):41003
>  unshared ordered uid=354 dom #2 port=55262> tid=0xad] P2P message 
> reader@2580db5f io exception for 
> rs-bschuchardt-1053-hydra-client-1(bridgegemfire4_host1_12599:12599):41003@354(GEODE
>  1.10.0)
> javax.net.ssl.SSLException: bad record MAC
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:214)
>   at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1728)
>   at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:986)
>   at sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:912)
>   at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:782)
>   at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:626)
>   at 
> org.apache.geode.internal.net.NioSslEngine.unwrap(NioSslEngine.java:275)
>   at 
> org.apache.geode.internal.tcp.Connection.processInputBuffer(Connection.java:2894)
>   at 
> org.apache.geode.internal.tcp.Connection.readMessages(Connection.java:1745)
>   at org.apache.geode.internal.tcp.Connection.run(Connection.java:1577)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: javax.crypto.BadPaddingException: bad record MAC
>   at sun.security.ssl.InputRecord.decrypt(InputRecord.java:219)
>   at 
> sun.security.ssl.EngineInputRecord.decrypt(EngineInputRecord.java:177)
>   at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:979)
>   ... 10 more
> {noformat}
> I bisected to see when this problem was introduced and found it was this 
> commit:
> {noformat}
> commit 418d929e3e03185cd6330c828c9b9ed395a76d4b
> Author: Mario Ivanac <48509724+miva...@users.noreply.github.com>
> Date:   Fri Nov 1 20:28:57 2019 +0100
> GEODE-6661: Fixed use of Direct and Non-Direct buffers (#4267)
> - Fixed use of Direct and Non-Direct buffers
> {noformat}
> That commit modified the NioSSLEngine to use a "direct" byte buffer instead 
> of a heap byte buffer.  If I revert that one part of the PR the test works 
> okay.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8020) buffer corruption in SSL communications

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100861#comment-17100861
 ] 

ASF GitHub Bot commented on GEODE-8020:
---

bschuchardt commented on a change in pull request #5048:
URL: https://github.com/apache/geode/pull/5048#discussion_r420853255



##
File path: 
geode-core/src/main/java/org/apache/geode/internal/net/BufferPool.java
##
@@ -69,7 +74,8 @@ public BufferPool(DMStats stats) {
   /**
* use direct ByteBuffers instead of heap ByteBuffers for NIO operations
*/
-  public static final boolean useDirectBuffers = 
!Boolean.getBoolean("p2p.nodirectBuffers");
+  public static final boolean useDirectBuffers = 
!(Boolean.getBoolean("p2p.nodirectBuffers")
+  || Boolean.getBoolean(GeodeGlossary.GEMFIRE_PREFIX + "noDirectBuffers"));

Review comment:
   > does the GF-prefixed version of the property need to be
exposed/documented?
   
   Yes, but I'm not sure what the best way is to do that.  I updated 
properties.html in this PR with the new property.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> buffer corruption in SSL communications
> ---
>
> Key: GEODE-8020
> URL: https://issues.apache.org/jira/browse/GEODE-8020
> Project: Geode
>  Issue Type: Bug
>  Components: membership, messaging
>Reporter: Bruce J Schuchardt
>Assignee: Bruce J Schuchardt
>Priority: Major
>
> When running an application with SSL enabled I ran into a hang with a lost 
> message.  The sender had a 15 second ack-wait warning pointing to another 
> server in the cluster.  That server had this in its log file at the time the 
> message would have been processed:
> {noformat}
> [info 2020/04/21 11:22:39.437 PDT  rs-bschuchardt-1053-hydra-client-1(bridgegemfire4_host1_12599:12599):41003
>  unshared ordered uid=354 dom #2 port=55262> tid=0xad] P2P message 
> reader@2580db5f io exception for 
> rs-bschuchardt-1053-hydra-client-1(bridgegemfire4_host1_12599:12599):41003@354(GEODE
>  1.10.0)
> javax.net.ssl.SSLException: bad record MAC
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:214)
>   at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1728)
>   at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:986)
>   at sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:912)
>   at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:782)
>   at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:626)
>   at 
> org.apache.geode.internal.net.NioSslEngine.unwrap(NioSslEngine.java:275)
>   at 
> org.apache.geode.internal.tcp.Connection.processInputBuffer(Connection.java:2894)
>   at 
> org.apache.geode.internal.tcp.Connection.readMessages(Connection.java:1745)
>   at org.apache.geode.internal.tcp.Connection.run(Connection.java:1577)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: javax.crypto.BadPaddingException: bad record MAC
>   at sun.security.ssl.InputRecord.decrypt(InputRecord.java:219)
>   at 
> sun.security.ssl.EngineInputRecord.decrypt(EngineInputRecord.java:177)
>   at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:979)
>   ... 10 more
> {noformat}
> I bisected to see when this problem was introduced and found it was this 
> commit:
> {noformat}
> commit 418d929e3e03185cd6330c828c9b9ed395a76d4b
> Author: Mario Ivanac <48509724+miva...@users.noreply.github.com>
> Date:   Fri Nov 1 20:28:57 2019 +0100
> GEODE-6661: Fixed use of Direct and Non-Direct buffers (#4267)
> - Fixed use of Direct and Non-Direct buffers
> {noformat}
> That commit modified the NioSSLEngine to use a "direct" byte buffer instead 
> of a heap byte buffer.  If I revert that one part of the PR the test works 
> okay.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8020) buffer corruption in SSL communications

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100856#comment-17100856
 ] 

ASF GitHub Bot commented on GEODE-8020:
---

echobravopapa commented on a change in pull request #5048:
URL: https://github.com/apache/geode/pull/5048#discussion_r420837745



##
File path: 
geode-core/src/main/java/org/apache/geode/internal/net/NioSslEngine.java
##
@@ -398,6 +398,7 @@ public void close(SocketChannel socketChannel) {
 } finally {
   bufferPool.releaseBuffer(TRACKED_SENDER, myNetData);
   bufferPool.releaseBuffer(TRACKED_RECEIVER, peerAppData);
+  myNetData = null;

Review comment:
   that's cute, personalized data ;P

##
File path: 
geode-core/src/main/java/org/apache/geode/internal/tcp/MsgStreamerList.java
##
@@ -63,25 +63,16 @@ public int writeMessage() throws IOException {
 for (MsgStreamer streamer : this.streamers) {
   if (ex != null) {
 streamer.release();
-// TODO: shouldn't we call continue here?
-// It seems wrong to call writeMessage on a streamer we have just 
released.
-// But why do we call release on a streamer when we had an exception 
on one
-// of the previous streamer?
-// release clears the direct bb and returns it to the pool but leaves
-// it has the "buffer". THen we call writeMessage and it will use 
"buffer"
-// that has also been returned to the pool.
-// I think we only have a MsgStreamerList when a DS has a mix of 
versions
-// which usually is just during a rolling upgrade so that might be why 
we
-// haven't noticed this causing a bug.
-  }
-  try {
-result += streamer.writeMessage();
-// if there is an exception we need to finish the
-// loop and release the other streamer's buffers
-  } catch (RuntimeException e) {
-ex = e;
-  } catch (IOException e) {
-ioex = e;
+  } else {

Review comment:
   thx for removing all those crufty comments

##
File path: 
geode-core/src/main/java/org/apache/geode/internal/net/BufferPool.java
##
@@ -69,7 +74,8 @@ public BufferPool(DMStats stats) {
   /**
* use direct ByteBuffers instead of heap ByteBuffers for NIO operations
*/
-  public static final boolean useDirectBuffers = 
!Boolean.getBoolean("p2p.nodirectBuffers");
+  public static final boolean useDirectBuffers = 
!(Boolean.getBoolean("p2p.nodirectBuffers")
+  || Boolean.getBoolean(GeodeGlossary.GEMFIRE_PREFIX + "noDirectBuffers"));

Review comment:
   does the GF-prefixed version of the property need to be exposed/documented?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> buffer corruption in SSL communications
> ---
>
> Key: GEODE-8020
> URL: https://issues.apache.org/jira/browse/GEODE-8020
> Project: Geode
>  Issue Type: Bug
>  Components: membership, messaging
>Reporter: Bruce J Schuchardt
>Assignee: Bruce J Schuchardt
>Priority: Major
>
> When running an application with SSL enabled I ran into a hang with a lost 
> message.  The sender had a 15 second ack-wait warning pointing to another 
> server in the cluster.  That server had this in its log file at the time the 
> message would have been processed:
> {noformat}
> [info 2020/04/21 11:22:39.437 PDT  rs-bschuchardt-1053-hydra-client-1(bridgegemfire4_host1_12599:12599):41003
>  unshared ordered uid=354 dom #2 port=55262> tid=0xad] P2P message 
> reader@2580db5f io exception for 
> rs-bschuchardt-1053-hydra-client-1(bridgegemfire4_host1_12599:12599):41003@354(GEODE
>  1.10.0)
> javax.net.ssl.SSLException: bad record MAC
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:214)
>   at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1728)
>   at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:986)
>   at sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:912)
>   at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:782)
>   at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:626)
>   at 
> org.apache.geode.internal.net.NioSslEngine.unwrap(NioSslEngine.java:275)
>   at 
> org.apache.geode.internal.tcp.Connection.processInputBuffer(Connection.java:2894)
>   at 
> org.apache.geode.internal.tcp.Connection.readMessages(Connection.java:1745)
>   at org.apache.geode.internal.tcp.Connection.run(Connection.java:1577)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>

[jira] [Created] (GEODE-8080) Update to ACE 6.5.9, Boost 1.73.0

2020-05-06 Thread Jacob Barrett (Jira)
Jacob Barrett created GEODE-8080:


 Summary: Update to ACE 6.5.9, Boost 1.73.0
 Key: GEODE-8080
 URL: https://issues.apache.org/jira/browse/GEODE-8080
 Project: Geode
  Issue Type: Improvement
  Components: native client
Reporter: Jacob Barrett


Update to:
 * ACE 6.5.9
 * Boost 1.73.0



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7707) Tab completing `--url` on `connect` gives two default values

2020-05-06 Thread Alberto Bustamante Reyes (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alberto Bustamante Reyes updated GEODE-7707:

Labels: pull-request-available  (was: )

> Tab completing `--url` on `connect` gives two default values 
> -
>
> Key: GEODE-7707
> URL: https://issues.apache.org/jira/browse/GEODE-7707
> Project: Geode
>  Issue Type: Bug
>  Components: management
>Reporter: Michael Oleske
>Assignee: Alberto Bustamante Reyes
>Priority: Major
>  Labels: pull-request-available
>
> Expected result
> To see a string indicating one default value such as "Default is 
> 'http://localhost:7070/geode-mgmt/v1'"
> Actual result
> This string "optional --url: Indicates the base URL to the Manager's HTTP 
> service.  For example: 'http://:/gemfire/v1' Default is 
> 'http://localhost:7070/geode-mgmt/v1'; no default value"  Note the `Default 
> is` and the `no default value`
> steps to reproduce
> execute `gfsh`
> execute `start locator`
> execute `disconnect`
> type `connect --url`
> press tab



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-7707) Tab completing `--url` on `connect` gives two default values

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100825#comment-17100825
 ] 

ASF GitHub Bot commented on GEODE-7707:
---

alb3rtobr opened a new pull request #5059:
URL: https://github.com/apache/geode/pull/5059


   With this change, the help is correctly shown when pressing tab after 
writing `connect --url` in `gfsh`:
   ```
   gfsh>connect --url
   
   optional --url: Indicates the base URL to the Manager's HTTP service.  For 
example: 'http://:/geode-mgmt/v1'; default: 
'http://localhost:7070/geode-mgmt/v1'
```
   I also realized the documentation was not showing the correct default value.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Tab completing `--url` on `connect` gives two default values 
> -
>
> Key: GEODE-7707
> URL: https://issues.apache.org/jira/browse/GEODE-7707
> Project: Geode
>  Issue Type: Bug
>  Components: management
>Reporter: Michael Oleske
>Assignee: Alberto Bustamante Reyes
>Priority: Major
>
> Expected result
> To see a string indicating one default value such as "Default is 
> 'http://localhost:7070/geode-mgmt/v1'"
> Actual result
> This string "optional --url: Indicates the base URL to the Manager's HTTP 
> service.  For example: 'http://:/gemfire/v1' Default is 
> 'http://localhost:7070/geode-mgmt/v1'; no default value"  Note the `Default 
> is` and the `no default value`
> steps to reproduce
> execute `gfsh`
> execute `start locator`
> execute `disconnect`
> type `connect --url`
> press tab



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-7888) CI Failure: AlterRuntimeCommandDistributedTest.alterStatArchiveFileWithMember_updatesSelectedServerConfigs(false) failed

2020-05-06 Thread Robert Houghton (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-7888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100824#comment-17100824
 ] 

Robert Houghton commented on GEODE-7888:


A similar suspicious-string failure from the same test class, triggered by Geode 
commit 5f9800b291d3a11140eb5a1a972459207e07b50c:

 {noformat}
org.apache.geode.management.internal.cli.commands.AlterRuntimeCommandDistributedTest
 > alterLogDiskSpaceLimitOnMember_OK(true) [0] FAILED
java.lang.AssertionError: Suspicious strings were written to the log during 
this run.
Fix the strings or use IgnoredException.addIgnoredException to ignore.
---
Found suspect string in log4j at line 790

[fatal 2020/05/06 08:08:33.172 GMT  
tid=141] Unknown handshake reply code: 0 messageLength: 90
{noformat}

Logs available:

{noformat}
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=  Test Results URI 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
http://files.apachegeode-ci.info/builds/apache-develop-main/1.14.0-SNAPSHOT.0005/test-results/distributedTest/1588755148/
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

Test report artifacts from this job are available at:

http://files.apachegeode-ci.info/builds/apache-develop-main/1.14.0-SNAPSHOT.0005/test-artifacts/1588755148/windows-gfshdistributedtest-OpenJDK11-1.14.0-SNAPSHOT.0005.tgz
{noformat}
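
If this fatal line turns out to be benign for the test, the suppression suggested by the
failure message would look roughly like this (assuming the usual dunit IgnoredException
API; registration point is illustrative):

{code:java}
import org.apache.geode.test.dunit.IgnoredException;

// Illustrative only: register before the step that produces the fatal log line.
class SuppressHandshakeNoise {
  static void ignoreHandshakeReplyWarning() {
    IgnoredException.addIgnoredException("Unknown handshake reply code");
  }
}
{code}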

> CI Failure: 
> AlterRuntimeCommandDistributedTest.alterStatArchiveFileWithMember_updatesSelectedServerConfigs(false)
>  failed
> 
>
> Key: GEODE-7888
> URL: https://issues.apache.org/jira/browse/GEODE-7888
> Project: Geode
>  Issue Type: Bug
>  Components: ci
>Reporter: Eric Shu
>Priority: Major
>
> Failed in WindowsGfshDistributedTestOpenJDK11 pipeline: 
> https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main/jobs/WindowsGfshDistributedTestOpenJDK11/builds/1386
> org.apache.geode.management.internal.cli.commands.AlterRuntimeCommandDistributedTest
>  > alterStatArchiveFileWithMember_updatesSelectedServerConfigs(false) [1] 
> FAILED
> java.lang.AssertionError: Suspicious strings were written to the log 
> during this run.
> Fix the strings or use IgnoredException.addIgnoredException to ignore.
> ---
> Found suspect string in log4j at line 810
> [fatal 2020/03/17 19:53:11.437 GMT  
> tid=3647] Unknown handshake reply code: 0 messageLength: 90
> Artifacts are in: 
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=  Test Results URI 
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> http://files.apachegeode-ci.info/builds/apache-develop-main/1.13.0-SNAPSHOT.0100/test-results/distributedTest/1584476616/
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> Test report artifacts from this job are available at:
> http://files.apachegeode-ci.info/builds/apache-develop-main/1.13.0-SNAPSHOT.0100/test-artifacts/1584476616/windows-gfshdistributedtest-OpenJDK11-1.13.0-SNAPSHOT.0100.tgz



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-7707) Tab completing `--url` on `connect` gives two default values

2020-05-06 Thread Alberto Bustamante Reyes (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alberto Bustamante Reyes reassigned GEODE-7707:
---

Assignee: Alberto Bustamante Reyes

> Tab completing `--url` on `connect` gives two default values 
> -
>
> Key: GEODE-7707
> URL: https://issues.apache.org/jira/browse/GEODE-7707
> Project: Geode
>  Issue Type: Bug
>  Components: management
>Reporter: Michael Oleske
>Assignee: Alberto Bustamante Reyes
>Priority: Major
>
> Expected result
> To see a string indicating one default value such as "Default is 
> 'http://localhost:7070/geode-mgmt/v1'"
> Actual result
> This string "optional --url: Indicates the base URL to the Manager's HTTP 
> service.  For example: 'http://:/gemfire/v1' Default is 
> 'http://localhost:7070/geode-mgmt/v1'; no default value"  Note the `Default 
> is` and the `no default value`
> steps to reproduce
> execute `gfsh`
> execute `start locator`
> execute `disconnect`
> type `connect --url`
> press tab



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-8071) RebalanceCommand Should Use Daemon Threads

2020-05-06 Thread Juan Ramos (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Ramos resolved GEODE-8071.
---
Resolution: Fixed

> RebalanceCommand Should Use Daemon Threads
> --
>
> Key: GEODE-8071
> URL: https://issues.apache.org/jira/browse/GEODE-8071
> Project: Geode
>  Issue Type: Bug
>  Components: gfsh, management
>Affects Versions: 1.8.0, 1.9.0, 1.9.1, 1.10.0, 1.9.2, 1.11.0, 1.12.0
>Reporter: Juan Ramos
>Assignee: Juan Ramos
>Priority: Major
>  Labels: caching-applications
> Fix For: 1.14.0
>
>
> The {{RebalanceCommand}} uses a non-daemon thread to execute its internal 
> logic:
> {code:title=RebalanceCommand.java|borderStyle=solid}
> ExecutorService commandExecutors = 
> LoggingExecutors.newSingleThreadExecutor("RebalanceCommand", false);
> {code}
> The above prevents the {{locator}} from gracefully shutting down afterwards:
> {noformat}
> "RebalanceCommand1" #971 prio=5 os_prio=0 tid=0x7f9664011000 nid=0x15905 
> waiting on condition [0x7f9651471000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007308c36e8> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (GEODE-8071) RebalanceCommand Should Use Daemon Threads

2020-05-06 Thread Nabarun Nag (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nabarun Nag reopened GEODE-8071:


> RebalanceCommand Should Use Daemon Threads
> --
>
> Key: GEODE-8071
> URL: https://issues.apache.org/jira/browse/GEODE-8071
> Project: Geode
>  Issue Type: Bug
>  Components: gfsh, management
>Affects Versions: 1.8.0, 1.9.0, 1.9.1, 1.10.0, 1.9.2, 1.11.0, 1.12.0
>Reporter: Juan Ramos
>Assignee: Juan Ramos
>Priority: Major
>  Labels: caching-applications
> Fix For: 1.14.0
>
>
> The {{RebalanceCommand}} uses a non-daemon thread to execute its internal 
> logic:
> {code:title=RebalanceCommand.java|borderStyle=solid}
> ExecutorService commandExecutors = 
> LoggingExecutors.newSingleThreadExecutor("RebalanceCommand", false);
> {code}
> The above prevents the {{locator}} from gracefully shutting down afterwards:
> {noformat}
> "RebalanceCommand1" #971 prio=5 os_prio=0 tid=0x7f9664011000 nid=0x15905 
> waiting on condition [0x7f9651471000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007308c36e8> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-8071) RebalanceCommand Should Use Daemon Threads

2020-05-06 Thread Juan Ramos (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Ramos resolved GEODE-8071.
---
Resolution: Fixed

> RebalanceCommand Should Use Daemon Threads
> --
>
> Key: GEODE-8071
> URL: https://issues.apache.org/jira/browse/GEODE-8071
> Project: Geode
>  Issue Type: Bug
>  Components: gfsh, management
>Affects Versions: 1.8.0, 1.9.0, 1.9.1, 1.10.0, 1.9.2, 1.11.0, 1.12.0
>Reporter: Juan Ramos
>Assignee: Juan Ramos
>Priority: Major
>  Labels: caching-applications
> Fix For: 1.14.0
>
>
> The {{RebalanceCommand}} uses a non-daemon thread to execute its internal 
> logic:
> {code:title=RebalanceCommand.java|borderStyle=solid}
> ExecutorService commandExecutors = 
> LoggingExecutors.newSingleThreadExecutor("RebalanceCommand", false);
> {code}
> The above prevents the {{locator}} from gracefully shutting down afterwards:
> {noformat}
> "RebalanceCommand1" #971 prio=5 os_prio=0 tid=0x7f9664011000 nid=0x15905 
> waiting on condition [0x7f9651471000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007308c36e8> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-8071) RebalanceCommand Should Use Daemon Threads

2020-05-06 Thread Juan Ramos (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Ramos updated GEODE-8071:
--
Fix Version/s: 1.14.0

> RebalanceCommand Should Use Daemon Threads
> --
>
> Key: GEODE-8071
> URL: https://issues.apache.org/jira/browse/GEODE-8071
> Project: Geode
>  Issue Type: Bug
>  Components: gfsh, management
>Affects Versions: 1.8.0, 1.9.0, 1.9.1, 1.10.0, 1.9.2, 1.11.0, 1.12.0
>Reporter: Juan Ramos
>Assignee: Juan Ramos
>Priority: Major
>  Labels: caching-applications
> Fix For: 1.14.0
>
>
> The {{RebalanceCommand}} uses a non-daemon thread to execute its internal 
> logic:
> {code:title=RebalanceCommand.java|borderStyle=solid}
> ExecutorService commandExecutors = 
> LoggingExecutors.newSingleThreadExecutor("RebalanceCommand", false);
> {code}
> The above prevents the {{locator}} from gracefully shutting down afterwards:
> {noformat}
> "RebalanceCommand1" #971 prio=5 os_prio=0 tid=0x7f9664011000 nid=0x15905 
> waiting on condition [0x7f9651471000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007308c36e8> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8071) RebalanceCommand Should Use Daemon Threads

2020-05-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100706#comment-17100706
 ] 

ASF subversion and git services commented on GEODE-8071:


Commit d8e86cb720c054b154a16cc88fee88e73db709f3 in geode's branch 
refs/heads/develop from Juan José Ramos
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=d8e86cb ]

GEODE-8071: Use daemon threads in RebalanceCommand (#5054)

Changed the ExecutorService within RebalanceCommand to use daemon
threads; otherwise the locator refuses to shut down gracefully.

- Fixed minor warnings.
- Added distributed tests.
- Refactored RebalanceCommandDistributedTest.
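
For reference, the essence of the change is to hand the executor a factory that produces
daemon threads. A JDK-only sketch of the idea (the real code passes the isDaemon flag to
Geode's LoggingExecutors, as shown in the issue description below):

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class DaemonExecutorSketch {
  static ExecutorService newRebalanceExecutor() {
    return Executors.newSingleThreadExecutor(runnable -> {
      Thread thread = new Thread(runnable, "RebalanceCommand");
      thread.setDaemon(true); // an idle non-daemon worker would keep the JVM alive
      return thread;
    });
  }
}
{code}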

> RebalanceCommand Should Use Daemon Threads
> --
>
> Key: GEODE-8071
> URL: https://issues.apache.org/jira/browse/GEODE-8071
> Project: Geode
>  Issue Type: Bug
>  Components: gfsh, management
>Affects Versions: 1.8.0, 1.9.0, 1.9.1, 1.10.0, 1.9.2, 1.11.0, 1.12.0
>Reporter: Juan Ramos
>Assignee: Juan Ramos
>Priority: Major
>  Labels: caching-applications
>
> The {{RebalanceCommand}} uses a non-daemon thread to execute its internal 
> logic:
> {code:title=RebalanceCommand.java|borderStyle=solid}
> ExecutorService commandExecutors = 
> LoggingExecutors.newSingleThreadExecutor("RebalanceCommand", false);
> {code}
> The above prevents the {{locator}} from gracefully shutting down afterwards:
> {noformat}
> "RebalanceCommand1" #971 prio=5 os_prio=0 tid=0x7f9664011000 nid=0x15905 
> waiting on condition [0x7f9651471000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007308c36e8> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (GEODE-6831) Versioning of JAR Files doc is wrong

2020-05-06 Thread YuJue Li (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YuJue Li closed GEODE-6831.
---

> Versioning of JAR Files  doc is wrong
> -
>
> Key: GEODE-6831
> URL: https://issues.apache.org/jira/browse/GEODE-6831
> Project: Geode
>  Issue Type: Improvement
>  Components: docs
>Reporter: YuJue Li
>Assignee: Dave Barnes
>Priority: Major
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> [https://geode.apache.org/docs/guide/19/configuring/cluster_config/deploying_application_jars.html]
>  
>  
> This document has such a paragraph: 
>  
>    Versioning of JAR Files 
>  
> When you deploy JAR files to a cluster or member group, the JAR file is 
> modified to indicate version information in its name. Each JAR filename is 
> prefixed with {{vf.gf#}} and contains a version number at the end of the 
> filename. For example, if you deploy {{MyClasses.jar}} five times, the filename 
> is displayed as {{vf.gf#MyClasses.jar#5}} when you list all deployed jars. 
>  
> but,in my environment, it is shown as follows: 
>  
> gfsh>list deployed 
> Member  |  JAR   | JAR Location 
> --- | -- | ---
> server1 | ra.jar | /media/liyujue/data/geode/server1/ra.v1.jar 
> server1 | mx4j-3.0.2.jar | /media/liyujue/data/geode/server1/mx4j-3.0.2.v1.jar 
> server2 | ra.jar | /media/liyujue/data/geode/server2/ra.v1.jar 
> server2 | mx4j-3.0.2.jar | /media/liyujue/data/geode/server2/mx4j-3.0.2.v1.jar
> The description here is incorrect.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (GEODE-8079) AttributesMutator Should Validate AsyncEventQueue/GatewaySender Type

2020-05-06 Thread Juan Ramos (Jira)
Juan Ramos created GEODE-8079:
-

 Summary: AttributesMutator Should Validate 
AsyncEventQueue/GatewaySender Type
 Key: GEODE-8079
 URL: https://issues.apache.org/jira/browse/GEODE-8079
 Project: Geode
  Issue Type: Bug
  Components: configuration, gfsh, wan
Reporter: Juan Ramos


By design, a parallel {{gateway-sender}} can't be attached to a {{REPLICATE}} 
region.
 While working on GEODE-8029 I've found that the above fact is correctly 
validated when creating or initialising the region, but totally ignored when 
updating the region through the {{AttributesMutator}} class.
 Altering a {{REPLICATE}} region to dispatch events through a parallel 
{{gateway-sender}} results in cryptic errors while putting entries into the 
region afterwards:
{noformat}
[vm1] [warn 2020/05/06 10:34:09.638 IST  
tid=0x13] GatewaySender: Not queuing the event 
GatewaySenderEventImpl[id=EventID[id=18 
bytes;threadID=0x10062|2;sequenceID=91;bucketId=98];action=0;operation=CREATE;region=/TestRegion;key=Key90;value=Value90;valueIsObject=1;numberOfParts=9;callbackArgument=GatewaySenderEventCallbackArgument
 
[originalCallbackArg=null;originatingSenderId=1;recipientGatewayReceivers={2}];possibleDuplicate=false;creationTime=1588757649638;shadowKey=-1;timeStamp=1588757649638;acked=false;dispatched=false;bucketId=98;isConcurrencyConflict=false],
 as the region for which this event originated is not yet configured in the 
GatewaySender

[vm1] [warn 2020/05/06 10:34:09.638 IST  
tid=0x13] GatewaySender: Not queuing the event 
GatewaySenderEventImpl[id=EventID[id=18 
bytes;threadID=0x10063|2;sequenceID=92;bucketId=99];action=0;operation=CREATE;region=/TestRegion;key=Key91;value=Value91;valueIsObject=1;numberOfParts=9;callbackArgument=GatewaySenderEventCallbackArgument
 
[originalCallbackArg=null;originatingSenderId=1;recipientGatewayReceivers={2}];possibleDuplicate=false;creationTime=1588757649638;shadowKey=-1;timeStamp=1588757649638;acked=false;dispatched=false;bucketId=99;isConcurrencyConflict=false],
 as the region for which this event originated is not yet configured in the 
GatewaySender

[vm1] [warn 2020/05/06 10:34:09.639 IST  
tid=0x13] GatewaySender: Not queuing the event 
GatewaySenderEventImpl[id=EventID[id=18 
bytes;threadID=0x10064|2;sequenceID=93;bucketId=100];action=0;operation=CREATE;region=/TestRegion;key=Key92;value=Value92;valueIsObject=1;numberOfParts=9;callbackArgument=GatewaySenderEventCallbackArgument
 
[originalCallbackArg=null;originatingSenderId=1;recipientGatewayReceivers={2}];possibleDuplicate=false;creationTime=1588757649638;shadowKey=-1;timeStamp=1588757649638;acked=false;dispatched=false;bucketId=100;isConcurrencyConflict=false],
 as the region for which this event originated is not yet configured in the 
GatewaySender

[vm1] [warn 2020/05/06 10:34:09.639 IST  
tid=0x13] GatewaySender: Not queuing the event 
GatewaySenderEventImpl[id=EventID[id=18 
bytes;threadID=0x10065|2;sequenceID=94;bucketId=101];action=0;operation=CREATE;region=/TestRegion;key=Key93;value=Value93;valueIsObject=1;numberOfParts=9;callbackArgument=GatewaySenderEventCallbackArgument
 
[originalCallbackArg=null;originatingSenderId=1;recipientGatewayReceivers={2}];possibleDuplicate=false;creationTime=1588757649639;shadowKey=-1;timeStamp=1588757649639;acked=false;dispatched=false;bucketId=101;isConcurrencyConflict=false],
 as the region for which this event originated is not yet configured in the 
GatewaySender
{noformat}
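
A minimal Java sketch of the API path that slips past the validation (assuming the public
Cache/Region/AttributesMutator API; names match the example above, cache creation and
error handling omitted):

{code:java}
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.Region;

class MutatorValidationGapSketch {
  static void attachParallelSenderToReplicateRegion(Cache cache) {
    Region<String, String> region = cache.getRegion("TestRegion"); // REPLICATE region
    // Creating the region with this sender id is rejected, but the mutator currently
    // accepts it, which later surfaces as the "Not queuing the event" warnings above.
    region.getAttributesMutator().addGatewaySenderId("MyGateway");
  }
}
{code}
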
When done from {{GFSH}}, on the other hand, the server doesn't even start up 
after altering the region as the {{cluster-configuration}} is invalid:
{noformat}
gfsh -e "connect" -e "create region --name=TestRegion --type=REPLICATE"

Member  | Status | Message
--- | -- | -
cluster1-server | OK | Region "/TestRegion" created on "cluster1-server"
Cluster configuration for group 'cluster' is updated.


gfsh -e "connect" -e "create gateway-sender --id=MyGateway 
--remote-distributed-system-id=2 --parallel=true"

Member  | Status | Message
--- | -- | 
--
cluster1-server | OK | GatewaySender "MyGateway" created on 
"cluster1-server"
Cluster configuration for group 'cluster' is updated.


gfsh -e "connect" -e "alter region --name=/TestRegion 
-–gateway-sender-id=MyGateway"

Member  | Status | Message
--- | -- | -
cluster1-server | OK | Region TestRegion altered
Cluster configuration for group 'cluster' is updated.


// Restart Cluster
[warn 2020/05/06 10:09:07.385 IST  tid=0x1] Initialization failed for 
Region /TestRegion
org.apache.geode.internal.cache.wan.GatewaySenderConfigurationException: 
Parallel gateway sender MyGateway can not be used with replicated region 
/TestRegion
at 

[jira] [Assigned] (GEODE-8079) AttributesMutator Should Validate AsyncEventQueue/GatewaySender Type

2020-05-06 Thread Juan Ramos (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Ramos reassigned GEODE-8079:
-

Assignee: Juan Ramos

> AttributesMutator Should Validate AsyncEventQueue/GatewaySender Type
> 
>
> Key: GEODE-8079
> URL: https://issues.apache.org/jira/browse/GEODE-8079
> Project: Geode
>  Issue Type: Bug
>  Components: configuration, gfsh, wan
>Affects Versions: 1.12.0
>Reporter: Juan Ramos
>Assignee: Juan Ramos
>Priority: Major
>  Labels: caching-applications
>
> By design, a parallel {{gateway-sender}} can't be attached to a {{REPLICATE}} 
> region.
>  While working on GEODE-8029 I've found that the above fact is correctly 
> validated when creating or initialising the region, but totally ignored when 
> updating the region through the {{AttributesMutator}} class.
>  Altering a {{REPLICATE}} region to dispatch events through a parallel 
> {{gateway-sender}} results in cryptic errors while putting entries into the 
> region afterwards:
> {noformat}
> [vm1] [warn 2020/05/06 10:34:09.638 IST  
> tid=0x13] GatewaySender: Not queuing the event 
> GatewaySenderEventImpl[id=EventID[id=18 
> bytes;threadID=0x10062|2;sequenceID=91;bucketId=98];action=0;operation=CREATE;region=/TestRegion;key=Key90;value=Value90;valueIsObject=1;numberOfParts=9;callbackArgument=GatewaySenderEventCallbackArgument
>  
> [originalCallbackArg=null;originatingSenderId=1;recipientGatewayReceivers={2}];possibleDuplicate=false;creationTime=1588757649638;shadowKey=-1;timeStamp=1588757649638;acked=false;dispatched=false;bucketId=98;isConcurrencyConflict=false],
>  as the region for which this event originated is not yet configured in the 
> GatewaySender
> [vm1] [warn 2020/05/06 10:34:09.638 IST  
> tid=0x13] GatewaySender: Not queuing the event 
> GatewaySenderEventImpl[id=EventID[id=18 
> bytes;threadID=0x10063|2;sequenceID=92;bucketId=99];action=0;operation=CREATE;region=/TestRegion;key=Key91;value=Value91;valueIsObject=1;numberOfParts=9;callbackArgument=GatewaySenderEventCallbackArgument
>  
> [originalCallbackArg=null;originatingSenderId=1;recipientGatewayReceivers={2}];possibleDuplicate=false;creationTime=1588757649638;shadowKey=-1;timeStamp=1588757649638;acked=false;dispatched=false;bucketId=99;isConcurrencyConflict=false],
>  as the region for which this event originated is not yet configured in the 
> GatewaySender
> [vm1] [warn 2020/05/06 10:34:09.639 IST  
> tid=0x13] GatewaySender: Not queuing the event 
> GatewaySenderEventImpl[id=EventID[id=18 
> bytes;threadID=0x10064|2;sequenceID=93;bucketId=100];action=0;operation=CREATE;region=/TestRegion;key=Key92;value=Value92;valueIsObject=1;numberOfParts=9;callbackArgument=GatewaySenderEventCallbackArgument
>  
> [originalCallbackArg=null;originatingSenderId=1;recipientGatewayReceivers={2}];possibleDuplicate=false;creationTime=1588757649638;shadowKey=-1;timeStamp=1588757649638;acked=false;dispatched=false;bucketId=100;isConcurrencyConflict=false],
>  as the region for which this event originated is not yet configured in the 
> GatewaySender
> [vm1] [warn 2020/05/06 10:34:09.639 IST  
> tid=0x13] GatewaySender: Not queuing the event 
> GatewaySenderEventImpl[id=EventID[id=18 
> bytes;threadID=0x10065|2;sequenceID=94;bucketId=101];action=0;operation=CREATE;region=/TestRegion;key=Key93;value=Value93;valueIsObject=1;numberOfParts=9;callbackArgument=GatewaySenderEventCallbackArgument
>  
> [originalCallbackArg=null;originatingSenderId=1;recipientGatewayReceivers={2}];possibleDuplicate=false;creationTime=1588757649639;shadowKey=-1;timeStamp=1588757649639;acked=false;dispatched=false;bucketId=101;isConcurrencyConflict=false],
>  as the region for which this event originated is not yet configured in the 
> GatewaySender
> {noformat}
> When done from {{GFSH}}, on the other hand, the server doesn't even start up 
> after altering the region as the {{cluster-configuration}} is invalid:
> {noformat}
> gfsh -e "connect" -e "create region --name=TestRegion --type=REPLICATE"
> Member  | Status | Message
> --- | -- | -
> cluster1-server | OK | Region "/TestRegion" created on "cluster1-server"
> Cluster configuration for group 'cluster' is updated.
> gfsh -e "connect" -e "create gateway-sender --id=MyGateway 
> --remote-distributed-system-id=2 --parallel=true"
> Member  | Status | Message
> --- | -- | 
> --
> cluster1-server | OK | GatewaySender "MyGateway" created on 
> "cluster1-server"
> Cluster configuration for group 'cluster' is updated.
> gfsh -e "connect" -e "alter region --name=/TestRegion 
> -–gateway-sender-id=MyGateway"
> Member  | Status | Message

[jira] [Updated] (GEODE-8079) AttributesMutator Should Validate AsyncEventQueue/GatewaySender Type

2020-05-06 Thread Juan Ramos (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Ramos updated GEODE-8079:
--
Labels: caching-applications  (was: )

> AttributesMutator Should Validate AsyncEventQueue/GatewaySender Type
> 
>
> Key: GEODE-8079
> URL: https://issues.apache.org/jira/browse/GEODE-8079
> Project: Geode
>  Issue Type: Bug
>  Components: configuration, gfsh, wan
>Affects Versions: 1.12.0
>Reporter: Juan Ramos
>Priority: Major
>  Labels: caching-applications
>
> By design, a parallel {{gateway-sender}} can't be attached to a {{REPLICATE}} 
> region.
>  While working on GEODE-8029 I've found that the above fact is correctly 
> validated when creating or initialising the region, but totally ignored when 
> updating the region through the {{AttributesMutator}} class.
>  Altering a {{REPLICATE}} region to dispatch events through a parallel 
> {{gateway-sender}} results in cryptic errors while putting entries into the 
> region afterwards:
> {noformat}
> [vm1] [warn 2020/05/06 10:34:09.638 IST  
> tid=0x13] GatewaySender: Not queuing the event 
> GatewaySenderEventImpl[id=EventID[id=18 
> bytes;threadID=0x10062|2;sequenceID=91;bucketId=98];action=0;operation=CREATE;region=/TestRegion;key=Key90;value=Value90;valueIsObject=1;numberOfParts=9;callbackArgument=GatewaySenderEventCallbackArgument
>  
> [originalCallbackArg=null;originatingSenderId=1;recipientGatewayReceivers={2}];possibleDuplicate=false;creationTime=1588757649638;shadowKey=-1;timeStamp=1588757649638;acked=false;dispatched=false;bucketId=98;isConcurrencyConflict=false],
>  as the region for which this event originated is not yet configured in the 
> GatewaySender
> [vm1] [warn 2020/05/06 10:34:09.638 IST  
> tid=0x13] GatewaySender: Not queuing the event 
> GatewaySenderEventImpl[id=EventID[id=18 
> bytes;threadID=0x10063|2;sequenceID=92;bucketId=99];action=0;operation=CREATE;region=/TestRegion;key=Key91;value=Value91;valueIsObject=1;numberOfParts=9;callbackArgument=GatewaySenderEventCallbackArgument
>  
> [originalCallbackArg=null;originatingSenderId=1;recipientGatewayReceivers={2}];possibleDuplicate=false;creationTime=1588757649638;shadowKey=-1;timeStamp=1588757649638;acked=false;dispatched=false;bucketId=99;isConcurrencyConflict=false],
>  as the region for which this event originated is not yet configured in the 
> GatewaySender
> [vm1] [warn 2020/05/06 10:34:09.639 IST  
> tid=0x13] GatewaySender: Not queuing the event 
> GatewaySenderEventImpl[id=EventID[id=18 
> bytes;threadID=0x10064|2;sequenceID=93;bucketId=100];action=0;operation=CREATE;region=/TestRegion;key=Key92;value=Value92;valueIsObject=1;numberOfParts=9;callbackArgument=GatewaySenderEventCallbackArgument
>  
> [originalCallbackArg=null;originatingSenderId=1;recipientGatewayReceivers={2}];possibleDuplicate=false;creationTime=1588757649638;shadowKey=-1;timeStamp=1588757649638;acked=false;dispatched=false;bucketId=100;isConcurrencyConflict=false],
>  as the region for which this event originated is not yet configured in the 
> GatewaySender
> [vm1] [warn 2020/05/06 10:34:09.639 IST  
> tid=0x13] GatewaySender: Not queuing the event 
> GatewaySenderEventImpl[id=EventID[id=18 
> bytes;threadID=0x10065|2;sequenceID=94;bucketId=101];action=0;operation=CREATE;region=/TestRegion;key=Key93;value=Value93;valueIsObject=1;numberOfParts=9;callbackArgument=GatewaySenderEventCallbackArgument
>  
> [originalCallbackArg=null;originatingSenderId=1;recipientGatewayReceivers={2}];possibleDuplicate=false;creationTime=1588757649639;shadowKey=-1;timeStamp=1588757649639;acked=false;dispatched=false;bucketId=101;isConcurrencyConflict=false],
>  as the region for which this event originated is not yet configured in the 
> GatewaySender
> {noformat}
> When done from {{GFSH}}, on the other hand, the server doesn't even start up 
> after altering the region as the {{cluster-configuration}} is invalid:
> {noformat}
> gfsh -e "connect" -e "create region --name=TestRegion --type=REPLICATE"
> Member  | Status | Message
> --- | -- | -
> cluster1-server | OK | Region "/TestRegion" created on "cluster1-server"
> Cluster configuration for group 'cluster' is updated.
> gfsh -e "connect" -e "create gateway-sender --id=MyGateway 
> --remote-distributed-system-id=2 --parallel=true"
> Member  | Status | Message
> --- | -- | 
> --
> cluster1-server | OK | GatewaySender "MyGateway" created on 
> "cluster1-server"
> Cluster configuration for group 'cluster' is updated.
> gfsh -e "connect" -e "alter region --name=/TestRegion 
> -–gateway-sender-id=MyGateway"
> Member  | Status | Message
> --- | 

[jira] [Updated] (GEODE-8079) AttributesMutator Should Validate AsyncEventQueue/GatewaySender Type

2020-05-06 Thread Juan Ramos (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Ramos updated GEODE-8079:
--
Affects Version/s: 1.12.0

> AttributesMutator Should Validate AsyncEventQueue/GatewaySender Type
> 
>
> Key: GEODE-8079
> URL: https://issues.apache.org/jira/browse/GEODE-8079
> Project: Geode
>  Issue Type: Bug
>  Components: configuration, gfsh, wan
>Affects Versions: 1.12.0
>Reporter: Juan Ramos
>Priority: Major
>
> By design, a parallel {{gateway-sender}} can't be attached to a {{REPLICATE}} 
> region.
>  While working on GEODE-8029 I've found that the above fact is correctly 
> validated when creating or initialising the region, but totally ignored when 
> updating the region through the {{AttributesMutator}} class.
>  Altering a {{REPLICATE}} region to dispatch events through a parallel 
> {{gateway-sender}} results in cryptic errors while putting entries into the 
> region afterwards:
> {noformat}
> [vm1] [warn 2020/05/06 10:34:09.638 IST  
> tid=0x13] GatewaySender: Not queuing the event 
> GatewaySenderEventImpl[id=EventID[id=18 
> bytes;threadID=0x10062|2;sequenceID=91;bucketId=98];action=0;operation=CREATE;region=/TestRegion;key=Key90;value=Value90;valueIsObject=1;numberOfParts=9;callbackArgument=GatewaySenderEventCallbackArgument
>  
> [originalCallbackArg=null;originatingSenderId=1;recipientGatewayReceivers={2}];possibleDuplicate=false;creationTime=1588757649638;shadowKey=-1;timeStamp=1588757649638;acked=false;dispatched=false;bucketId=98;isConcurrencyConflict=false],
>  as the region for which this event originated is not yet configured in the 
> GatewaySender
> [vm1] [warn 2020/05/06 10:34:09.638 IST  
> tid=0x13] GatewaySender: Not queuing the event 
> GatewaySenderEventImpl[id=EventID[id=18 
> bytes;threadID=0x10063|2;sequenceID=92;bucketId=99];action=0;operation=CREATE;region=/TestRegion;key=Key91;value=Value91;valueIsObject=1;numberOfParts=9;callbackArgument=GatewaySenderEventCallbackArgument
>  
> [originalCallbackArg=null;originatingSenderId=1;recipientGatewayReceivers={2}];possibleDuplicate=false;creationTime=1588757649638;shadowKey=-1;timeStamp=1588757649638;acked=false;dispatched=false;bucketId=99;isConcurrencyConflict=false],
>  as the region for which this event originated is not yet configured in the 
> GatewaySender
> [vm1] [warn 2020/05/06 10:34:09.639 IST  
> tid=0x13] GatewaySender: Not queuing the event 
> GatewaySenderEventImpl[id=EventID[id=18 
> bytes;threadID=0x10064|2;sequenceID=93;bucketId=100];action=0;operation=CREATE;region=/TestRegion;key=Key92;value=Value92;valueIsObject=1;numberOfParts=9;callbackArgument=GatewaySenderEventCallbackArgument
>  
> [originalCallbackArg=null;originatingSenderId=1;recipientGatewayReceivers={2}];possibleDuplicate=false;creationTime=1588757649638;shadowKey=-1;timeStamp=1588757649638;acked=false;dispatched=false;bucketId=100;isConcurrencyConflict=false],
>  as the region for which this event originated is not yet configured in the 
> GatewaySender
> [vm1] [warn 2020/05/06 10:34:09.639 IST  
> tid=0x13] GatewaySender: Not queuing the event 
> GatewaySenderEventImpl[id=EventID[id=18 
> bytes;threadID=0x10065|2;sequenceID=94;bucketId=101];action=0;operation=CREATE;region=/TestRegion;key=Key93;value=Value93;valueIsObject=1;numberOfParts=9;callbackArgument=GatewaySenderEventCallbackArgument
>  
> [originalCallbackArg=null;originatingSenderId=1;recipientGatewayReceivers={2}];possibleDuplicate=false;creationTime=1588757649639;shadowKey=-1;timeStamp=1588757649639;acked=false;dispatched=false;bucketId=101;isConcurrencyConflict=false],
>  as the region for which this event originated is not yet configured in the 
> GatewaySender
> {noformat}
> When done from {{GFSH}}, on the other hand, the server doesn't even start up 
> after altering the region as the {{cluster-configuration}} is invalid:
> {noformat}
> gfsh -e "connect" -e "create region --name=TestRegion --type=REPLICATE"
> Member  | Status | Message
> --- | -- | -
> cluster1-server | OK | Region "/TestRegion" created on "cluster1-server"
> Cluster configuration for group 'cluster' is updated.
> gfsh -e "connect" -e "create gateway-sender --id=MyGateway 
> --remote-distributed-system-id=2 --parallel=true"
> Member  | Status | Message
> --- | -- | 
> --
> cluster1-server | OK | GatewaySender "MyGateway" created on 
> "cluster1-server"
> Cluster configuration for group 'cluster' is updated.
> gfsh -e "connect" -e "alter region --name=/TestRegion 
> -–gateway-sender-id=MyGateway"
> Member  | Status | Message
> --- | -- | -
> cluster1-server | OK   

[jira] [Updated] (GEODE-8071) RebalanceCommand Should Use Daemon Threads

2020-05-06 Thread Juan Ramos (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Ramos updated GEODE-8071:
--
Affects Version/s: (was: 1.13.0)
   1.8.0
   1.9.0
   1.9.1
   1.10.0
   1.9.2
   1.11.0
   1.12.0

> RebalanceCommand Should Use Daemon Threads
> --
>
> Key: GEODE-8071
> URL: https://issues.apache.org/jira/browse/GEODE-8071
> Project: Geode
>  Issue Type: Bug
>  Components: gfsh, management
>Affects Versions: 1.8.0, 1.9.0, 1.9.1, 1.10.0, 1.9.2, 1.11.0, 1.12.0
>Reporter: Juan Ramos
>Assignee: Juan Ramos
>Priority: Major
>  Labels: caching-applications
>
> The {{RebalanceCommand}} uses a non-daemon thread to execute its internal 
> logic:
> {code:title=RebalanceCommand.java|borderStyle=solid}
> ExecutorService commandExecutors = 
> LoggingExecutors.newSingleThreadExecutor("RebalanceCommand", false);
> {code}
> The above prevents the {{locator}} from shutting down gracefully afterwards:
> {noformat}
> "RebalanceCommand1" #971 prio=5 os_prio=0 tid=0x7f9664011000 nid=0x15905 
> waiting on condition [0x7f9651471000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007308c36e8> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
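
A minimal, self-contained sketch (plain JDK executors, not the Geode {{LoggingExecutors}} API) of why an idle non-daemon pool thread keeps the JVM alive, and how a daemon thread factory avoids that; it assumes the fix amounts to marking the command's worker thread as a daemon:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DaemonExecutorSketch {
  public static void main(String[] args) {
    // Non-daemon worker: once created, it keeps the JVM alive until the pool
    // is shut down explicitly -- the same effect that blocks the locator above.
    ExecutorService nonDaemon = Executors.newSingleThreadExecutor();

    // Daemon worker: an idle thread from this pool no longer prevents JVM exit.
    ExecutorService daemon = Executors.newSingleThreadExecutor(runnable -> {
      Thread thread = new Thread(runnable, "RebalanceCommand-sketch");
      thread.setDaemon(true);
      return thread;
    });

    nonDaemon.submit(() -> System.out.println("non-daemon task"));
    daemon.submit(() -> System.out.println("daemon task"));

    // The explicit shutdown is only needed for the non-daemon pool; without it
    // this program would never terminate.
    nonDaemon.shutdown();
  }
}
{code}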



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8071) RebalanceCommand Should Use Daemon Threads

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100578#comment-17100578
 ] 

ASF GitHub Bot commented on GEODE-8071:
---

jujoramos commented on pull request #5054:
URL: https://github.com/apache/geode/pull/5054#issuecomment-624511969


   @DonalEvans 
   > One thing I'm wondering though... do we know of any circumstances in which 
a locator might launch a non-daemon thread other than this? I'm concerned that 
the test might fail for unrelated reasons if something other than the rebalance 
command happens to start a thread.
   
   Under the current test conditions no other `non-daemon` threads should be 
launched by the locator; it's a valid concern, though, so thanks for bringing 
it up.
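
One way a test could make that concern explicit (a sketch of the idea, not the assertion used in the PR) is to collect the names of live non-daemon threads and assert the list is empty, so an unexpected thread shows up by name in the failure message instead of as an unrelated hang:

{code:java}
import java.util.List;
import java.util.stream.Collectors;

public class NonDaemonThreadCheck {
  /** Names of live, non-daemon threads other than the main thread. */
  public static List<String> liveNonDaemonThreads() {
    return Thread.getAllStackTraces().keySet().stream()
        .filter(Thread::isAlive)
        .filter(thread -> !thread.isDaemon())
        .filter(thread -> !"main".equals(thread.getName()))
        .map(Thread::getName)
        .collect(Collectors.toList());
  }
}
{code}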



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> RebalanceCommand Should Use Daemon Threads
> --
>
> Key: GEODE-8071
> URL: https://issues.apache.org/jira/browse/GEODE-8071
> Project: Geode
>  Issue Type: Bug
>  Components: gfsh, management
>Affects Versions: 1.13.0
>Reporter: Juan Ramos
>Assignee: Juan Ramos
>Priority: Major
>  Labels: caching-applications
>
> The {{RebalanceCommand}} uses a non-daemon thread to execute its internal 
> logic:
> {code:title=RebalanceCommand.java|borderStyle=solid}
> ExecutorService commandExecutors = 
> LoggingExecutors.newSingleThreadExecutor("RebalanceCommand", false);
> {code}
> The above prevents the {{locator}} from shutting down gracefully afterwards:
> {noformat}
> "RebalanceCommand1" #971 prio=5 os_prio=0 tid=0x7f9664011000 nid=0x15905 
> waiting on condition [0x7f9651471000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007308c36e8> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8071) RebalanceCommand Should Use Daemon Threads

2020-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100576#comment-17100576
 ] 

ASF GitHub Bot commented on GEODE-8071:
---

jujoramos commented on a change in pull request #5054:
URL: https://github.com/apache/geode/pull/5054#discussion_r420622492



##
File path: 
geode-dunit/src/main/java/org/apache/geode/management/internal/cli/commands/RebalanceCommandDistributedTest.java
##
@@ -35,40 +37,91 @@
 import org.apache.geode.test.dunit.rules.MemberVM;
 import org.apache.geode.test.junit.assertions.TabularResultModelAssert;
 import org.apache.geode.test.junit.rules.GfshCommandRule;
+import org.apache.geode.test.junit.rules.MemberStarterRule;
+
+@RunWith(Parameterized.class)
+public class RebalanceCommandDistributedTest {
+  private static final String REGION_ONE_NAME = "region-1";
+  private static final String REGION_TWO_NAME = "region-2";
+  private static final String REGION_THREE_NAME = "region-3";
+
+  @Rule
+  public GfshCommandRule gfsh = new GfshCommandRule();
+
+  @Rule
+  public ClusterStartupRule cluster = new ClusterStartupRule();
 
-@SuppressWarnings("serial")
-public class RebalanceCommandDistributedTestBase {
+  protected MemberVM locator, server1, server2;
 
-  @ClassRule
-  public static ClusterStartupRule cluster = new ClusterStartupRule();
+  @Parameterized.Parameters(name = "ConnectionType:{0}")
+  public static GfshCommandRule.PortType[] connectionTypes() {
+return new GfshCommandRule.PortType[] {http, jmxManager};
+  }
 
-  @ClassRule
-  public static GfshCommandRule gfsh = new GfshCommandRule();
+  @Parameterized.Parameter
+  public static GfshCommandRule.PortType portType;
 
-  protected static MemberVM locator, server1, server2, server3;
+  private void setUpRegions() {
+server1.invoke(() -> {
+  Cache cache = ClusterStartupRule.getCache();
+  assertThat(cache).isNotNull();
+  RegionFactory dataRegionFactory =
+  cache.createRegionFactory(RegionShortcut.PARTITION);
+  Region region = 
dataRegionFactory.create(REGION_ONE_NAME);
+  for (int i = 0; i < 10; i++) {
+region.put("key" + (i + 200), "value" + (i + 200));
+  }
+  region = dataRegionFactory.create(REGION_TWO_NAME);
+  for (int i = 0; i < 100; i++) {
+region.put("key" + (i + 200), "value" + (i + 200));
+  }

Review comment:
   I actually didn't add these tests, and the entries looked a bit weird to me 
as well. I didn't want to change this, but it makes sense; will do it, thanks.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> RebalanceCommand Should Use Daemon Threads
> --
>
> Key: GEODE-8071
> URL: https://issues.apache.org/jira/browse/GEODE-8071
> Project: Geode
>  Issue Type: Bug
>  Components: gfsh, management
>Affects Versions: 1.13.0
>Reporter: Juan Ramos
>Assignee: Juan Ramos
>Priority: Major
>  Labels: caching-applications
>
> The {{RebalanceCommand}} uses a non-daemon thread to execute its internal 
> logic:
> {code:title=RebalanceCommand.java|borderStyle=solid}
> ExecutorService commandExecutors = 
> LoggingExecutors.newSingleThreadExecutor("RebalanceCommand", false);
> {code}
> The above prevents the {{locator}} from shutting down gracefully afterwards:
> {noformat}
> "RebalanceCommand1" #971 prio=5 os_prio=0 tid=0x7f9664011000 nid=0x15905 
> waiting on condition [0x7f9651471000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007308c36e8> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7414) SSL ClientHello server_name extension

2020-05-06 Thread Mario Ivanac (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Ivanac updated GEODE-7414:

Fix Version/s: 1.14.0

> SSL ClientHello server_name extension
> -
>
> Key: GEODE-7414
> URL: https://issues.apache.org/jira/browse/GEODE-7414
> Project: Geode
>  Issue Type: Improvement
>  Components: security
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: needs-review, pull-request-available
> Fix For: 1.14.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {color:#172b4d}We propose to add the {color}*server_name extension to the 
> ClientHello message*{color:#172b4d}. The extension would hold the distributed 
> system ID of the site where the connection originated from.{color}
> {color:#172b4d}This will be used to determine internal geode communication, 
> and communication between geode sites.{color}
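
For illustration only, the JSSE mechanics of attaching a server_name (SNI) value to an outgoing ClientHello look roughly like the sketch below; the host-name string shown is a made-up placeholder, since the ticket does not specify how the distributed system ID is encoded or which Geode class sets it:

{code:java}
import java.util.Collections;
import javax.net.ssl.SNIHostName;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLParameters;

public class ServerNameExtensionSketch {
  // Adds a server_name entry to the parameters used for this engine's handshake.
  public static void tagClientHello(SSLEngine engine) {
    SSLParameters params = engine.getSSLParameters();
    params.setServerNames(
        Collections.singletonList(new SNIHostName("distributed-system-id-1")));
    engine.setSSLParameters(params);
  }
}
{code}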



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (GEODE-7414) SSL ClientHello server_name extension

2020-05-06 Thread Owen Nichols (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen Nichols updated GEODE-7414:

Fix Version/s: (was: 1.12.0)

> SSL ClientHello server_name extension
> -
>
> Key: GEODE-7414
> URL: https://issues.apache.org/jira/browse/GEODE-7414
> Project: Geode
>  Issue Type: Improvement
>  Components: security
>Reporter: Mario Ivanac
>Assignee: Mario Ivanac
>Priority: Major
>  Labels: needs-review, pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {color:#172b4d}We propose to add the {color}*server_name extension to the 
> ClientHello message*{color:#172b4d}. The extension would hold the distributed 
> system ID of the site where the connection originated from.{color}
> {color:#172b4d}This will be used to determine internal geode communication, 
> and communication between geode sites.{color}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (GEODE-8070) add TLSv1.3 to "known" secure communications protocols

2020-05-06 Thread Jacob Barrett (Jira)


[ 
https://issues.apache.org/jira/browse/GEODE-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100502#comment-17100502
 ] 

Jacob Barrett commented on GEODE-8070:
--

I think we would at least want {{SSLContext.getInstance("Default")}} because 
{{SSLContext.getDefault()}} is a singleton.

> add TLSv1.3 to "known" secure communications protocols
> --
>
> Key: GEODE-8070
> URL: https://issues.apache.org/jira/browse/GEODE-8070
> Project: Geode
>  Issue Type: Bug
>  Components: membership
>Reporter: Bruce J Schuchardt
>Priority: Major
>
> SSLUtil has a list of "known" TLS protocols.  It should support TLSv1.3.
>  
> {noformat}
> // lookup known algorithms
> String[] knownAlgorithms = {"SSL", "SSLv2", "SSLv3", "TLS", "TLSv1", "TLSv1.1", "TLSv1.2"};
> for (String algo : knownAlgorithms) {
>   try {
>     sslContext = SSLContext.getInstance(algo);
>     break;
>   } catch (NoSuchAlgorithmException e) {
>     // continue
>   }
> }
> {noformat}
> We probably can't fully test this change since not all JDKs we test with 
> support v1.3 at this time.
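
A hedged sketch of what the change might look like: the quoted lookup loop with "TLSv1.3" appended, plus the {{SSLContext.getInstance("Default")}} fallback suggested in the comment above as one possible last resort (whether the actual fix adds such a fallback is not stated in the ticket):

{code:java}
import java.security.NoSuchAlgorithmException;
import javax.net.ssl.SSLContext;

public class KnownProtocolsSketch {
  public static SSLContext findContext() throws NoSuchAlgorithmException {
    // Same list as quoted above, with TLSv1.3 added.
    String[] knownAlgorithms =
        {"SSL", "SSLv2", "SSLv3", "TLS", "TLSv1", "TLSv1.1", "TLSv1.2", "TLSv1.3"};
    for (String algo : knownAlgorithms) {
      try {
        return SSLContext.getInstance(algo);
      } catch (NoSuchAlgorithmException e) {
        // not supported by this JDK; try the next name
      }
    }
    // Last resort: the pre-initialized default context.
    return SSLContext.getInstance("Default");
  }
}
{code}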



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (GEODE-7990) Update to dependencies

2020-05-06 Thread Jacob Barrett (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-7990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Barrett resolved GEODE-7990.
--
Fix Version/s: 1.14.0
   Resolution: Fixed

> Update to dependencies
> --
>
> Key: GEODE-7990
> URL: https://issues.apache.org/jira/browse/GEODE-7990
> Project: Geode
>  Issue Type: Improvement
>  Components: native client
>Reporter: Jacob Barrett
>Assignee: Jacob Barrett
>Priority: Major
> Fix For: 1.14.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Update dependencies:
> * ACE 6.5.8
> * SQLite 3.31.1
> * Xerces-C 3.2.3



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (GEODE-8076) simplify redis concurrency code

2020-05-06 Thread Darrel Schneider (Jira)


 [ 
https://issues.apache.org/jira/browse/GEODE-8076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Darrel Schneider reassigned GEODE-8076:
---

Assignee: Darrel Schneider

> simplify redis concurrency code
> ---
>
> Key: GEODE-8076
> URL: https://issues.apache.org/jira/browse/GEODE-8076
> Project: Geode
>  Issue Type: Improvement
>  Components: redis
>Reporter: Darrel Schneider
>Assignee: Darrel Schneider
>Priority: Major
>
> Currently, when doing a redis set operation (for example sadd), the code has 
> to be careful to deal with other threads concurrently changing the same set.
> It does this in a number of ways, but it could be simplified by having a 
> higher-level layer of the code ensure that, for a given redis "key", operations 
> will be done in sequential order.
> This can be done safely in a distributed cluster because we now route all 
> operations for a given key to the server that is storing the primary copy of 
> the data for that key.
> A spike was done and we found that this form of locking did not hurt 
> performance. Since it allows simpler code that is less likely to have subtle 
> concurrency issues, we plan on merging the work done in the spike into the 
> product.
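
A hedged sketch of the "one operation at a time per key" idea described above (not the Geode implementation, which may use a different locking primitive): {{ConcurrentHashMap.compute}} already runs its remapping function exclusively per key, so it can serialize same-key operations while leaving different keys fully concurrent:

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

public class PerKeySerializer {
  // One placeholder entry per key currently being operated on.
  private final ConcurrentHashMap<String, Boolean> activeKeys = new ConcurrentHashMap<>();

  public <T> T execute(String key, Supplier<T> operation) {
    AtomicReference<T> result = new AtomicReference<>();
    // compute() holds the per-key lock while the lambda runs, so two calls with
    // the same key never overlap; returning null removes the placeholder again.
    activeKeys.compute(key, (k, ignored) -> {
      result.set(operation.get());
      return null;
    });
    return result.get();
  }
}
{code}

For example, {{serializer.execute("my-set", () -> sadd(member))}} (where {{sadd}} stands in for the actual set-modifying code) would never run concurrently with another operation on the same key on the same primary server, which is the property the description relies on.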



--
This message was sent by Atlassian Jira
(v8.3.4#803005)