[jira] [Updated] (GEODE-7414) SSL ClientHello server_name extension
[ https://issues.apache.org/jira/browse/GEODE-7414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Owen Nichols updated GEODE-7414: Fix Version/s: (was: 1.12.0) > SSL ClientHello server_name extension > - > > Key: GEODE-7414 > URL: https://issues.apache.org/jira/browse/GEODE-7414 > Project: Geode > Issue Type: Improvement > Components: security >Reporter: Mario Ivanac >Assignee: Mario Ivanac >Priority: Major > Labels: needs-review, pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > {color:#172b4d}We propose to add the {color}*server_name extension to the > ClientHello message*{color:#172b4d}. The extension would hold the distributed > system ID of the site where the connection originated from.{color} > {color:#172b4d}This will be used to determine internal geode communication, > and communication between geode sites.{color} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8070) add TLSv1.3 to "known" secure communications protocols
[ https://issues.apache.org/jira/browse/GEODE-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100502#comment-17100502 ] Jacob Barrett commented on GEODE-8070: -- I think we would at least want {{SSLContext.getInstance("Default")}} because {{SSLContext.getDefault()}} is a singleton. > add TLSv1.3 to "known" secure communications protocols > -- > > Key: GEODE-8070 > URL: https://issues.apache.org/jira/browse/GEODE-8070 > Project: Geode > Issue Type: Bug > Components: membership >Reporter: Bruce J Schuchardt >Priority: Major > > SSLUtil has a list of "known" TLS protocols. It should support TLSv1.3. > > {noformat} > // lookup known algorithms > String[] knownAlgorithms = {"SSL", "SSLv2", "SSLv3", "TLS", "TLSv1", > "TLSv1.1", "TLSv1.2"}; > for (String algo : knownAlgorithms) { > try { > sslContext = SSLContext.getInstance(algo); > break; > } catch (NoSuchAlgorithmException e) { > // continue > } > } {noformat} > We probably can't fully test this change since not all JDKs we test with > support v1.3 at this time. -- This message was sent by Atlassian Jira (v8.3.4#803005)
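A minimal sketch of the lookup loop the ticket quotes, with TLSv1.3 added and the {{SSLContext.getInstance("Default")}} fallback Jacob Barrett suggests. The class and method names are illustrative stand-ins, not SSLUtil's actual API, and ordering the list newest-first is a choice of this sketch, not something the ticket specifies:

```java
import java.security.NoSuchAlgorithmException;
import javax.net.ssl.SSLContext;

// Hypothetical sketch; names are illustrative, not Geode's SSLUtil.
class ProtocolLookup {
    static SSLContext findKnownContext() {
        // Newest first (a choice of this sketch), so the strongest
        // supported protocol is tried before the older ones.
        String[] knownAlgorithms =
                {"TLSv1.3", "TLSv1.2", "TLSv1.1", "TLSv1", "TLS", "SSLv3", "SSLv2", "SSL"};
        for (String algo : knownAlgorithms) {
            try {
                return SSLContext.getInstance(algo);
            } catch (NoSuchAlgorithmException e) {
                // this JDK does not support the algorithm; try the next one
            }
        }
        try {
            // "Default" returns a context instance; SSLContext.getDefault()
            // would hand back a shared singleton, which is why the comment
            // on the ticket prefers getInstance("Default") as the fallback.
            return SSLContext.getInstance("Default");
        } catch (NoSuchAlgorithmException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        SSLContext ctx = findKnownContext();
        System.out.println(ctx == null ? "no context" : ctx.getProtocol());
    }
}
```

As the ticket notes, not every JDK under test supports TLSv1.3, which is exactly why the loop falls through to older protocol names rather than failing.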
[jira] [Resolved] (GEODE-7990) Update to dependencies
[ https://issues.apache.org/jira/browse/GEODE-7990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jacob Barrett resolved GEODE-7990. -- Fix Version/s: 1.14.0 Resolution: Fixed > Update to dependencies > -- > > Key: GEODE-7990 > URL: https://issues.apache.org/jira/browse/GEODE-7990 > Project: Geode > Issue Type: Improvement > Components: native client >Reporter: Jacob Barrett >Assignee: Jacob Barrett >Priority: Major > Fix For: 1.14.0 > > Time Spent: 10m > Remaining Estimate: 0h > > Update dependencies: > * ACE 6.5.8 > * SQLite 3.31.1 > * Xerces-C 3.2.3 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (GEODE-8076) simplify redis concurrency code
[ https://issues.apache.org/jira/browse/GEODE-8076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Darrel Schneider reassigned GEODE-8076: --- Assignee: Darrel Schneider > simplify redis concurrency code > --- > > Key: GEODE-8076 > URL: https://issues.apache.org/jira/browse/GEODE-8076 > Project: Geode > Issue Type: Improvement > Components: redis >Reporter: Darrel Schneider >Assignee: Darrel Schneider >Priority: Major > > Currently when doing a redis set operation, for example sadd, the code has to > be careful to deal with other threads concurrently changing the same set. > It does this in a number of ways, but this could be simplified by having a > higher-level layer of the code ensure that for a given redis "key", operations > will be done in sequential order. > This can be done safely in a distributed cluster because we now route all > operations for a given key to the server that is storing the primary copy of > data for that key. > A spike was done and we found that this form of locking did not hurt > performance. Since it allows simpler code that is less likely to have subtle > concurrency issues, we plan on merging the work done in the spike into the > product. -- This message was sent by Atlassian Jira (v8.3.4#803005)
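The higher-level layer the ticket describes, which makes all operations on a given redis key run in sequential order, can be sketched with a per-key monitor. The class and method names below are my own stand-ins, not Geode's actual code:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Illustrative sketch (not Geode's implementation): every operation on the
// same redis key runs under the same monitor, so sadd/srem/etc. on one key
// never interleave, and lower-level code needs no concurrency handling.
class PerKeySerializer {
    private final ConcurrentHashMap<String, Object> monitors = new ConcurrentHashMap<>();

    <T> T execute(String key, Supplier<T> operation) {
        // computeIfAbsent is atomic, so concurrent callers for the same
        // key always receive the same monitor object
        Object monitor = monitors.computeIfAbsent(key, k -> new Object());
        synchronized (monitor) {
            return operation.get();
        }
    }
}
```

Because all operations for a key are routed to the server holding the primary copy of that key's data, a per-key monitor on that one server is enough to give cluster-wide sequential order.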
[jira] [Commented] (GEODE-7414) SSL ClientHello server_name extension
[ https://issues.apache.org/jira/browse/GEODE-7414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100456#comment-17100456 ] ASF subversion and git services commented on GEODE-7414: Commit 5f9800b291d3a11140eb5a1a972459207e07b50c in geode's branch refs/heads/develop from Mario Ivanac [ https://gitbox.apache.org/repos/asf?p=geode.git;h=5f9800b ] GEODE-7414_2: modify init() method argument (#5040) > SSL ClientHello server_name extension > - > > Key: GEODE-7414 > URL: https://issues.apache.org/jira/browse/GEODE-7414 > Project: Geode > Issue Type: Improvement > Components: security >Reporter: Mario Ivanac >Assignee: Mario Ivanac >Priority: Major > Labels: needs-review, pull-request-available > Fix For: 1.12.0 > > Time Spent: 1h > Remaining Estimate: 0h > > {color:#172b4d}We propose to add the {color}*server_name extension to the > ClientHello message*{color:#172b4d}. The extension would hold the distributed > system ID of the site where the connection originated from.{color} > {color:#172b4d}This will be used to determine internal geode communication, > and communication between geode sites.{color} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8072) When cache is closing, the lucene query might still on-going, some NPE could happen
[ https://issues.apache.org/jira/browse/GEODE-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100354#comment-17100354 ] ASF subversion and git services commented on GEODE-8072: Commit 536910a62e6a5c0d6f46d2f42f467aa41ed40dc0 in geode's branch refs/heads/develop from Xiaojian Zhou [ https://gitbox.apache.org/repos/asf?p=geode.git;h=536910a ] GEODE-8072: check the null and stop the on-going query function when … (#5053) * GEODE-8072: check the null and stop the on-going query function when cache is closing > When cache is closing, the lucene query might still on-going, some NPE could > happen > --- > > Key: GEODE-8072 > URL: https://issues.apache.org/jira/browse/GEODE-8072 > Project: Geode > Issue Type: Improvement >Reporter: Xiaojian Zhou >Assignee: Xiaojian Zhou >Priority: Major > Fix For: 1.14.0 > > > When the cache is closing, what was detected recently is: > ERROR util.TestException: Got unexpected exception > java.lang.NullPointerException > at > org.apache.geode.internal.cache.execute.InternalFunctionExecutionServiceImpl.onRegion(InternalFunctionExecutionServiceImpl.java:120) > at > org.apache.geode.cache.execute.FunctionService.onRegion(FunctionService.java:76) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.onRegion(PageableLuceneQueryResultsImpl.java:116) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.getValues(PageableLuceneQueryResultsImpl.java:110) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.getHitEntries(PageableLuceneQueryResultsImpl.java:91) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.advancePage(PageableLuceneQueryResultsImpl.java:139) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.hasNext(PageableLuceneQueryResultsImpl.java:148) > It's not caused by any recent code changes; it's just a deeply buried race > condition being triggered. 
> I propose a simple fix: check for null and throw an exception that > can be handled. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (GEODE-8072) When cache is closing, the lucene query might still on-going, some NPE could happen
[ https://issues.apache.org/jira/browse/GEODE-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaojian Zhou resolved GEODE-8072. -- Fix Version/s: 1.14.0 Assignee: Xiaojian Zhou Resolution: Fixed > When cache is closing, the lucene query might still on-going, some NPE could > happen > --- > > Key: GEODE-8072 > URL: https://issues.apache.org/jira/browse/GEODE-8072 > Project: Geode > Issue Type: Improvement >Reporter: Xiaojian Zhou >Assignee: Xiaojian Zhou >Priority: Major > Fix For: 1.14.0 > > > When the cache is closing, what was detected recently is: > ERROR util.TestException: Got unexpected exception > java.lang.NullPointerException > at > org.apache.geode.internal.cache.execute.InternalFunctionExecutionServiceImpl.onRegion(InternalFunctionExecutionServiceImpl.java:120) > at > org.apache.geode.cache.execute.FunctionService.onRegion(FunctionService.java:76) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.onRegion(PageableLuceneQueryResultsImpl.java:116) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.getValues(PageableLuceneQueryResultsImpl.java:110) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.getHitEntries(PageableLuceneQueryResultsImpl.java:91) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.advancePage(PageableLuceneQueryResultsImpl.java:139) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.hasNext(PageableLuceneQueryResultsImpl.java:148) > It's not caused by any recent code changes; it's just a deeply buried race > condition being triggered. > I propose a simple fix: check for null and throw an exception that > can be handled. -- This message was sent by Atlassian Jira (v8.3.4#803005)
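The fix the ticket proposes, check for null and raise an exception the caller can handle instead of surfacing an NPE, can be sketched as follows. The types below are self-contained stand-ins mirroring the shape of the fix, not Geode's actual Lucene classes:

```java
// Stand-in exception type, mirroring the role of Geode's CacheClosedException.
class CacheClosedException extends RuntimeException {
    CacheClosedException(String message) { super(message); }
}

// Stand-in for the pageable-results object that paged through query hits.
class PageableResults {
    private final Object functionService; // becomes null while the cache closes

    PageableResults(Object functionService) { this.functionService = functionService; }

    Object advancePage() {
        // The fix: instead of dereferencing null deep inside onRegion(),
        // stop the on-going query with an exception callers can handle.
        if (functionService == null) {
            throw new CacheClosedException("Cache is closing; stopping the query");
        }
        return functionService;
    }
}
```

The point of the pattern is that a race with cache close turns into a well-defined, catchable exception rather than a NullPointerException seven frames down the stack.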
[jira] [Commented] (GEODE-8073) NullPointerException thrown in PartitionedRegion.handleOldNodes
[ https://issues.apache.org/jira/browse/GEODE-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100347#comment-17100347 ] ASF GitHub Bot commented on GEODE-8073: --- lgtm-com[bot] commented on pull request #5055: URL: https://github.com/apache/geode/pull/5055#issuecomment-624373530 This pull request **fixes 1 alert** when merging 694e7ff4a14c7d5852acf014484ccaa205ec39f0 into 7ee1042a8393563b4d7655b8bc2d4a77564b91b5 - [view on LGTM.com](https://lgtm.com/projects/g/apache/geode/rev/pr-0dcdc4e83eb79a3f8c32dab6739b4a8ae4e7df6d) **fixed alerts:** * 1 for Dereferenced variable may be null This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > NullPointerException thrown in PartitionedRegion.handleOldNodes > --- > > Key: GEODE-8073 > URL: https://issues.apache.org/jira/browse/GEODE-8073 > Project: Geode > Issue Type: Bug > Components: regions >Reporter: Eric Shu >Assignee: Eric Shu >Priority: Major > Labels: caching-applications > > The NPE can be thrown when a remote node is gone unexpectedly. 
> {noformat} > Caused by: java.lang.NullPointerException > at > org.apache.geode.internal.cache.PartitionedRegion.handleOldNodes(PartitionedRegion.java:4610) > at > org.apache.geode.internal.cache.PartitionedRegion.fetchEntries(PartitionedRegion.java:4689) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVKeysPR(BaseCommand.java:1191) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVAllKeys(BaseCommand.java:1124) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKeysValuesPolicy(BaseCommand.java:973) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.fillAndSendRegisterInterestResponseChunks(BaseCommand.java:905) > at > org.apache.geode.internal.cache.tier.sockets.command.RegisterInterest61.cmdExecute(RegisterInterest61.java:260) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:183) > at > org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMessage(ServerConnection.java:848) > at > org.apache.geode.internal.cache.tier.sockets.OriginalServerConnection.doOneMessage(OriginalServerConnection.java:72) > at > org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1212) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at > org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.lambda$initializeServerConnectionThreadPool$3(AcceptorImpl.java:686) > at > org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119) > at java.lang.Thread.run(Thread.java:748) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (GEODE-8073) NullPointerException thrown in PartitionedRegion.handleOldNodes
[ https://issues.apache.org/jira/browse/GEODE-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nabarun Nag resolved GEODE-8073. Resolution: Fixed > NullPointerException thrown in PartitionedRegion.handleOldNodes > --- > > Key: GEODE-8073 > URL: https://issues.apache.org/jira/browse/GEODE-8073 > Project: Geode > Issue Type: Bug > Components: regions >Reporter: Eric Shu >Assignee: Eric Shu >Priority: Major > Labels: caching-applications > > The NPE can be thrown when a remote node is gone unexpectedly. > {noformat} > Caused by: java.lang.NullPointerException > at > org.apache.geode.internal.cache.PartitionedRegion.handleOldNodes(PartitionedRegion.java:4610) > at > org.apache.geode.internal.cache.PartitionedRegion.fetchEntries(PartitionedRegion.java:4689) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVKeysPR(BaseCommand.java:1191) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVAllKeys(BaseCommand.java:1124) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKeysValuesPolicy(BaseCommand.java:973) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.fillAndSendRegisterInterestResponseChunks(BaseCommand.java:905) > at > org.apache.geode.internal.cache.tier.sockets.command.RegisterInterest61.cmdExecute(RegisterInterest61.java:260) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:183) > at > org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMessage(ServerConnection.java:848) > at > org.apache.geode.internal.cache.tier.sockets.OriginalServerConnection.doOneMessage(OriginalServerConnection.java:72) > at > org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1212) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at > 
org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.lambda$initializeServerConnectionThreadPool$3(AcceptorImpl.java:686) > at > org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119) > at java.lang.Thread.run(Thread.java:748) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8016) Replace Maven SNAPSHOT with enumerated build-id artifacts
[ https://issues.apache.org/jira/browse/GEODE-8016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100345#comment-17100345 ] ASF GitHub Bot commented on GEODE-8016: --- onichols-pivotal commented on a change in pull request #5057: URL: https://github.com/apache/geode/pull/5057#discussion_r420478753 ## File path: ci/scripts/execute_build_examples.sh ## @@ -17,7 +17,7 @@ # See the License for the specific language governing permissions and # limitations under the License. -set -e +set -ex Review comment: just to confirm, was this change intended, or "debugging" that accidentally got left in? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Replace Maven SNAPSHOT with enumerated build-id artifacts > - > > Key: GEODE-8016 > URL: https://issues.apache.org/jira/browse/GEODE-8016 > Project: Geode > Issue Type: Task > Components: build, ci >Reporter: Robert Houghton >Assignee: Robert Houghton >Priority: Major > > To better support repeatable builds in CI, publish artifacts in the form > `1.2.3-build.123` instead of `1.2.3-SNAPSHOT` with the SNAPSHOT dynamically > changing. As an example, the `geode-examples` pipeline would be able to grab > a distinct artifact for build-and-test, instead of an unrepeatable, invisibly > rolling `SNAPSHOT`. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (GEODE-8078) Exceptions in locator logs when hitting members REST endpoint
[ https://issues.apache.org/jira/browse/GEODE-8078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Lindsey updated GEODE-8078: - Description: I'm seeing the following exceptions in locator logs when I try to hit the REST endpoint /management/v1/members/\{id} before the member has finished starting up. The reason I need to do this is because I have a program that is polling that endpoint to wait until the member is online. Ideally these errors would not show up in logs, but instead be reflected in the status code of the REST response. {quote}[error 2020/04/06 22:05:59.086 UTC tid=0x31] class org.apache.geode.cache.CacheClosedException cannot be cast to class org.apache.geode.management.runtime.RuntimeInfo (org.apache.geode.cache.CacheClosedException and org.apache.geode.management.runtime.RuntimeInfo are in unnamed module of loader 'app') java.lang.ClassCastException: class org.apache.geode.cache.CacheClosedException cannot be cast to class org.apache.geode.management.runtime.RuntimeInfo (org.apache.geode.cache.CacheClosedException and org.apache.geode.management.runtime.RuntimeInfo are in unnamed module of loader 'app') at org.apache.geode.management.internal.api.LocatorClusterManagementService.list(LocatorClusterManagementService.java:417) at org.apache.geode.management.internal.api.LocatorClusterManagementService.get(LocatorClusterManagementService.java:434) at org.apache.geode.management.internal.rest.controllers.MemberManagementController.getMember(MemberManagementController.java:50) at org.apache.geode.management.internal.rest.controllers.MemberManagementController$$FastClassBySpringCGLIB$$3634e452.invoke() at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:769) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) at 
org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:747) at org.springframework.security.access.intercept.aopalliance.MethodSecurityInterceptor.invoke(MethodSecurityInterceptor.java:69) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:747) at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689) at org.apache.geode.management.internal.rest.controllers.MemberManagementController$$EnhancerBySpringCGLIB$$2893b195.getMember() at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:190) at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:138) at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:106) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:888) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:793) at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1040) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:943) at 
org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898) at javax.servlet.http.HttpServlet.service(HttpServlet.java:687) at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:760) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1617) at org.apache.geode.management.internal.rest.ManagementLoggingFilter.doFilterInternal(ManagementLoggingFilter.java:44)
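The ClassCastException in the log arises because a function result is cast to RuntimeInfo even when a member that is still starting up answers with a CacheClosedException instead. One defensive shape, checking the result's type before casting so the REST layer can map the failure to a status code rather than log an exception, is sketched below; all names here are stand-ins, not Geode's actual classes:

```java
// Stand-in types illustrating type-check-before-cast; not Geode's API.
class RuntimeInfo {
    final String status;
    RuntimeInfo(String status) { this.status = status; }
}

class MemberEndpoint {
    // Returns the member's runtime info, or null when the member answered
    // with an exception (e.g. its cache is not ready yet), letting the
    // caller turn null into a REST status instead of a ClassCastException.
    static RuntimeInfo infoOrNull(Object functionResult) {
        if (functionResult instanceof RuntimeInfo) {
            return (RuntimeInfo) functionResult;
        }
        if (functionResult instanceof Throwable) {
            // member is not ready; surface as "unavailable" rather than cast
            return null;
        }
        return null;
    }
}
```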
[jira] [Updated] (GEODE-8078) Exceptions in locator logs when hitting members REST endpoint
[ https://issues.apache.org/jira/browse/GEODE-8078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Lindsey updated GEODE-8078: - Description: I'm seeing the following exceptions in locator logs when I try to hit the REST endpoint /management/v1/members/\{id} before the member has finished starting up. The reason I need to do this is because I have a program that is polling that endpoint to wait until the member is online. Ideally these errors would not show up in logs, but instead be reflected in the status code of the REST response. {{ [error 2020/04/06 22:05:59.086 UTC tid=0x31] class org.apache.geode.cache.CacheClosedException cannot be cast to class org.apache.geode.management.runtime.RuntimeInfo (org.apache.geode.cache.CacheClosedException and org.apache.geode.management.runtime.RuntimeInfo are in unnamed module of loader 'app')}} {{ java.lang.ClassCastException: class org.apache.geode.cache.CacheClosedException cannot be cast to class org.apache.geode.management.runtime.RuntimeInfo (org.apache.geode.cache.CacheClosedException and org.apache.geode.management.runtime.RuntimeInfo are in unnamed module of loader 'app')}} {{ at org.apache.geode.management.internal.api.LocatorClusterManagementService.list(LocatorClusterManagementService.java:417)}} {{ at org.apache.geode.management.internal.api.LocatorClusterManagementService.get(LocatorClusterManagementService.java:434)}} {{ at org.apache.geode.management.internal.rest.controllers.MemberManagementController.getMember(MemberManagementController.java:50)}} {{ at org.apache.geode.management.internal.rest.controllers.MemberManagementController$$FastClassBySpringCGLIB$$3634e452.invoke()}} {{ at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)}} {{ at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:769)}} {{ at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)}} {{ at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:747)}} {{ at org.springframework.security.access.intercept.aopalliance.MethodSecurityInterceptor.invoke(MethodSecurityInterceptor.java:69)}} {{ at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)}} {{ at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:747)}} {{ at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689)}} {{ at org.apache.geode.management.internal.rest.controllers.MemberManagementController$$EnhancerBySpringCGLIB$$2893b195.getMember()}} {{ at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)}} {{ at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)}} {{ at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)}} {{ at java.base/java.lang.reflect.Method.invoke(Method.java:566)}} {{ at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:190)}} {{ at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:138)}} {{ at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:106)}} {{ at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:888)}} {{ at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:793)}} {{ at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)}} 
{{ at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1040)}} {{ at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:943)}} {{ at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006)}} {{ at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898)}} {{ at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)}} {{ at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883)}} {{ at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)}} {{ at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:760)}} {{ at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1617)}} {{ at org.apache.geode.management.internal.rest.ManagementLoggingFilter.doFilterInternal(ManagementLoggingFilter.java:44)}} {{ at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119)}} {
[jira] [Created] (GEODE-8078) Exceptions in locator logs when hitting members REST endpoint
Aaron Lindsey created GEODE-8078: Summary: Exceptions in locator logs when hitting members REST endpoint Key: GEODE-8078 URL: https://issues.apache.org/jira/browse/GEODE-8078 Project: Geode Issue Type: Bug Components: management Reporter: Aaron Lindsey I'm seeing the following exceptions in locator logs when I hit the REST endpoint /management/v1/members/{id} before the member has finished starting up. I need to do this because I have a program that polls that endpoint to wait until the member is online. Ideally these errors would not show up in the logs, but would instead be reflected in the status code of the REST response.
{noformat}
[error 2020/04/06 22:05:59.086 UTC tid=0x31] class org.apache.geode.cache.CacheClosedException cannot be cast to class org.apache.geode.management.runtime.RuntimeInfo (org.apache.geode.cache.CacheClosedException and org.apache.geode.management.runtime.RuntimeInfo are in unnamed module of loader 'app')
java.lang.ClassCastException: class org.apache.geode.cache.CacheClosedException cannot be cast to class org.apache.geode.management.runtime.RuntimeInfo (org.apache.geode.cache.CacheClosedException and org.apache.geode.management.runtime.RuntimeInfo are in unnamed module of loader 'app')
	at org.apache.geode.management.internal.api.LocatorClusterManagementService.list(LocatorClusterManagementService.java:417)
	at org.apache.geode.management.internal.api.LocatorClusterManagementService.get(LocatorClusterManagementService.java:434)
	at org.apache.geode.management.internal.rest.controllers.MemberManagementController.getMember(MemberManagementController.java:50)
	at org.apache.geode.management.internal.rest.controllers.MemberManagementController$$FastClassBySpringCGLIB$$3634e452.invoke()
	at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
	at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:769)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
	at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:747)
	at org.springframework.security.access.intercept.aopalliance.MethodSecurityInterceptor.invoke(MethodSecurityInterceptor.java:69)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
	at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:747)
	at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689)
	at org.apache.geode.management.internal.rest.controllers.MemberManagementController$$EnhancerBySpringCGLIB$$2893b195.getMember()
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:190)
	at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:138)
	at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:106)
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:888)
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:793)
	at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
	at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1040)
	at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:943)
	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006)
	at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
	at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
	at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:760)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1617)
	at org.apache.geode.management.internal.rest.Managemen
{noformat}
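A hypothetical sketch of the kind of defensive check the report implies: instead of casting a function result straight to {{RuntimeInfo}}, inspect it and map a Throwable (such as the {{CacheClosedException}} above) to an error status. The class name, method, and status codes below are illustrative only, not the actual Geode fix.

```java
// Hypothetical sketch (not the actual Geode change): map a function result to
// an HTTP status instead of casting it blindly to RuntimeInfo.
public class MemberResultMapper {

  static int statusFor(Object functionResult) {
    if (functionResult instanceof Throwable) {
      // e.g. a CacheClosedException returned while the member is still
      // starting: report 503 so a polling client can simply retry.
      return 503;
    }
    // anything else is assumed to be a real RuntimeInfo payload
    return 200;
  }
}
```

A program polling /management/v1/members/{id} could then treat 503 as "not up yet" rather than having the failure surface only as a ClassCastException in the locator log.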
[jira] [Commented] (GEODE-8016) Replace Maven SNAPSHOT with enumerated build-id artifacts
[ https://issues.apache.org/jira/browse/GEODE-8016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100325#comment-17100325 ] ASF GitHub Bot commented on GEODE-8016: --- dickcav commented on a change in pull request #5057: URL: https://github.com/apache/geode/pull/5057#discussion_r420470789 ## File path: gradle.properties ## @@ -22,13 +22,11 @@ # The releaseQualifier uses the following conventions: Review comment: We don't use these other releaseQualifiers so can we remove them? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Replace Maven SNAPSHOT with enumerated build-id artifacts > - > > Key: GEODE-8016 > URL: https://issues.apache.org/jira/browse/GEODE-8016 > Project: Geode > Issue Type: Task > Components: build, ci >Reporter: Robert Houghton >Assignee: Robert Houghton >Priority: Major > > To better support repeatable builds in CI, publish artifacts in the form > `1.2.3-build.123` instead of `1.2.3-SNAPSHOT` with the SNAPSHOT dynamically > changing. As an example, the `geode-examples` pipeline would be able to grab > a distinct artifact for build-and-test, instead of an unrepeatable, invisibly > rolling `SNAPSHOT`. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8016) Replace Maven SNAPSHOT with enumerated build-id artifacts
[ https://issues.apache.org/jira/browse/GEODE-8016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100323#comment-17100323 ] ASF GitHub Bot commented on GEODE-8016: --- dickcav commented on a change in pull request #5057: URL: https://github.com/apache/geode/pull/5057#discussion_r420470363 ## File path: ci/scripts/shared_utilities.sh ## @@ -31,7 +31,7 @@ find-here-test-reports() { } ## Parsing functions for the Concourse Semver resource. -## These functions expect one input in the form of the resource file, e.g., "1.9.0-SNAPSHOT.325" +## These functions expect one input in the form of the resource file, e.g., "1.9.0-build.325" Review comment: Can we use 1.14.0 here instead of an old version? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Replace Maven SNAPSHOT with enumerated build-id artifacts > - > > Key: GEODE-8016 > URL: https://issues.apache.org/jira/browse/GEODE-8016 > Project: Geode > Issue Type: Task > Components: build, ci >Reporter: Robert Houghton >Assignee: Robert Houghton >Priority: Major > > To better support repeatable builds in CI, publish artifacts in the form > `1.2.3-build.123` instead of `1.2.3-SNAPSHOT` with the SNAPSHOT dynamically > changing. As an example, the `geode-examples` pipeline would be able to grab > a distinct artifact for build-and-test, instead of an unrepeatable, invisibly > rolling `SNAPSHOT`. -- This message was sent by Atlassian Jira (v8.3.4#803005)
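The `1.2.3-build.123` scheme described in the ticket is easy to parse mechanically; a minimal sketch (the class and method names are mine, not part of the Geode build scripts):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal parser for the proposed build-id version scheme, e.g. "1.14.0-build.325".
public class BuildIdVersion {
  private static final Pattern FORMAT =
      Pattern.compile("^(\\d+\\.\\d+\\.\\d+)-build\\.(\\d+)$");

  static String baseVersion(String v) {
    Matcher m = FORMAT.matcher(v);
    if (!m.matches()) {
      throw new IllegalArgumentException("not a build-id version: " + v);
    }
    return m.group(1); // the semantic version, e.g. "1.14.0"
  }

  static int buildNumber(String v) {
    Matcher m = FORMAT.matcher(v);
    if (!m.matches()) {
      throw new IllegalArgumentException("not a build-id version: " + v);
    }
    return Integer.parseInt(m.group(2)); // the CI build id, e.g. 325
  }
}
```

Because the build id is an explicit, monotonically increasing number, a downstream pipeline such as `geode-examples` can pin an exact artifact instead of resolving a rolling SNAPSHOT.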
[jira] [Updated] (GEODE-8077) Logging to Standard Out
[ https://issues.apache.org/jira/browse/GEODE-8077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Lindsey updated GEODE-8077: - Description: The description below is from RFC [Logging to Standard Out|https://cwiki.apache.org/confluence/display/GEODE/Logging+to+Standard+Out] {quote} h2. Problem Currently logging to stdout is not consistent between client, server and locator. If {{log-file}} is {{null}} on a client then it will log to stdout by default, but on servers and locators it will log to a file named after the member. Setting the {{log-file}} to {{""}} (empty string) on the server will result in logging to stdout, but on a locator it is treated like the {{null}} case and logs to a file. The only way to get the locator to log to stdout is to override the log4j.xml file. h3. Anti-Goals Do not change the current default behavior in client, server, or locators when handling {{null}} or {{""}} (empty string). h2. Solution Introduce a new {{log-file}} value of "-" (dash) to indicate standard out, which is a common convention across most applications. When the logger is configured and the {{log-file}} value is {{"-"}} then the logger will log to standard out and not to any files. h3. Changes and Additions to Public Interface Changes will be needed in documentation to reference this new value for logging to standard out. h3. Performance Impact As no changes will be made to logging itself there is no impact to performance. h3. Backwards Compatibility and Upgrade Path Since no changes are being made to the current behaviors there should be no impact to rolling upgrades and backwards compatibility. {quote} was: The description below is from RFC [Logging to Standard Out |[https://cwiki.apache.org/confluence/display/GEODE/Logging+to+Standard+Out]] {quote} h2. Problem Currently logging to stdout is not consistent between client, server and locator. 
If {{log-file}} is {{null}} on a client then it will log to stdout by default, but on servers and locators it will log to a file named after the member. Setting the {{log-file}} to {{""}} (empty string) on the server will result in logging to stdout, but on a locator it is treated like the {{null}} case and logs to a file. The only way get the locator to log to stdout is to override the log4j.xml file. h3. Anti-Goals Do not change the current default behavior in client, server, or locators when handling {{null}} or {{""}} (empty string). h2. Solution Introduce a new value {{log-file}} of "-" (dash) to indicate standard out, which is a common standard across most applications. When the logger is configured and thee {{log-file}} value is {{""}} then the logger will log to standard out and not to any files. h3. Changes and Additions to Public Interface Changes will be needed in documentation to reference this new value for logging to standard out. h3. Performance Impact As no changes will be made to logging itself there is not impact to performance. h3. Backwards Compatibility and Upgrade Path Since no changes are being made to the current behaviors there should be no impact to rolling upgrades and backwards compatibility. {quote} > Logging to Standard Out > --- > > Key: GEODE-8077 > URL: https://issues.apache.org/jira/browse/GEODE-8077 > Project: Geode > Issue Type: Improvement > Components: logging >Reporter: Aaron Lindsey >Priority: Major > > The description below is from RFC [Logging to Standard > Out|https://cwiki.apache.org/confluence/display/GEODE/Logging+to+Standard+Out] > {quote} > h2. Problem > Currently logging to stdout is not consistent between client, server and > locator. If {{log-file}} is {{null}} on a client then it will log to stdout > by default, but on servers and locators it will log to a file named after the > member. 
Setting the {{log-file}} to {{""}} (empty string) on the server will > result in logging to stdout, but on a locator it is treated like the {{null}} > case and logs to a file. The only way to get the locator to log to stdout is to > override the log4j.xml file. > h3. Anti-Goals > Do not change the current default behavior in client, server, or locators > when handling {{null}} or {{""}} (empty string). > h2. Solution > Introduce a new {{log-file}} value of "-" (dash) to indicate standard out, > which is a common convention across most applications. When the logger is > configured and the {{log-file}} value is {{"-"}} then the logger will log to > standard out and not to any files. > h3. Changes and Additions to Public Interface > Changes will be needed in documentation to reference this new value for > logging to standard out. > h3. Performance Impact > As no changes will be made to logging itself there is no impact to > performance. > h3. Backwards Compatibility and Upgrade Path > Since no changes are being made
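The member-dependent behavior the RFC describes, plus the proposed "-" value, can be summarized in a small decision table. The sketch below is purely illustrative; the enum and method names are mine, not Geode's actual configuration API.

```java
// Illustrative summary of the log-file behavior described in the RFC above,
// including the proposed "-" (dash) value. Not Geode's actual API.
public class LogFileResolver {
  enum Member { CLIENT, SERVER, LOCATOR }
  enum Target { STDOUT, FILE }

  static Target resolve(String logFile, Member member) {
    if ("-".equals(logFile)) {
      return Target.STDOUT; // proposed: "-" always means standard out
    }
    if (logFile == null) {
      // current: clients default to stdout, servers and locators to a member file
      return member == Member.CLIENT ? Target.STDOUT : Target.FILE;
    }
    if (logFile.isEmpty()) {
      // current: "" logs to stdout on a server but to a file on a locator
      return member == Member.LOCATOR ? Target.FILE : Target.STDOUT;
    }
    return Target.FILE; // an explicit log-file path
  }
}
```

The dash value removes the inconsistency: it resolves to stdout for every member type, without touching the existing {{null}} and {{""}} behaviors (the stated anti-goal).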
[jira] [Created] (GEODE-8077) Logging to Standard Out
Aaron Lindsey created GEODE-8077: Summary: Logging to Standard Out Key: GEODE-8077 URL: https://issues.apache.org/jira/browse/GEODE-8077 Project: Geode Issue Type: Improvement Components: logging Reporter: Aaron Lindsey The description below is from RFC [Logging to Standard Out|https://cwiki.apache.org/confluence/display/GEODE/Logging+to+Standard+Out] {quote} h2. Problem Currently logging to stdout is not consistent between client, server and locator. If {{log-file}} is {{null}} on a client then it will log to stdout by default, but on servers and locators it will log to a file named after the member. Setting the {{log-file}} to {{""}} (empty string) on the server will result in logging to stdout, but on a locator it is treated like the {{null}} case and logs to a file. The only way to get the locator to log to stdout is to override the log4j.xml file. h3. Anti-Goals Do not change the current default behavior in client, server, or locators when handling {{null}} or {{""}} (empty string). h2. Solution Introduce a new {{log-file}} value of {{"-"}} (dash) to indicate standard out, which is a common convention across most applications. When the logger is configured and the {{log-file}} value is {{"-"}} then the logger will log to standard out and not to any files. h3. Changes and Additions to Public Interface Changes will be needed in documentation to reference this new value for logging to standard out. h3. Performance Impact As no changes will be made to logging itself there is no impact to performance. h3. Backwards Compatibility and Upgrade Path Since no changes are being made to the current behaviors there should be no impact to rolling upgrades and backwards compatibility. {quote} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8073) NullPointerException thrown in PartitionedRegion.handleOldNodes
[ https://issues.apache.org/jira/browse/GEODE-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100321#comment-17100321 ] ASF GitHub Bot commented on GEODE-8073: --- pivotal-eshu commented on a change in pull request #5055: URL: https://github.com/apache/geode/pull/5055#discussion_r420467289
## File path: geode-core/src/test/java/org/apache/geode/internal/cache/PartitionedRegionTest.java ##
@@ -573,6 +578,50 @@ public void transactionThrowsTransactionDataRebalancedExceptionIfIsAForceReattem
         .hasMessage(PartitionedRegion.DATA_MOVED_BY_REBALANCE).hasCause(exception);
   }

+  @Test
+  public void failuresSavedIfFetchKeysThrows() throws Exception {
+    PartitionedRegion spyPartitionedRegion = spy(partitionedRegion);
+
+    VersionedObjectList values = mock(VersionedObjectList.class);
+    ServerConnection serverConnection = mock(ServerConnection.class);
+    Set failures = mock(Set.class);
+    InternalDistributedMember member = mock(InternalDistributedMember.class);
+    Set buckets = new HashSet<>();
+    buckets.add(1);
+    doThrow(new ForceReattemptException("")).when(spyPartitionedRegion).getFetchKeysResponse(member, 1);
+
+    spyPartitionedRegion.fetchKeysAndValues(values, serverConnection, failures, member, null, buckets);
+
+    verify(failures).add(1);
+    verify(spyPartitionedRegion, never()).getValuesForKeys(values, serverConnection, null);
+  }
+
+  @Test
+  public void fetchKeysAndValuesInvokesGetValuesForKeys() throws Exception {
+    PartitionedRegion spyPartitionedRegion = spy(partitionedRegion);
+
+    VersionedObjectList values = mock(VersionedObjectList.class);
+    ServerConnection serverConnection = mock(ServerConnection.class);
+    Set failures = mock(Set.class);
Review comment: Thanks for the explanation. Change has been made. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
> NullPointerException thrown in PartitionedRegion.handleOldNodes
> ---
> Key: GEODE-8073
> URL: https://issues.apache.org/jira/browse/GEODE-8073
> Project: Geode
> Issue Type: Bug
> Components: regions
> Reporter: Eric Shu
> Assignee: Eric Shu
> Priority: Major
> Labels: caching-applications
>
> The NPE can be thrown when a remote node is gone unexpectedly.
> {noformat}
> Caused by: java.lang.NullPointerException
>   at org.apache.geode.internal.cache.PartitionedRegion.handleOldNodes(PartitionedRegion.java:4610)
>   at org.apache.geode.internal.cache.PartitionedRegion.fetchEntries(PartitionedRegion.java:4689)
>   at org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVKeysPR(BaseCommand.java:1191)
>   at org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVAllKeys(BaseCommand.java:1124)
>   at org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKeysValuesPolicy(BaseCommand.java:973)
>   at org.apache.geode.internal.cache.tier.sockets.BaseCommand.fillAndSendRegisterInterestResponseChunks(BaseCommand.java:905)
>   at org.apache.geode.internal.cache.tier.sockets.command.RegisterInterest61.cmdExecute(RegisterInterest61.java:260)
>   at org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:183)
>   at org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMessage(ServerConnection.java:848)
>   at org.apache.geode.internal.cache.tier.sockets.OriginalServerConnection.doOneMessage(OriginalServerConnection.java:72)
>   at org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1212)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.lambda$initializeServerConnectionThreadPool$3(AcceptorImpl.java:686)
>   at org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8073) NullPointerException thrown in PartitionedRegion.handleOldNodes
[ https://issues.apache.org/jira/browse/GEODE-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100307#comment-17100307 ] ASF GitHub Bot commented on GEODE-8073: --- DonalEvans commented on a change in pull request #5055: URL: https://github.com/apache/geode/pull/5055#discussion_r420457715 ## File path: geode-core/src/main/java/org/apache/geode/internal/cache/PartitionedRegion.java ## @@ -4593,39 +4593,58 @@ void updateNodeToBucketMap( buckets = bucketKeys.keySet(); } - for (Integer bucket : buckets) { -Set keys = null; -if (bucketKeys == null) { - try { -FetchKeysResponse fkr = FetchKeysMessage.send(member, this, bucket, true); -keys = fkr.waitForKeys(); - } catch (ForceReattemptException ignore) { -failures.add(bucket); - } -} else { - keys = bucketKeys.get(bucket); + fetchKeysAndValues(values, servConn, failures, member, bucketKeys, buckets); +} +return failures; + } + + void fetchKeysAndValues(VersionedObjectList values, ServerConnection servConn, + Set failures, InternalDistributedMember member, + HashMap bucketKeys, Set buckets) + throws IOException { +for (Integer bucket : buckets) { + Set keys = null; + if (bucketKeys == null) { +try { + FetchKeysResponse fetchKeysResponse = getFetchKeysResponse(member, bucket); + keys = fetchKeysResponse.waitForKeys(); +} catch (ForceReattemptException ignore) { Review comment: Ah, I see. We may catch the exception, but we don't actually do anything with it, even though we do _something_ if we catch it. This name is fine as it is. 
## File path: geode-core/src/test/java/org/apache/geode/internal/cache/PartitionedRegionTest.java ## @@ -573,6 +578,50 @@ public void transactionThrowsTransactionDataRebalancedExceptionIfIsAForceReattem .hasMessage(PartitionedRegion.DATA_MOVED_BY_REBALANCE).hasCause(exception); } + @Test + public void failuresSavedIfFetchKeysThrows() throws Exception { +PartitionedRegion spyPartitionedRegion = spy(partitionedRegion); + +VersionedObjectList values = mock(VersionedObjectList.class); +ServerConnection serverConnection = mock(ServerConnection.class); +Set failures = mock(Set.class); Review comment: Instead of verifying the method call on the mocked set, you can verify the contents of the real set, which should contain one item, which is the `Integer` 1. ## File path: geode-core/src/test/java/org/apache/geode/internal/cache/PartitionedRegionTest.java ## @@ -573,6 +578,50 @@ public void transactionThrowsTransactionDataRebalancedExceptionIfIsAForceReattem .hasMessage(PartitionedRegion.DATA_MOVED_BY_REBALANCE).hasCause(exception); } + @Test + public void failuresSavedIfFetchKeysThrows() throws Exception { +PartitionedRegion spyPartitionedRegion = spy(partitionedRegion); + +VersionedObjectList values = mock(VersionedObjectList.class); +ServerConnection serverConnection = mock(ServerConnection.class); +Set failures = mock(Set.class); +InternalDistributedMember member = mock(InternalDistributedMember.class); +Set buckets = new HashSet<>(); +buckets.add(1); +doThrow(new ForceReattemptException("")).when(spyPartitionedRegion).getFetchKeysResponse(member, +1); + +spyPartitionedRegion.fetchKeysAndValues(values, serverConnection, failures, member, null, +buckets); + +verify(failures).add(1); +verify(spyPartitionedRegion, never()).getValuesForKeys(values, serverConnection, null); + } + + @Test + public void fetchKeysAndValuesInvokesGetValuesForKeys() throws Exception { +PartitionedRegion spyPartitionedRegion = spy(partitionedRegion); + +VersionedObjectList values = 
mock(VersionedObjectList.class); +ServerConnection serverConnection = mock(ServerConnection.class); +Set failures = mock(Set.class); Review comment: It's part of the "don't mock what you don't own" principle in TDD, explained [here](https://github.com/testdouble/contributing-tests/wiki/Don't-mock-what-you-don't-own) and [here](https://blog.codecentric.de/en/2018/03/mock-what-when-how/). Also, as discussed [here](https://github.com/mockito/mockito/wiki/How-to-write-good-tests), if you can avoid mocking a class, as in this case, then that's probably a good thing. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > NullPointerException thrown in PartitionedRegion.handleOldNodes > --
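The alternative suggested in this review thread, passing a real set and asserting its contents instead of verifying calls on a mock, can be sketched in plain Java. The class and method names below are illustrative stand-ins, not the actual Geode test code:

```java
import java.util.HashSet;
import java.util.Set;

public class RealSetAssertionSketch {
    // Stand-in for fetchKeysAndValues: pretend fetching keys for bucket 1
    // failed, so the bucket id is recorded in the caller-supplied failures set.
    public static void recordFailure(Set<Integer> failures) {
        failures.add(1);
    }

    public static void main(String[] args) {
        // Pass a real HashSet instead of mock(Set.class)...
        Set<Integer> failures = new HashSet<>();
        recordFailure(failures);
        // ...then assert on state: the set should contain exactly the Integer 1.
        if (!failures.equals(Set.of(1))) {
            throw new AssertionError("expected {1} but was " + failures);
        }
        System.out.println("failures = " + failures);
    }
}
```

Asserting on the set's final state also keeps the test independent of how many times add happens to be called, which is part of the "don't mock what you don't own" rationale linked above.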
[jira] [Commented] (GEODE-8070) add TLSv1.3 to "known" secure communications protocols
[ https://issues.apache.org/jira/browse/GEODE-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100292#comment-17100292 ] Bill Burcham commented on GEODE-8070: - or better yet instead of having a list of protocol versions at all in the product code, perhaps call {{SSLContext.getDefault()}} > add TLSv1.3 to "known" secure communications protocols > -- > > Key: GEODE-8070 > URL: https://issues.apache.org/jira/browse/GEODE-8070 > Project: Geode > Issue Type: Bug > Components: membership >Reporter: Bruce J Schuchardt >Priority: Major > > SSLUtil has a list of "known" TLS protocols. It should support TLSv1.3. > > {noformat} > // lookup known algorithms > String[] knownAlgorithms = {"SSL", "SSLv2", "SSLv3", "TLS", "TLSv1", > "TLSv1.1", "TLSv1.2"}; > for (String algo : knownAlgorithms) { > try { > sslContext = SSLContext.getInstance(algo); > break; > } catch (NoSuchAlgorithmException e) { > // continue > } > } {noformat} > We probably can't fully test this change since not all JDKs we test with > support v1.3 at this time. -- This message was sent by Atlassian Jira (v8.3.4#803005)
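For reference, the quoted lookup loop with "TLSv1.3" added, reordered here to try the newest protocol first (the original code tries "SSL" first). The class and method names are illustrative, not Geode's actual SSLUtil:

```java
import java.security.NoSuchAlgorithmException;
import javax.net.ssl.SSLContext;

public class SslProtocolLookup {
    // Try known algorithms from newest to oldest and return the first one the
    // running JDK supports; returns null if none resolve.
    public static SSLContext findContext() {
        String[] knownAlgorithms = {"TLSv1.3", "TLSv1.2", "TLSv1.1", "TLSv1",
            "TLS", "SSLv3", "SSLv2", "SSL"};
        for (String algo : knownAlgorithms) {
            try {
                return SSLContext.getInstance(algo);
            } catch (NoSuchAlgorithmException e) {
                // this JDK does not know the algorithm; try the next one
            }
        }
        return null;
    }

    public static void main(String[] args) {
        SSLContext context = findContext();
        System.out.println(context == null ? "no known protocol" : context.getProtocol());
    }
}
```

As the comments note, SSLContext.getDefault() would avoid maintaining this list entirely, at the cost of returning a shared singleton; SSLContext.getInstance("Default") instead returns a new, already-initialized context.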
[jira] [Commented] (GEODE-8072) When cache is closing, the lucene query might still on-going, some NPE could happen
[ https://issues.apache.org/jira/browse/GEODE-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100290#comment-17100290 ] ASF GitHub Bot commented on GEODE-8072: --- gesterzhou commented on a change in pull request #5053: URL: https://github.com/apache/geode/pull/5053#discussion_r420437837 ## File path: geode-core/src/main/java/org/apache/geode/internal/cache/execute/InternalFunctionExecutionServiceImpl.java ## @@ -116,6 +117,11 @@ public Execution onRegion(Region region) { throw new FunctionException("Region instance passed is null"); } +if (region.getAttributes() == null) { Review comment: fixed This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > When cache is closing, the lucene query might still on-going, some NPE could > happen > --- > > Key: GEODE-8072 > URL: https://issues.apache.org/jira/browse/GEODE-8072 > Project: Geode > Issue Type: Improvement >Reporter: Xiaojian Zhou >Priority: Major > > when the cache is closing, what detected recently is: > RROR util.TestException: Got unexpected exception > java.lang.NullPointerException > at > org.apache.geode.internal.cache.execute.InternalFunctionExecutionServiceImpl.onRegion(InternalFunctionExecutionServiceImpl.java:120) > at > org.apache.geode.cache.execute.FunctionService.onRegion(FunctionService.java:76) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.onRegion(PageableLuceneQueryResultsImpl.java:116) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.getValues(PageableLuceneQueryResultsImpl.java:110) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.getHitEntries(PageableLuceneQueryResultsImpl.java:91) > at > 
org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.advancePage(PageableLuceneQueryResultsImpl.java:139) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.hasNext(PageableLuceneQueryResultsImpl.java:148) > It's not caused by any recent code changes; it's just a deeply buried race > condition being triggered. > I propose a simple fix to just check for null and throw an exception which > could be handled. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8072) When cache is closing, the lucene query might still on-going, some NPE could happen
[ https://issues.apache.org/jira/browse/GEODE-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100280#comment-17100280 ] ASF GitHub Bot commented on GEODE-8072: --- pivotal-eshu commented on a change in pull request #5053: URL: https://github.com/apache/geode/pull/5053#discussion_r420430242 ## File path: geode-core/src/main/java/org/apache/geode/internal/cache/execute/InternalFunctionExecutionServiceImpl.java ## @@ -116,6 +117,11 @@ public Execution onRegion(Region region) { throw new FunctionException("Region instance passed is null"); } +if (region.getAttributes() == null) { Review comment: How about check if (region.isDestroyed()) here? And then throw FunctionException that region is destroyed? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > When cache is closing, the lucene query might still on-going, some NPE could > happen > --- > > Key: GEODE-8072 > URL: https://issues.apache.org/jira/browse/GEODE-8072 > Project: Geode > Issue Type: Improvement >Reporter: Xiaojian Zhou >Priority: Major > > when the cache is closing, what detected recently is: > RROR util.TestException: Got unexpected exception > java.lang.NullPointerException > at > org.apache.geode.internal.cache.execute.InternalFunctionExecutionServiceImpl.onRegion(InternalFunctionExecutionServiceImpl.java:120) > at > org.apache.geode.cache.execute.FunctionService.onRegion(FunctionService.java:76) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.onRegion(PageableLuceneQueryResultsImpl.java:116) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.getValues(PageableLuceneQueryResultsImpl.java:110) > at > 
org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.getHitEntries(PageableLuceneQueryResultsImpl.java:91) -- This message was sent by Atlassian Jira (v8.3.4#803005)
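A minimal, self-contained sketch of the guard discussed in this review thread. The Region interface here is a tiny stand-in for Geode's, and the real fix would throw FunctionException rather than the standard exceptions used below:

```java
public class OnRegionGuardSketch {
    // Minimal stand-in for org.apache.geode.cache.Region; only the members the
    // guard needs.
    interface Region {
        boolean isDestroyed();
        String getName();
    }

    // Fail fast with a descriptive exception instead of letting a later
    // getAttributes() call blow up with a NullPointerException during
    // cache close.
    public static void checkRegion(Region region) {
        if (region == null) {
            throw new IllegalArgumentException("Region instance passed is null");
        }
        if (region.isDestroyed()) {
            throw new IllegalStateException(
                "Region " + region.getName() + " has been destroyed");
        }
    }

    public static void main(String[] args) {
        Region destroyed = new Region() {
            public boolean isDestroyed() { return true; }
            public String getName() { return "exampleRegion"; }
        };
        try {
            checkRegion(destroyed);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```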
[jira] [Created] (GEODE-8076) simplify redis concurrency code
Darrel Schneider created GEODE-8076: --- Summary: simplify redis concurrency code Key: GEODE-8076 URL: https://issues.apache.org/jira/browse/GEODE-8076 Project: Geode Issue Type: Improvement Components: redis Reporter: Darrel Schneider Currently, when doing a redis set operation, for example sadd, the code has to be careful to deal with other threads concurrently changing the same set. It does this in a number of ways, but this could be simplified by having a higher-level layer of the code ensure that for a given redis "key" operations will be done in sequential order. This can be done safely in a distributed cluster because we now route all operations for a given key to the server that is storing the primary copy of data for that key. A spike was done and we found that this form of locking did not hurt performance. Since it allows simpler code that is less likely to have subtle concurrency issues, we plan on merging the work done in the spike into the product. -- This message was sent by Atlassian Jira (v8.3.4#803005)
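The per-key ordering described above can be illustrated with a map of per-key locks. This is a simplified sketch of the idea, not the actual Geode redis code:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class PerKeySerializer {
    // One lock object per redis key; computeIfAbsent guarantees all threads
    // agree on the same lock for the same key.
    private final ConcurrentHashMap<String, Object> locks = new ConcurrentHashMap<>();

    // Runs the operation while holding the key's lock, so operations on the
    // same key execute one at a time and the command code can stay simple.
    public <T> T execute(String key, Supplier<T> operation) {
        Object lock = locks.computeIfAbsent(key, k -> new Object());
        synchronized (lock) {
            return operation.get();
        }
    }
}
```

Because every operation for a key is routed to the server holding that key's primary copy, a server-local lock like this is enough to give cluster-wide per-key ordering, while operations on different keys still run concurrently.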
[jira] [Commented] (GEODE-8016) Replace Maven SNAPSHOT with enumerated build-id artifacts
[ https://issues.apache.org/jira/browse/GEODE-8016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100271#comment-17100271 ] ASF GitHub Bot commented on GEODE-8016: --- rhoughton-pivot opened a new pull request #5057: URL: https://github.com/apache/geode/pull/5057 * Artifacts take the form `1.13.0-build.123` instead of `1.13.0-SNAPSHOT`. * checkPom task has been modified to use a slug instead of an always changing version. * Gradle clients will use the greedy "1.13.0-build+" notation * Maven clients will use semver v1.0 "[1.13.0-build,1.14.0)" notation. Signed-off-by: Robert Houghton Thank you for submitting a contribution to Apache Geode. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Has your PR been rebased against the latest commit within the target branch (typically `develop`)? - [ ] Is your initial contribution a single, squashed commit? - [ ] Does `gradlew build` run cleanly? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? ### Note: Please ensure that once the PR is submitted, check Concourse for build issues and submit an update to your PR as soon as possible. If you need help, please send an email to d...@geode.apache.org. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Replace Maven SNAPSHOT with enumerated build-id artifacts > - > > Key: GEODE-8016 > URL: https://issues.apache.org/jira/browse/GEODE-8016 > Project: Geode > Issue Type: Task > Components: build, ci >Reporter: Robert Houghton >Assignee: Robert Houghton >Priority: Major > > To better support repeatable builds in CI, publish artifacts in the form > `1.2.3-build.123` instead of `1.2.3-SNAPSHOT` with the SNAPSHOT dynamically > changing. As an example, the `geode-examples` pipeline would be able to grab > a distinct artifact for build-and-test, instead of an unrepeatable, invisibly > rolling `SNAPSHOT`. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8073) NullPointerException thrown in PartitionedRegion.handleOldNodes
[ https://issues.apache.org/jira/browse/GEODE-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100266#comment-17100266 ] ASF GitHub Bot commented on GEODE-8073: --- pivotal-eshu commented on a change in pull request #5055: URL: https://github.com/apache/geode/pull/5055#discussion_r420419285 ## File path: geode-core/src/test/java/org/apache/geode/internal/cache/PartitionedRegionTest.java ## @@ -573,6 +578,50 @@ public void transactionThrowsTransactionDataRebalancedExceptionIfIsAForceReattem .hasMessage(PartitionedRegion.DATA_MOVED_BY_REBALANCE).hasCause(exception); } + @Test + public void failuresSavedIfFetchKeysThrows() throws Exception { +PartitionedRegion spyPartitionedRegion = spy(partitionedRegion); + +VersionedObjectList values = mock(VersionedObjectList.class); +ServerConnection serverConnection = mock(ServerConnection.class); +Set failures = mock(Set.class); +InternalDistributedMember member = mock(InternalDistributedMember.class); +Set buckets = new HashSet<>(); +buckets.add(1); +doThrow(new ForceReattemptException("")).when(spyPartitionedRegion).getFetchKeysResponse(member, +1); + +spyPartitionedRegion.fetchKeysAndValues(values, serverConnection, failures, member, null, +buckets); + +verify(failures).add(1); +verify(spyPartitionedRegion, never()).getValuesForKeys(values, serverConnection, null); + } + + @Test + public void fetchKeysAndValuesInvokesGetValuesForKeys() throws Exception { +PartitionedRegion spyPartitionedRegion = spy(partitionedRegion); + +VersionedObjectList values = mock(VersionedObjectList.class); +ServerConnection serverConnection = mock(ServerConnection.class); +Set failures = mock(Set.class); +InternalDistributedMember member = mock(InternalDistributedMember.class); +Set buckets = new HashSet<>(); +buckets.add(1); +FetchKeysMessage.FetchKeysResponse fetchKeysResponse = +mock(FetchKeysMessage.FetchKeysResponse.class); + 
doReturn(fetchKeysResponse).when(spyPartitionedRegion).getFetchKeysResponse(member, 1); +Set keys = mock(Set.class); Review comment: I'd like to know the reason why this has to be a real Set. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > NullPointerException thrown in PartitionedRegion.handleOldNodes > --- > > Key: GEODE-8073 > URL: https://issues.apache.org/jira/browse/GEODE-8073 > Project: Geode > Issue Type: Bug > Components: regions >Reporter: Eric Shu >Assignee: Eric Shu >Priority: Major > Labels: caching-applications > > The NPE can be thrown when a remote node is gone unexpectedly. > {noformat} > Caused by: java.lang.NullPointerException > at > org.apache.geode.internal.cache.PartitionedRegion.handleOldNodes(PartitionedRegion.java:4610) > at > org.apache.geode.internal.cache.PartitionedRegion.fetchEntries(PartitionedRegion.java:4689) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVKeysPR(BaseCommand.java:1191) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVAllKeys(BaseCommand.java:1124) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKeysValuesPolicy(BaseCommand.java:973) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.fillAndSendRegisterInterestResponseChunks(BaseCommand.java:905) > at > org.apache.geode.internal.cache.tier.sockets.command.RegisterInterest61.cmdExecute(RegisterInterest61.java:260) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:183) > at > org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMessage(ServerConnection.java:848) > at > org.apache.geode.internal.cache.tier.sockets.OriginalServerConnection.doOneMessage(OriginalServerConnection.java:72) > at > 
org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1212) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at > org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.lambda$initializeServerConnectionThreadPool$3(AcceptorImpl.java:686) > at > org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119) >
[jira] [Commented] (GEODE-8073) NullPointerException thrown in PartitionedRegion.handleOldNodes
[ https://issues.apache.org/jira/browse/GEODE-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100265#comment-17100265 ] ASF GitHub Bot commented on GEODE-8073: --- pivotal-eshu commented on a change in pull request #5055: URL: https://github.com/apache/geode/pull/5055#discussion_r420418846 ## File path: geode-core/src/test/java/org/apache/geode/internal/cache/PartitionedRegionTest.java ## @@ -573,6 +578,50 @@ public void transactionThrowsTransactionDataRebalancedExceptionIfIsAForceReattem .hasMessage(PartitionedRegion.DATA_MOVED_BY_REBALANCE).hasCause(exception); } + @Test + public void failuresSavedIfFetchKeysThrows() throws Exception { +PartitionedRegion spyPartitionedRegion = spy(partitionedRegion); + +VersionedObjectList values = mock(VersionedObjectList.class); +ServerConnection serverConnection = mock(ServerConnection.class); +Set failures = mock(Set.class); +InternalDistributedMember member = mock(InternalDistributedMember.class); +Set buckets = new HashSet<>(); +buckets.add(1); +doThrow(new ForceReattemptException("")).when(spyPartitionedRegion).getFetchKeysResponse(member, +1); + +spyPartitionedRegion.fetchKeysAndValues(values, serverConnection, failures, member, null, +buckets); + +verify(failures).add(1); +verify(spyPartitionedRegion, never()).getValuesForKeys(values, serverConnection, null); + } + + @Test + public void fetchKeysAndValuesInvokesGetValuesForKeys() throws Exception { +PartitionedRegion spyPartitionedRegion = spy(partitionedRegion); + +VersionedObjectList values = mock(VersionedObjectList.class); +ServerConnection serverConnection = mock(ServerConnection.class); +Set failures = mock(Set.class); Review comment: Is there a particular reason the set should not be mocked? It is not being iterated over, just used as a placeholder. I would like to know the reason for the requested change so that I may gain some new knowledge. This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > NullPointerException thrown in PartitionedRegion.handleOldNodes -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8073) NullPointerException thrown in PartitionedRegion.handleOldNodes
[ https://issues.apache.org/jira/browse/GEODE-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100264#comment-17100264 ] ASF GitHub Bot commented on GEODE-8073: --- pivotal-eshu commented on a change in pull request #5055: URL: https://github.com/apache/geode/pull/5055#discussion_r420417517 ## File path: geode-core/src/test/java/org/apache/geode/internal/cache/PartitionedRegionTest.java ## @@ -573,6 +578,50 @@ public void transactionThrowsTransactionDataRebalancedExceptionIfIsAForceReattem .hasMessage(PartitionedRegion.DATA_MOVED_BY_REBALANCE).hasCause(exception); } + @Test + public void failuresSavedIfFetchKeysThrows() throws Exception { +PartitionedRegion spyPartitionedRegion = spy(partitionedRegion); + +VersionedObjectList values = mock(VersionedObjectList.class); +ServerConnection serverConnection = mock(ServerConnection.class); +Set failures = mock(Set.class); Review comment: The mocked failures as a set is used later to verify whether a method is invoked on the set. Otherwise it would be hard to verify. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > NullPointerException thrown in PartitionedRegion.handleOldNodes > --- > > Key: GEODE-8073 > URL: https://issues.apache.org/jira/browse/GEODE-8073 > Project: Geode > Issue Type: Bug > Components: regions >Reporter: Eric Shu >Assignee: Eric Shu >Priority: Major > Labels: caching-applications > > The NPE can be thrown when a remote node is gone unexpectedly. 
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8073) NullPointerException thrown in PartitionedRegion.handleOldNodes
[ https://issues.apache.org/jira/browse/GEODE-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100262#comment-17100262 ] ASF GitHub Bot commented on GEODE-8073: --- pivotal-eshu commented on a change in pull request #5055: URL: https://github.com/apache/geode/pull/5055#discussion_r420416227 ## File path: geode-core/src/main/java/org/apache/geode/internal/cache/PartitionedRegion.java ## @@ -4593,39 +4593,58 @@ void updateNodeToBucketMap( buckets = bucketKeys.keySet(); } - for (Integer bucket : buckets) { -Set keys = null; -if (bucketKeys == null) { - try { -FetchKeysResponse fkr = FetchKeysMessage.send(member, this, bucket, true); -keys = fkr.waitForKeys(); - } catch (ForceReattemptException ignore) { -failures.add(bucket); - } -} else { - keys = bucketKeys.get(bucket); + fetchKeysAndValues(values, servConn, failures, member, bucketKeys, buckets); +} +return failures; + } + + void fetchKeysAndValues(VersionedObjectList values, ServerConnection servConn, + Set failures, InternalDistributedMember member, + HashMap bucketKeys, Set buckets) + throws IOException { +for (Integer bucket : buckets) { + Set keys = null; + if (bucketKeys == null) { +try { + FetchKeysResponse fetchKeysResponse = getFetchKeysResponse(member, bucket); + keys = fetchKeysResponse.waitForKeys(); +} catch (ForceReattemptException ignore) { Review comment: The exception is caught here, but the exception itself is ignored. It seems this was deliberately changed to ignore it in a previous check-in. Does this definitely need to change back? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org > NullPointerException thrown in PartitionedRegion.handleOldNodes -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (GEODE-8055) can not create index on sub regions
[ https://issues.apache.org/jira/browse/GEODE-8055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Owen Nichols updated GEODE-8055: Fix Version/s: 1.13.0 > can not create index on sub regions > --- > > Key: GEODE-8055 > URL: https://issues.apache.org/jira/browse/GEODE-8055 > Project: Geode > Issue Type: Bug > Components: gfsh >Affects Versions: 1.7.0, 1.8.0, 1.10.0, 1.9.2, 1.11.0, 1.12.0 >Reporter: Jinmei Liao >Priority: Major > Fix For: 1.13.0, 1.14.0 > > > When trying to use "create index" command in gfsh to create index on sub > regions, we get the following message: > "Sub-regions are unsupported" > Pre-1.6, we were able to do that. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8055) can not create index on sub regions
[ https://issues.apache.org/jira/browse/GEODE-8055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100250#comment-17100250 ] ASF subversion and git services commented on GEODE-8055: Commit f2e54fc0484baca48d590859f0e8721196ca65d1 in geode's branch refs/heads/support/1.13 from Jinmei Liao [ https://gitbox.apache.org/repos/asf?p=geode.git;h=f2e54fc ] GEODE-8055: create index command should work on sub regions (#5034) > can not create index on sub regions > --- > > Key: GEODE-8055 > URL: https://issues.apache.org/jira/browse/GEODE-8055 > Project: Geode > Issue Type: Bug > Components: gfsh >Affects Versions: 1.7.0, 1.8.0, 1.10.0, 1.9.2, 1.11.0, 1.12.0 >Reporter: Jinmei Liao >Priority: Major > Fix For: 1.14.0 > > > When trying to use "create index" command in gfsh to create index on sub > regions, we get the following message: > "Sub-regions are unsupported" > Pre-1.6, we were able to do that. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8073) NullPointerException thrown in PartitionedRegion.handleOldNodes
[ https://issues.apache.org/jira/browse/GEODE-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100220#comment-17100220 ] ASF GitHub Bot commented on GEODE-8073: --- DonalEvans commented on a change in pull request #5055: URL: https://github.com/apache/geode/pull/5055#discussion_r420368476 ## File path: geode-core/src/test/java/org/apache/geode/internal/cache/PartitionedRegionTest.java ## @@ -573,6 +578,50 @@ public void transactionThrowsTransactionDataRebalancedExceptionIfIsAForceReattem .hasMessage(PartitionedRegion.DATA_MOVED_BY_REBALANCE).hasCause(exception); } + @Test + public void failuresSavedIfFetchKeysThrows() throws Exception { +PartitionedRegion spyPartitionedRegion = spy(partitionedRegion); + +VersionedObjectList values = mock(VersionedObjectList.class); +ServerConnection serverConnection = mock(ServerConnection.class); +Set failures = mock(Set.class); Review comment: It's best not to mock Collections. It's fine to just use an actual Set here. 
## File path: geode-core/src/main/java/org/apache/geode/internal/cache/PartitionedRegion.java ## @@ -4593,39 +4593,58 @@ void updateNodeToBucketMap( buckets = bucketKeys.keySet(); } - for (Integer bucket : buckets) { -Set keys = null; -if (bucketKeys == null) { - try { -FetchKeysResponse fkr = FetchKeysMessage.send(member, this, bucket, true); -keys = fkr.waitForKeys(); - } catch (ForceReattemptException ignore) { -failures.add(bucket); - } -} else { - keys = bucketKeys.get(bucket); + fetchKeysAndValues(values, servConn, failures, member, bucketKeys, buckets); +} +return failures; + } + + void fetchKeysAndValues(VersionedObjectList values, ServerConnection servConn, + Set failures, InternalDistributedMember member, + HashMap bucketKeys, Set buckets) + throws IOException { +for (Integer bucket : buckets) { + Set keys = null; + if (bucketKeys == null) { +try { + FetchKeysResponse fetchKeysResponse = getFetchKeysResponse(member, bucket); + keys = fetchKeysResponse.waitForKeys(); +} catch (ForceReattemptException ignore) { Review comment: The exception isn't ignored, so it shouldn't be called "ignore." 
## File path: geode-core/src/test/java/org/apache/geode/internal/cache/PartitionedRegionTest.java ## @@ -573,6 +578,50 @@ public void transactionThrowsTransactionDataRebalancedExceptionIfIsAForceReattem .hasMessage(PartitionedRegion.DATA_MOVED_BY_REBALANCE).hasCause(exception); } + @Test + public void failuresSavedIfFetchKeysThrows() throws Exception { +PartitionedRegion spyPartitionedRegion = spy(partitionedRegion); + +VersionedObjectList values = mock(VersionedObjectList.class); +ServerConnection serverConnection = mock(ServerConnection.class); +Set failures = mock(Set.class); +InternalDistributedMember member = mock(InternalDistributedMember.class); +Set buckets = new HashSet<>(); +buckets.add(1); +doThrow(new ForceReattemptException("")).when(spyPartitionedRegion).getFetchKeysResponse(member, +1); + +spyPartitionedRegion.fetchKeysAndValues(values, serverConnection, failures, member, null, +buckets); + +verify(failures).add(1); +verify(spyPartitionedRegion, never()).getValuesForKeys(values, serverConnection, null); + } + + @Test + public void fetchKeysAndValuesInvokesGetValuesForKeys() throws Exception { +PartitionedRegion spyPartitionedRegion = spy(partitionedRegion); + +VersionedObjectList values = mock(VersionedObjectList.class); +ServerConnection serverConnection = mock(ServerConnection.class); +Set failures = mock(Set.class); Review comment: Here you should use a real Set instead of a mock. 
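The review suggestion above (use a real Set instead of `mock(Set.class)`) can be illustrated without any mocking framework. This is a minimal sketch with hypothetical names; with a real collection the test inspects actual state rather than verifying calls on a mock:

```java
import java.util.HashSet;
import java.util.Set;

public class RealSetOverMock {
    // Code under test: records a failed bucket id into the supplied set.
    static void recordFailure(Set<Integer> failures, int bucket) {
        failures.add(bucket);
    }

    // With a real HashSet there is nothing to stub: the assertion simply
    // checks the set's contents after the call.
    public static boolean demo() {
        Set<Integer> failures = new HashSet<>();  // real collection, not mock(Set.class)
        recordFailure(failures, 1);
        return failures.contains(1) && failures.size() == 1;
    }
}
```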
[jira] [Commented] (GEODE-8073) NullPointerException thrown in PartitionedRegion.handleOldNodes
[ https://issues.apache.org/jira/browse/GEODE-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100219#comment-17100219 ] ASF GitHub Bot commented on GEODE-8073: --- lgtm-com[bot] commented on pull request #5055: URL: https://github.com/apache/geode/pull/5055#issuecomment-624272650 This pull request **fixes 1 alert** when merging d61bae66de48625dfbff45cf159b9fcb54529828 into 7ee1042a8393563b4d7655b8bc2d4a77564b91b5 - [view on LGTM.com](https://lgtm.com/projects/g/apache/geode/rev/pr-deac454be03d5313c08a45720cf8cf0f81bee0fa) **fixed alerts:** * 1 for Dereferenced variable may be null This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > NullPointerException thrown in PartitionedRegion.handleOldNodes > --- > > Key: GEODE-8073 > URL: https://issues.apache.org/jira/browse/GEODE-8073 > Project: Geode > Issue Type: Bug > Components: regions >Reporter: Eric Shu >Assignee: Eric Shu >Priority: Major > Labels: caching-applications > > The NPE can be thrown when a remote node is gone unexpectedly. 
> {noformat} > Caused by: java.lang.NullPointerException > at > org.apache.geode.internal.cache.PartitionedRegion.handleOldNodes(PartitionedRegion.java:4610) > at > org.apache.geode.internal.cache.PartitionedRegion.fetchEntries(PartitionedRegion.java:4689) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVKeysPR(BaseCommand.java:1191) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVAllKeys(BaseCommand.java:1124) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKeysValuesPolicy(BaseCommand.java:973) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.fillAndSendRegisterInterestResponseChunks(BaseCommand.java:905) > at > org.apache.geode.internal.cache.tier.sockets.command.RegisterInterest61.cmdExecute(RegisterInterest61.java:260) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:183) > at > org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMessage(ServerConnection.java:848) > at > org.apache.geode.internal.cache.tier.sockets.OriginalServerConnection.doOneMessage(OriginalServerConnection.java:72) > at > org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1212) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at > org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.lambda$initializeServerConnectionThreadPool$3(AcceptorImpl.java:686) > at > org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119) > at java.lang.Thread.run(Thread.java:748) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
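The NPE in the stack trace above comes from dereferencing a key set that was never populated after a remote node departed. A common defensive pattern — shown here as a generic sketch with hypothetical names, not the actual Geode fix — is to substitute an empty collection before iterating:

```java
import java.util.Collections;
import java.util.List;
import java.util.Set;

public class NullSafeKeys {
    // If a remote fetch produced null (e.g. the remote node left), substitute
    // an empty set so downstream iteration cannot throw NullPointerException.
    public static Set<String> orEmpty(Set<String> fetched) {
        return fetched != null ? fetched : Collections.emptySet();
    }

    public static int countEntries(List<Set<String>> perBucketKeys) {
        int total = 0;
        for (Set<String> keys : perBucketKeys) {
            total += orEmpty(keys).size();  // safe even when keys == null
        }
        return total;
    }
}
```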
[jira] [Commented] (GEODE-7678) Partitioned Region clear operations must invoke cache level listeners
[ https://issues.apache.org/jira/browse/GEODE-7678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100205#comment-17100205 ] ASF GitHub Bot commented on GEODE-7678: --- pivotal-eshu commented on a change in pull request #4987: URL: https://github.com/apache/geode/pull/4987#discussion_r420348846 ## File path: geode-core/src/main/java/org/apache/geode/internal/cache/PartitionedRegion.java ## @@ -10373,4 +10377,27 @@ void updatePartitionRegionConfig( public SenderIdMonitor getSenderIdMonitor() { return senderIdMonitor; } + + protected ClearPartitionedRegion getClearPartitionedRegion() { +return clearPartitionedRegion; + } + + @Override + void cmnClearRegion(RegionEventImpl regionEvent, boolean cacheWrite, boolean useRVV) { +// Synchronized to avoid other threads invoking clear on this vm/node. +synchronized (clearLock) { + clearPartitionedRegion.doClear(regionEvent, cacheWrite, this); Review comment: Clear, as a region operation, cannot be executed under a transaction. It will throw UnsupportedOperation. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Partitioned Region clear operations must invoke cache level listeners > - > > Key: GEODE-7678 > URL: https://issues.apache.org/jira/browse/GEODE-7678 > Project: Geode > Issue Type: Sub-task > Components: regions >Reporter: Nabarun Nag >Priority: Major > Labels: GeodeCommons, GeodeOperationAPI > > Clear operations are successful and CacheListener.afterRegionClear(), > CacheWriter.beforeRegionClear() are invoked. > > Acceptance : > * DUnit tests validating the above behavior. > * Test coverage to when a member departs in this scenario > * Test coverage to when a member restarts in this scenario > * Unit tests with complete code coverage for the newly written code. 
> -- This message was sent by Atlassian Jira (v8.3.4#803005)
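The quoted `cmnClearRegion` diff combines two guards: clear is rejected under a transaction, and a per-node lock keeps concurrent clears from interleaving. A self-contained sketch of that shape (hypothetical class and field names, not the Geode implementation):

```java
public class ClearableRegion {
    private final Object clearLock = new Object();
    private int entryCount = 10;
    private boolean inTransaction = false;

    public void setInTransaction(boolean tx) {
        inTransaction = tx;
    }

    // Region-level clear: rejected inside a transaction, and synchronized so
    // only one thread on this node runs clear at a time.
    public void clear() {
        if (inTransaction) {
            throw new UnsupportedOperationException("clear() is not allowed in a transaction");
        }
        synchronized (clearLock) {
            entryCount = 0;
        }
    }

    public int size() {
        return entryCount;
    }
}
```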
[jira] [Commented] (GEODE-8073) NullPointerException thrown in PartitionedRegion.handleOldNodes
[ https://issues.apache.org/jira/browse/GEODE-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100192#comment-17100192 ] ASF GitHub Bot commented on GEODE-8073: --- pivotal-eshu opened a new pull request #5055: URL: https://github.com/apache/geode/pull/5055 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > NullPointerException thrown in PartitionedRegion.handleOldNodes > --- > > Key: GEODE-8073 > URL: https://issues.apache.org/jira/browse/GEODE-8073 > Project: Geode > Issue Type: Bug > Components: regions >Reporter: Eric Shu >Assignee: Eric Shu >Priority: Major > Labels: caching-applications > > The NPE can be thrown when a remote node is gone unexpectedly. > {noformat} > Caused by: java.lang.NullPointerException > at > org.apache.geode.internal.cache.PartitionedRegion.handleOldNodes(PartitionedRegion.java:4610) > at > org.apache.geode.internal.cache.PartitionedRegion.fetchEntries(PartitionedRegion.java:4689) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVKeysPR(BaseCommand.java:1191) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVAllKeys(BaseCommand.java:1124) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKeysValuesPolicy(BaseCommand.java:973) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.fillAndSendRegisterInterestResponseChunks(BaseCommand.java:905) > at > org.apache.geode.internal.cache.tier.sockets.command.RegisterInterest61.cmdExecute(RegisterInterest61.java:260) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:183) > at > org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMessage(ServerConnection.java:848) > at > 
org.apache.geode.internal.cache.tier.sockets.OriginalServerConnection.doOneMessage(OriginalServerConnection.java:72) > at > org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1212) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at > org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.lambda$initializeServerConnectionThreadPool$3(AcceptorImpl.java:686) > at > org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119) > at java.lang.Thread.run(Thread.java:748) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8073) NullPointerException thrown in PartitionedRegion.handleOldNodes
[ https://issues.apache.org/jira/browse/GEODE-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100187#comment-17100187 ] ASF subversion and git services commented on GEODE-8073: Commit d61bae66de48625dfbff45cf159b9fcb54529828 in geode's branch refs/heads/feature/GEODE-8073 from Eric Shu [ https://gitbox.apache.org/repos/asf?p=geode.git;h=d61bae6 ] GEODE-8073: Fix NPE after FetchKeysMessage failed. > NullPointerException thrown in PartitionedRegion.handleOldNodes > --- > > Key: GEODE-8073 > URL: https://issues.apache.org/jira/browse/GEODE-8073 > Project: Geode > Issue Type: Bug > Components: regions >Reporter: Eric Shu >Assignee: Eric Shu >Priority: Major > Labels: caching-applications > > The NPE can be thrown when a remote node is gone unexpectedly. > {noformat} > Caused by: java.lang.NullPointerException > at > org.apache.geode.internal.cache.PartitionedRegion.handleOldNodes(PartitionedRegion.java:4610) > at > org.apache.geode.internal.cache.PartitionedRegion.fetchEntries(PartitionedRegion.java:4689) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVKeysPR(BaseCommand.java:1191) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVAllKeys(BaseCommand.java:1124) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKeysValuesPolicy(BaseCommand.java:973) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.fillAndSendRegisterInterestResponseChunks(BaseCommand.java:905) > at > org.apache.geode.internal.cache.tier.sockets.command.RegisterInterest61.cmdExecute(RegisterInterest61.java:260) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:183) > at > org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMessage(ServerConnection.java:848) > at > org.apache.geode.internal.cache.tier.sockets.OriginalServerConnection.doOneMessage(OriginalServerConnection.java:72) > at > 
org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1212) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at > org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.lambda$initializeServerConnectionThreadPool$3(AcceptorImpl.java:686) > at > org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119) > at java.lang.Thread.run(Thread.java:748) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-7667) GFSH commands - uniform gfsh command to clear regions
[ https://issues.apache.org/jira/browse/GEODE-7667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100173#comment-17100173 ] ASF subversion and git services commented on GEODE-7667: Commit 62858e296f874bc237706663b55590bce8f595c5 in geode's branch refs/heads/feature/GEODE-7665 from BenjaminPerryRoss [ https://gitbox.apache.org/repos/asf?p=geode.git;h=62858e2 ] GEODE-7667: Add a 'clear' gfsh command for PR and RR clear (#4818) * Added clear command and modified remove functionality to clear PR Authored-by: Benjamin Ross > GFSH commands - uniform gfsh command to clear regions > - > > Key: GEODE-7667 > URL: https://issues.apache.org/jira/browse/GEODE-7667 > Project: Geode > Issue Type: Sub-task > Components: regions >Reporter: Nabarun Nag >Assignee: Benjamin P Ross >Priority: Major > Labels: GeodeCommons, docs > Time Spent: 5h > Remaining Estimate: 0h > > * Currently, the gfsh command to clear replicated region is called ‘remove > —region=/regionName’. > * Replace this command with ‘clear region —region=regionName’ > * While executing this gfsh command on partitioned regions, this should call > the clear() Java API using the gfsh function execution machinery. > * Point to note is that this command should take into consideration of the > coordinator selection and how this command is distributed to the members > Acceptance : > * There should be ‘clear region —region=/regionName’ gfsh command > * The gfsh command must be documented in the Geode User Guide > * DUnit tests to verify that command can be executed successfully on > PartitionedRegion > * Deprecate the remove command, as remove does not mean clear > * Unit tests with complete code coverage for the newly written code. > * Test coverage to when a member departs in this scenario > * Test coverage to when a member restarts in this scenario -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (GEODE-8048) change redis sets to use functions and deltas
[ https://issues.apache.org/jira/browse/GEODE-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Darrel Schneider resolved GEODE-8048. - Fix Version/s: 1.13.0 Resolution: Fixed > change redis sets to use functions and deltas > - > > Key: GEODE-8048 > URL: https://issues.apache.org/jira/browse/GEODE-8048 > Project: Geode > Issue Type: Improvement > Components: redis >Reporter: Darrel Schneider >Assignee: Darrel Schneider >Priority: Major > Fix For: 1.13.0 > > > Required operations: > * "SADD" > * "SMEMBERS" > * "SREM" > * "DEL" (of sets) > *AC* > * Above commands implemented using functions/delta propagation > * Make Set operations _not_ on the above list minimally functional with > functions/deltas (performance not important) > * Existing SetsIntegrationTests passing without Ignores > * All other tests passing, including non-Set tests -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8071) RebalanceCommand Should Use Daemon Threads
[ https://issues.apache.org/jira/browse/GEODE-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100124#comment-17100124 ] ASF GitHub Bot commented on GEODE-8071: --- DonalEvans commented on a change in pull request #5054: URL: https://github.com/apache/geode/pull/5054#discussion_r420289186 ## File path: geode-dunit/src/main/java/org/apache/geode/management/internal/cli/commands/RebalanceCommandDistributedTest.java ## @@ -35,40 +37,91 @@ import org.apache.geode.test.dunit.rules.MemberVM; import org.apache.geode.test.junit.assertions.TabularResultModelAssert; import org.apache.geode.test.junit.rules.GfshCommandRule; +import org.apache.geode.test.junit.rules.MemberStarterRule; + +@RunWith(Parameterized.class) +public class RebalanceCommandDistributedTest { + private static final String REGION_ONE_NAME = "region-1"; + private static final String REGION_TWO_NAME = "region-2"; + private static final String REGION_THREE_NAME = "region-3"; + + @Rule + public GfshCommandRule gfsh = new GfshCommandRule(); + + @Rule + public ClusterStartupRule cluster = new ClusterStartupRule(); -@SuppressWarnings("serial") -public class RebalanceCommandDistributedTestBase { + protected MemberVM locator, server1, server2; - @ClassRule - public static ClusterStartupRule cluster = new ClusterStartupRule(); + @Parameterized.Parameters(name = "ConnectionType:{0}") + public static GfshCommandRule.PortType[] connectionTypes() { +return new GfshCommandRule.PortType[] {http, jmxManager}; + } - @ClassRule - public static GfshCommandRule gfsh = new GfshCommandRule(); + @Parameterized.Parameter + public static GfshCommandRule.PortType portType; - protected static MemberVM locator, server1, server2, server3; + private void setUpRegions() { +server1.invoke(() -> { + Cache cache = ClusterStartupRule.getCache(); + assertThat(cache).isNotNull(); + RegionFactory dataRegionFactory = + cache.createRegionFactory(RegionShortcut.PARTITION); + Region region = 
dataRegionFactory.create(REGION_ONE_NAME); + for (int i = 0; i < 10; i++) { +region.put("key" + (i + 200), "value" + (i + 200)); + } + region = dataRegionFactory.create(REGION_TWO_NAME); + for (int i = 0; i < 100; i++) { +region.put("key" + (i + 200), "value" + (i + 200)); + } Review comment: Could the ints added to the keys and values be cleared up a bit? There doesn't seem to be any reason to add 200 to them here, and if the number of entries is extracted to a constant, then it can be used in the invocation on server2 to prevent overwriting the existing data. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > RebalanceCommand Should Use Daemon Threads > -- > > Key: GEODE-8071 > URL: https://issues.apache.org/jira/browse/GEODE-8071 > Project: Geode > Issue Type: Bug > Components: gfsh, management >Affects Versions: 1.13.0 >Reporter: Juan Ramos >Assignee: Juan Ramos >Priority: Major > Labels: caching-applications > > The {{RebalanceCommand}} uses a non-daemon thread to execute its internal > logic: > {code:title=RebalanceCommand.java|borderStyle=solid} > ExecutorService commandExecutors = > LoggingExecutors.newSingleThreadExecutor("RebalanceCommand", false); > {code} > The above prevents the {{locator}} from gracefully shutdown afterwards: > {noformat} > "RebalanceCommand1" #971 prio=5 os_prio=0 tid=0x7f9664011000 nid=0x15905 > waiting on condition [0x7f9651471000] >java.lang.Thread.State: WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0x0007308c36e8> (a > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) > at > 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) > at > java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
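The fix described above — making `RebalanceCommand`'s worker a daemon thread so an idle pool cannot block locator shutdown — can be sketched with plain `java.util.concurrent` (assuming, as the ticket implies, that the boolean argument to `LoggingExecutors.newSingleThreadExecutor` controls the daemon flag):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class DaemonExecutorDemo {
    // A single-thread executor whose worker is a daemon thread, so a parked
    // worker (as in the thread dump above) never prevents JVM shutdown.
    public static ExecutorService newDaemonSingleThreadExecutor(String name) {
        return Executors.newSingleThreadExecutor(runnable -> {
            Thread t = new Thread(runnable, name);
            t.setDaemon(true);
            return t;
        });
    }

    // Runs a task and reports whether it executed on a daemon thread.
    public static boolean workerIsDaemon() {
        ExecutorService pool = newDaemonSingleThreadExecutor("RebalanceCommand");
        AtomicBoolean daemon = new AtomicBoolean(false);
        try {
            pool.submit(() -> daemon.set(Thread.currentThread().isDaemon())).get();
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (Exception e) {
            return false;
        }
        return daemon.get();
    }
}
```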
[jira] [Resolved] (GEODE-8039) update LICENSE for 1.13
[ https://issues.apache.org/jira/browse/GEODE-8039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Owen Nichols resolved GEODE-8039. - Fix Version/s: 1.14.0 1.13.0 1.12.1 Resolution: Fixed > update LICENSE for 1.13 > --- > > Key: GEODE-8039 > URL: https://issues.apache.org/jira/browse/GEODE-8039 > Project: Geode > Issue Type: Improvement > Components: release >Reporter: Owen Nichols >Priority: Major > Fix For: 1.12.1, 1.13.0, 1.14.0 > > > ensure all dependencies we bundle with src distribution are correctly listed > in LICENSE and all dependencies bundled in binary distribution are correctly > listed in geode-assembly/src/main/dist/LICENSE -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8004) Regression Introduced Through GEODE-7565
[ https://issues.apache.org/jira/browse/GEODE-8004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100104#comment-17100104 ] ASF GitHub Bot commented on GEODE-8004: --- alb3rtobr commented on a change in pull request #4978: URL: https://github.com/apache/geode/pull/4978#discussion_r420281148 ## File path: geode-core/src/main/java/org/apache/geode/distributed/internal/ServerLocationAndMemberId.java ## @@ -0,0 +1,65 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more contributor license + * agreements. See the NOTICE file distributed with this work for additional information regarding + * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. You may obtain a + * copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software distributed under the License + * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express + * or implied. See the License for the specific language governing permissions and limitations under + * the License. 
+ */ +package org.apache.geode.distributed.internal; + +public class ServerLocationAndMemberId { + + private final ServerLocation serverLocation; + private final String memberId; + + public ServerLocationAndMemberId(ServerLocation serverLocation, String memberId) { +this.serverLocation = serverLocation; +this.memberId = memberId; + } + + public ServerLocation getServerLocation() { +return this.serverLocation; + } + + public String getMemberId() { +return this.memberId; + } + + @Override + public boolean equals(Object obj) { +if (this == obj) + return true; +if (obj == null) + return false; +if (!(obj instanceof ServerLocationAndMemberId)) + return false; +final ServerLocationAndMemberId other = (ServerLocationAndMemberId) obj; + +if (!this.serverLocation.equals(other.getServerLocation())) { + return false; +} + +return this.memberId.equals(other.getMemberId()); Review comment: Fixed, thanks! ## File path: geode-core/src/main/java/org/apache/geode/internal/cache/GridAdvisor.java ## @@ -418,18 +418,24 @@ public String toString() { public int hashCode() { final String thisHost = this.gp.getHost(); final int thisPort = this.gp.getPort(); - return thisHost != null ? (thisHost.hashCode() ^ thisPort) : thisPort; + final String thisMemberId = this.getMemberId().getUniqueId(); + final int thisMemberIdHashCode = (thisMemberId != null) ? thisMemberId.hashCode() : 0; + return thisHost != null ? (thisHost.hashCode() ^ thisPort) + thisMemberIdHashCode + : thisPort + thisMemberIdHashCode; } @Override public boolean equals(Object obj) { if (obj instanceof GridProfileId) { final GridProfileId other = (GridProfileId) obj; + if (this.gp.getPort() == other.gp.getPort()) { final String thisHost = this.gp.getHost(); final String otherHost = other.gp.getHost(); if (thisHost != null) { -return thisHost.equals(otherHost); +if (thisHost.equals(otherHost)) { + return this.getMemberId().getUniqueId().equals(other.getMemberId().getUniqueId()); Review comment: Fixed, thanks! 
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Regression Introduced Through GEODE-7565 > > > Key: GEODE-8004 > URL: https://issues.apache.org/jira/browse/GEODE-8004 > Project: Geode > Issue Type: Bug > Components: client/server >Reporter: Juan Ramos >Assignee: Juan Ramos >Priority: Major > Labels: GeodeCommons > > Intermittent errors were observed while executing some internal tests and > commit > [dd23ee8|https://github.com/apache/geode/commit/dd23ee8200cba67cea82e57e2e4ccedcdf9e8266] > was determined to be responsible. As of yet, no local reproduction of the > issue is available, but work is ongoing to provide a test that can be used to > debug the issue (a [PR|https://github.com/apache/geode/pull/4974] to revert > of the original commit has been opened and will be merged shortly, though, > this ticket is to investigate the root cause so the original commit can be > merged again into {{develop}}). > --- > It seems that a server is trying to read an {{ack}}
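The review thread above extends `equals`/`hashCode` to include the member id alongside host and port. A simplified, self-contained analogue of that contract (hypothetical class name, using `java.util.Objects` instead of the hand-rolled XOR arithmetic in the quoted diff):

```java
import java.util.Objects;

public class ServerKey {
    // Simplified analogue of ServerLocationAndMemberId: two entries are
    // equal only if both the server location and the member id match.
    private final String location;
    private final String memberId;

    public ServerKey(String location, String memberId) {
        this.location = location;
        this.memberId = memberId;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (!(obj instanceof ServerKey)) {
            return false;
        }
        ServerKey other = (ServerKey) obj;
        return Objects.equals(location, other.location)
                && Objects.equals(memberId, other.memberId);
    }

    @Override
    public int hashCode() {
        // Objects.hash is null-safe and keeps hashCode consistent with equals.
        return Objects.hash(location, memberId);
    }
}
```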
[jira] [Commented] (GEODE-7667) GFSH commands - uniform gfsh command to clear regions
[ https://issues.apache.org/jira/browse/GEODE-7667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100097#comment-17100097 ] ASF GitHub Bot commented on GEODE-7667: --- BenjaminPerryRoss commented on pull request #4818: URL: https://github.com/apache/geode/pull/4818#issuecomment-624188611 @davebarnes97 that seems like a good idea. I've created the Jira for the documentation work and added it as a child of the base feature here https://issues.apache.org/jira/browse/GEODE-8074. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > GFSH commands - uniform gfsh command to clear regions > - > > Key: GEODE-7667 > URL: https://issues.apache.org/jira/browse/GEODE-7667 > Project: Geode > Issue Type: Sub-task > Components: regions >Reporter: Nabarun Nag >Assignee: Benjamin P Ross >Priority: Major > Labels: GeodeCommons, docs > Time Spent: 5h > Remaining Estimate: 0h > > * Currently, the gfsh command to clear replicated region is called ‘remove > —region=/regionName’. > * Replace this command with ‘clear region —region=regionName’ > * While executing this gfsh command on partitioned regions, this should call > the clear() Java API using the gfsh function execution machinery. > * Point to note is that this command should take into consideration of the > coordinator selection and how this command is distributed to the members > Acceptance : > * There should be ‘clear region —region=/regionName’ gfsh command > * The gfsh command must be documented in the Geode User Guide > * DUnit tests to verify that command can be executed successfully on > PartitionedRegion > * Deprecate the remove command, as remove does not mean clear > * Unit tests with complete code coverage for the newly written code. 
> * Test coverage to when a member departs in this scenario > * Test coverage to when a member restarts in this scenario -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (GEODE-8074) Add documentation for PR clear
Benjamin P Ross created GEODE-8074: -- Summary: Add documentation for PR clear Key: GEODE-8074 URL: https://issues.apache.org/jira/browse/GEODE-8074 Project: Geode Issue Type: New Feature Components: docs Reporter: Benjamin P Ross With the addition of the ability to clear a partitioned region we need to add documentation for the new gfsh command 'clear' as well as update existing documentation for existing clear feature and 'remove' command. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (GEODE-8073) NullPointerException thrown in PartitionedRegion.handleOldNodes
[ https://issues.apache.org/jira/browse/GEODE-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Shu updated GEODE-8073: Labels: caching-applications (was: ) > NullPointerException thrown in PartitionedRegion.handleOldNodes > --- > > Key: GEODE-8073 > URL: https://issues.apache.org/jira/browse/GEODE-8073 > Project: Geode > Issue Type: Bug > Components: regions >Reporter: Eric Shu >Assignee: Eric Shu >Priority: Major > Labels: caching-applications > > The NPE can be thrown when a remote node is gone unexpectedly. > {noformat} > Caused by: java.lang.NullPointerException > at > org.apache.geode.internal.cache.PartitionedRegion.handleOldNodes(PartitionedRegion.java:4610) > at > org.apache.geode.internal.cache.PartitionedRegion.fetchEntries(PartitionedRegion.java:4689) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVKeysPR(BaseCommand.java:1191) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVAllKeys(BaseCommand.java:1124) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKeysValuesPolicy(BaseCommand.java:973) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.fillAndSendRegisterInterestResponseChunks(BaseCommand.java:905) > at > org.apache.geode.internal.cache.tier.sockets.command.RegisterInterest61.cmdExecute(RegisterInterest61.java:260) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:183) > at > org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMessage(ServerConnection.java:848) > at > org.apache.geode.internal.cache.tier.sockets.OriginalServerConnection.doOneMessage(OriginalServerConnection.java:72) > at > org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1212) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at > 
org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.lambda$initializeServerConnectionThreadPool$3(AcceptorImpl.java:686) > at > org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119) > at java.lang.Thread.run(Thread.java:748) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8039) update LICENSE for 1.13
[ https://issues.apache.org/jira/browse/GEODE-8039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100085#comment-17100085 ] ASF subversion and git services commented on GEODE-8039: Commit b7563522d17129f679ab789433fd3aa4471baf09 in geode's branch refs/heads/support/1.13 from Owen Nichols [ https://gitbox.apache.org/repos/asf?p=geode.git;h=b756352 ] GEODE-8039: update incorrect versions in LICENSE (#5018) * GEODE-8039: update incorrect versions in LICENSE * add license review as part of the release process and RC pipeline * fix wrapping and capitalization so that binary license is a superset of source license (cherry picked from commit 7ee1042a8393563b4d7655b8bc2d4a77564b91b5) > update LICENSE for 1.13 > --- > > Key: GEODE-8039 > URL: https://issues.apache.org/jira/browse/GEODE-8039 > Project: Geode > Issue Type: Improvement > Components: release >Reporter: Owen Nichols >Priority: Major > > ensure all dependencies we bundle with src distribution are correctly listed > in LICENSE and all dependencies bundled in binary distribution are correctly > listed in geode-assembly/src/main/dist/LICENSE -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8039) update LICENSE for 1.13
[ https://issues.apache.org/jira/browse/GEODE-8039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100086#comment-17100086 ] ASF subversion and git services commented on GEODE-8039: Commit b7563522d17129f679ab789433fd3aa4471baf09 in geode's branch refs/heads/support/1.13 from Owen Nichols [ https://gitbox.apache.org/repos/asf?p=geode.git;h=b756352 ] GEODE-8039: update incorrect versions in LICENSE (#5018) * GEODE-8039: update incorrect versions in LICENSE * add license review as part of the release process and RC pipeline * fix wrapping and capitalization so that binary license is a superset of source license (cherry picked from commit 7ee1042a8393563b4d7655b8bc2d4a77564b91b5) > update LICENSE for 1.13 > --- > > Key: GEODE-8039 > URL: https://issues.apache.org/jira/browse/GEODE-8039 > Project: Geode > Issue Type: Improvement > Components: release >Reporter: Owen Nichols >Priority: Major > > ensure all dependencies we bundle with src distribution are correctly listed > in LICENSE and all dependencies bundled in binary distribution are correctly > listed in geode-assembly/src/main/dist/LICENSE -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (GEODE-8073) NullPointerException thrown in PartitionedRegion.handleOldNodes
[ https://issues.apache.org/jira/browse/GEODE-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Shu reassigned GEODE-8073: --- Assignee: Eric Shu > NullPointerException thrown in PartitionedRegion.handleOldNodes > --- > > Key: GEODE-8073 > URL: https://issues.apache.org/jira/browse/GEODE-8073 > Project: Geode > Issue Type: Bug > Components: regions >Reporter: Eric Shu >Assignee: Eric Shu >Priority: Major > > The NPE can be thrown when a remote node is gone unexpectedly. > {noformat} > Caused by: java.lang.NullPointerException > at > org.apache.geode.internal.cache.PartitionedRegion.handleOldNodes(PartitionedRegion.java:4610) > at > org.apache.geode.internal.cache.PartitionedRegion.fetchEntries(PartitionedRegion.java:4689) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVKeysPR(BaseCommand.java:1191) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVAllKeys(BaseCommand.java:1124) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKeysValuesPolicy(BaseCommand.java:973) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.fillAndSendRegisterInterestResponseChunks(BaseCommand.java:905) > at > org.apache.geode.internal.cache.tier.sockets.command.RegisterInterest61.cmdExecute(RegisterInterest61.java:260) > at > org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:183) > at > org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMessage(ServerConnection.java:848) > at > org.apache.geode.internal.cache.tier.sockets.OriginalServerConnection.doOneMessage(OriginalServerConnection.java:72) > at > org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1212) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at > 
org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.lambda$initializeServerConnectionThreadPool$3(AcceptorImpl.java:686) > at > org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119) > at java.lang.Thread.run(Thread.java:748) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (GEODE-8073) NullPointerException thrown in PartitionedRegion.handleOldNodes
Eric Shu created GEODE-8073: --- Summary: NullPointerException thrown in PartitionedRegion.handleOldNodes Key: GEODE-8073 URL: https://issues.apache.org/jira/browse/GEODE-8073 Project: Geode Issue Type: Bug Components: regions Reporter: Eric Shu The NPE can be thrown when a remote node is gone unexpectedly. {noformat} Caused by: java.lang.NullPointerException at org.apache.geode.internal.cache.PartitionedRegion.handleOldNodes(PartitionedRegion.java:4610) at org.apache.geode.internal.cache.PartitionedRegion.fetchEntries(PartitionedRegion.java:4689) at org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVKeysPR(BaseCommand.java:1191) at org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKVAllKeys(BaseCommand.java:1124) at org.apache.geode.internal.cache.tier.sockets.BaseCommand.handleKeysValuesPolicy(BaseCommand.java:973) at org.apache.geode.internal.cache.tier.sockets.BaseCommand.fillAndSendRegisterInterestResponseChunks(BaseCommand.java:905) at org.apache.geode.internal.cache.tier.sockets.command.RegisterInterest61.cmdExecute(RegisterInterest61.java:260) at org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:183) at org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMessage(ServerConnection.java:848) at org.apache.geode.internal.cache.tier.sockets.OriginalServerConnection.doOneMessage(OriginalServerConnection.java:72) at org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1212) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at org.apache.geode.internal.cache.tier.sockets.AcceptorImpl.lambda$initializeServerConnectionThreadPool$3(AcceptorImpl.java:686) at org.apache.geode.logging.internal.executors.LoggingThreadFactory.lambda$newThread$0(LoggingThreadFactory.java:119) at java.lang.Thread.run(Thread.java:748) {noformat} -- This message was 
sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8071) RebalanceCommand Should Use Daemon Threads
[ https://issues.apache.org/jira/browse/GEODE-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100079#comment-17100079 ] ASF GitHub Bot commented on GEODE-8071: --- jujoramos commented on pull request #5054: URL: https://github.com/apache/geode/pull/5054#issuecomment-624176387 The [first commit ](https://github.com/apache/geode/pull/5054/commits/feb052e71cee12a5e7a2ca72809f9ad07a908e44) is just a refactor of the test class, the actual changes to fix the problem are in the [second commit](https://github.com/apache/geode/pull/5054/commits/bd481554d8add576c885c675c2350474eb398939). I've split them to make the review easier, will `squash` them before merging into `develop`. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > RebalanceCommand Should Use Daemon Threads > -- > > Key: GEODE-8071 > URL: https://issues.apache.org/jira/browse/GEODE-8071 > Project: Geode > Issue Type: Bug > Components: gfsh, management >Affects Versions: 1.13.0 >Reporter: Juan Ramos >Assignee: Juan Ramos >Priority: Major > Labels: caching-applications > > The {{RebalanceCommand}} uses a non-daemon thread to execute its internal > logic: > {code:title=RebalanceCommand.java|borderStyle=solid} > ExecutorService commandExecutors = > LoggingExecutors.newSingleThreadExecutor("RebalanceCommand", false); > {code} > The above prevents the {{locator}} from gracefully shutdown afterwards: > {noformat} > "RebalanceCommand1" #971 prio=5 os_prio=0 tid=0x7f9664011000 nid=0x15905 > waiting on condition [0x7f9651471000] >java.lang.Thread.State: WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0x0007308c36e8> (a > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) > at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) > at > java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) > at > java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Closed] (GEODE-8029) java.lang.IllegalArgumentException: Too large (805306401 expected elements with load factor 0.75)
[ https://issues.apache.org/jira/browse/GEODE-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nabarun Nag closed GEODE-8029. -- > java.lang.IllegalArgumentException: Too large (805306401 expected elements > with load factor 0.75) > - > > Key: GEODE-8029 > URL: https://issues.apache.org/jira/browse/GEODE-8029 > Project: Geode > Issue Type: Bug > Components: configuration, core, gfsh >Affects Versions: 1.9.0 >Reporter: Jagadeesh sivasankaran >Assignee: Juan Ramos >Priority: Major > Labels: GeodeCommons, caching-applications > Fix For: 1.14.0 > > Attachments: Screen Shot 2020-04-27 at 12.21.19 PM.png, Screen Shot > 2020-04-27 at 12.21.19 PM.png, server02.log > > > We have a cluster of three Geode locators and three cache servers running on > CentOS. Today (April 27), after patching our CentOS servers, all locators and > two of the cache servers came up, but one cache server would not start. Here > are the exception details. Please let us know how to resolve the issue below, > and whether any disk-store configuration changes are needed. > > > Starting a Geode Server in /app/provServerHO2... > The > Cache Server process terminated unexpectedly with exit status 1. Please > refer to the log file in /app/provServerHO2 for full details. 
> Exception in thread "main" java.lang.IllegalArgumentException: Too large > (805306401 expected elements with load factor 0.75) > at it.unimi.dsi.fastutil.HashCommon.arraySize(HashCommon.java:222) > at it.unimi.dsi.fastutil.ints.IntOpenHashSet.add(IntOpenHashSet.java:308) > at > org.apache.geode.internal.cache.DiskStoreImpl$OplogEntryIdSet.add(DiskStoreImpl.java:3474) > at org.apache.geode.internal.cache.Oplog.readDelEntry(Oplog.java:3007) > at org.apache.geode.internal.cache.Oplog.recoverDrf(Oplog.java:1500) > at > org.apache.geode.internal.cache.PersistentOplogSet.recoverOplogs(PersistentOplogSet.java:445) > at > org.apache.geode.internal.cache.PersistentOplogSet.recoverRegionsThatAreReady(PersistentOplogSet.java:369) > at > org.apache.geode.internal.cache.DiskStoreImpl.recoverRegionsThatAreReady(DiskStoreImpl.java:2053) > at > org.apache.geode.internal.cache.DiskStoreImpl.initializeIfNeeded(DiskStoreImpl.java:2041) > at > org.apache.geode.internal.cache.DiskStoreImpl.doInitialRecovery(DiskStoreImpl.java:2046) > at > org.apache.geode.internal.cache.DiskStoreFactoryImpl.initializeDiskStore(DiskStoreFactoryImpl.java:184) > at > org.apache.geode.internal.cache.DiskStoreFactoryImpl.create(DiskStoreFactoryImpl.java:150) > at > org.apache.geode.internal.cache.xmlcache.CacheCreation.createDiskStore(CacheCreation.java:794) > at > org.apache.geode.internal.cache.xmlcache.CacheCreation.initializePdxDiskStore(CacheCreation.java:785) > at > org.apache.geode.internal.cache.xmlcache.CacheCreation.create(CacheCreation.java:509) > at > org.apache.geode.internal.cache.xmlcache.CacheXmlParser.create(CacheXmlParser.java:337) > at > org.apache.geode.internal.cache.GemFireCacheImpl.loadCacheXml(GemFireCacheImpl.java:4272) > at > org.apache.geode.internal.cache.ClusterConfigurationLoader.applyClusterXmlConfiguration(ClusterConfigurationLoader.java:197) > at > 
org.apache.geode.internal.cache.GemFireCacheImpl.applyJarAndXmlFromClusterConfig(GemFireCacheImpl.java:1240) > at > org.apache.geode.internal.cache.GemFireCacheImpl.initialize(GemFireCacheImpl.java:1206) > at > org.apache.geode.internal.cache.InternalCacheBuilder.create(InternalCacheBuilder.java:207) > at > org.apache.geode.internal.cache.InternalCacheBuilder.create(InternalCacheBuilder.java:164) > at org.apache.geode.cache.CacheFactory.create(CacheFactory.java:139) > at > org.apache.geode.distributed.internal.DefaultServerLauncherCacheProvider.createCache(DefaultServerLauncherCacheProvider.java:52) > at > org.apache.geode.distributed.ServerLauncher.createCache(ServerLauncher.java:869) > at org.apache.geode.distributed.ServerLauncher.start(ServerLauncher.java:786) > at org.apache.geode.distributed.ServerLauncher.run(ServerLauncher.java:716) > at org.apache.geode.distributed.ServerLauncher.main(ServerLauncher.java:236) > -- This message was sent by Atlassian Jira (v8.3.4#803005)
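The "Too large" failure above is pure arithmetic in fastutil's `HashCommon.arraySize`: the backing open hash table must be a power of two no larger than 2^30 slots. A minimal sketch of that bound (simplified from fastutil's actual check; `fits` is an illustrative helper, not fastutil API):

```java
public class ArraySizeSketch {
    // fastutil-backed open hash sets cap the backing table at 2^30 slots.
    static final int MAX_CAPACITY = 1 << 30; // 1,073,741,824

    // Simplified capacity check: the table needs at least
    // ceil(expected / loadFactor) slots before rounding up to a power of two.
    static boolean fits(int expected, double loadFactor) {
        long needed = (long) Math.ceil(expected / loadFactor);
        return needed <= MAX_CAPACITY;
    }

    public static void main(String[] args) {
        // 805,306,401 expected elements at load factor 0.75 needs
        // 1,073,741,868 slots, just over 2^30 -- hence the exception.
        System.out.println(fits(805_306_401, 0.75));
    }
}
```

Run against the numbers in the stack trace, this reproduces the rejection without any Geode involvement.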
[jira] [Updated] (GEODE-8064) DeploymentSemanticVersionJarDUnitTest.java (GEODE-7421) is failing.
[ https://issues.apache.org/jira/browse/GEODE-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Hanson updated GEODE-8064: --- Labels: flaky (was: ) > DeploymentSemanticVersionJarDUnitTest.java (GEODE-7421) is failing. > --- > > Key: GEODE-8064 > URL: https://issues.apache.org/jira/browse/GEODE-8064 > Project: Geode > Issue Type: Bug > Components: management >Reporter: Mark Hanson >Assignee: Joris Melchior >Priority: Major > Labels: flaky > Fix For: 1.13.0 > > > The following tests are failing in the mass test run. > {noformat} > deploySameJarNameWithDifferentContent > > https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-mass-test-run-main/jobs/DistributedTestOpenJDK8/builds/2541 > deploy > > https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-mass-test-run-main/jobs/DistributedTestOpenJDK8/builds/2541 > deployWithPlainWillCleanSemanticVersion > > https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-mass-test-run-main/jobs/DistributedTestOpenJDK8/builds/2541 > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8071) RebalanceCommand Should Use Daemon Threads
[ https://issues.apache.org/jira/browse/GEODE-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100075#comment-17100075 ] ASF GitHub Bot commented on GEODE-8071: --- jujoramos opened a new pull request #5054: URL: https://github.com/apache/geode/pull/5054 GEODE-8071: Use daemon threads in RebalanceCommand Changed the ExecutorService within RebalanceCommand to use daemon threads, otherwise the locator refuses to gracefully shutdown. - Fixed minor warnings. - Added distributed tests. Thank you for submitting a contribution to Apache Geode. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [X] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [X] Has your PR been rebased against the latest commit within the target branch (typically `develop`)? - [ ] Is your initial contribution a single, squashed commit? - [X] Does `gradlew build` run cleanly? - [X] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? ### Note: Please ensure that once the PR is submitted, check Concourse for build issues and submit an update to your PR as soon as possible. If you need help, please send an email to d...@geode.apache.org. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org > RebalanceCommand Should Use Daemon Threads > -- > > Key: GEODE-8071 > URL: https://issues.apache.org/jira/browse/GEODE-8071 > Project: Geode > Issue Type: Bug > Components: gfsh, management >Affects Versions: 1.13.0 >Reporter: Juan Ramos >Assignee: Juan Ramos >Priority: Major > Labels: caching-applications > > The {{RebalanceCommand}} uses a non-daemon thread to execute its internal > logic: > {code:title=RebalanceCommand.java|borderStyle=solid} > ExecutorService commandExecutors = > LoggingExecutors.newSingleThreadExecutor("RebalanceCommand", false); > {code} > The above prevents the {{locator}} from gracefully shutdown afterwards: > {noformat} > "RebalanceCommand1" #971 prio=5 os_prio=0 tid=0x7f9664011000 nid=0x15905 > waiting on condition [0x7f9651471000] >java.lang.Thread.State: WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0x0007308c36e8> (a > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) > at > java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) > at > java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8039) update LICENSE for 1.13
[ https://issues.apache.org/jira/browse/GEODE-8039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100071#comment-17100071 ] ASF subversion and git services commented on GEODE-8039: Commit 7ee1042a8393563b4d7655b8bc2d4a77564b91b5 in geode's branch refs/heads/develop from Owen Nichols [ https://gitbox.apache.org/repos/asf?p=geode.git;h=7ee1042 ] GEODE-8039: update incorrect versions in LICENSE (#5018) * GEODE-8039: update incorrect versions in LICENSE * add license review as part of the release process and RC pipeline * fix wrapping and capitalization so that binary license is a superset of source license > update LICENSE for 1.13 > --- > > Key: GEODE-8039 > URL: https://issues.apache.org/jira/browse/GEODE-8039 > Project: Geode > Issue Type: Improvement > Components: release >Reporter: Owen Nichols >Priority: Major > > ensure all dependencies we bundle with src distribution are correctly listed > in LICENSE and all dependencies bundled in binary distribution are correctly > listed in geode-assembly/src/main/dist/LICENSE -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8072) When cache is closing, the lucene query might still on-going, some NPE could happen
[ https://issues.apache.org/jira/browse/GEODE-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100064#comment-17100064 ] ASF subversion and git services commented on GEODE-8072: Commit d15076e641a94030027e3a124a113beb195ed8bf in geode's branch refs/heads/feature/GEODE-8072 from zhouxh [ https://gitbox.apache.org/repos/asf?p=geode.git;h=d15076e ] GEODE-8072: check the null and stop the on-going query function when cache is closing > When cache is closing, the lucene query might still on-going, some NPE could > happen > --- > > Key: GEODE-8072 > URL: https://issues.apache.org/jira/browse/GEODE-8072 > Project: Geode > Issue Type: Improvement >Reporter: Xiaojian Zhou >Priority: Major > > when the cache is closing, what detected recently is: > RROR util.TestException: Got unexpected exception > java.lang.NullPointerException > at > org.apache.geode.internal.cache.execute.InternalFunctionExecutionServiceImpl.onRegion(InternalFunctionExecutionServiceImpl.java:120) > at > org.apache.geode.cache.execute.FunctionService.onRegion(FunctionService.java:76) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.onRegion(PageableLuceneQueryResultsImpl.java:116) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.getValues(PageableLuceneQueryResultsImpl.java:110) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.getHitEntries(PageableLuceneQueryResultsImpl.java:91) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.advancePage(PageableLuceneQueryResultsImpl.java:139) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.hasNext(PageableLuceneQueryResultsImpl.java:148) > It's not caused by any recently code changes, it's just a deep buried race > condition triggered. > I propose a simple fix to just check the null and throw an exception which > could be handled. 
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8072) When cache is closing, the lucene query might still on-going, some NPE could happen
[ https://issues.apache.org/jira/browse/GEODE-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100065#comment-17100065 ] ASF GitHub Bot commented on GEODE-8072: --- gesterzhou opened a new pull request #5053: URL: https://github.com/apache/geode/pull/5053 …cache is closing Thank you for submitting a contribution to Apache Geode. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Has your PR been rebased against the latest commit within the target branch (typically `develop`)? - [ ] Is your initial contribution a single, squashed commit? - [ ] Does `gradlew build` run cleanly? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? ### Note: Please ensure that once the PR is submitted, check Concourse for build issues and submit an update to your PR as soon as possible. If you need help, please send an email to d...@geode.apache.org. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org > When cache is closing, the lucene query might still on-going, some NPE could > happen > --- > > Key: GEODE-8072 > URL: https://issues.apache.org/jira/browse/GEODE-8072 > Project: Geode > Issue Type: Improvement >Reporter: Xiaojian Zhou >Priority: Major > > when the cache is closing, what detected recently is: > RROR util.TestException: Got unexpected exception > java.lang.NullPointerException > at > org.apache.geode.internal.cache.execute.InternalFunctionExecutionServiceImpl.onRegion(InternalFunctionExecutionServiceImpl.java:120) > at > org.apache.geode.cache.execute.FunctionService.onRegion(FunctionService.java:76) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.onRegion(PageableLuceneQueryResultsImpl.java:116) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.getValues(PageableLuceneQueryResultsImpl.java:110) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.getHitEntries(PageableLuceneQueryResultsImpl.java:91) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.advancePage(PageableLuceneQueryResultsImpl.java:139) > at > org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.hasNext(PageableLuceneQueryResultsImpl.java:148) > It's not caused by any recently code changes, it's just a deep buried race > condition triggered. > I propose a simple fix to just check the null and throw an exception which > could be handled. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (GEODE-8072) When cache is closing, the lucene query might still on-going, some NPE could happen
Xiaojian Zhou created GEODE-8072: Summary: When cache is closing, the lucene query might still on-going, some NPE could happen Key: GEODE-8072 URL: https://issues.apache.org/jira/browse/GEODE-8072 Project: Geode Issue Type: Improvement Reporter: Xiaojian Zhou When the cache is closing, what was detected recently is: ERROR util.TestException: Got unexpected exception java.lang.NullPointerException at org.apache.geode.internal.cache.execute.InternalFunctionExecutionServiceImpl.onRegion(InternalFunctionExecutionServiceImpl.java:120) at org.apache.geode.cache.execute.FunctionService.onRegion(FunctionService.java:76) at org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.onRegion(PageableLuceneQueryResultsImpl.java:116) at org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.getValues(PageableLuceneQueryResultsImpl.java:110) at org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.getHitEntries(PageableLuceneQueryResultsImpl.java:91) at org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.advancePage(PageableLuceneQueryResultsImpl.java:139) at org.apache.geode.cache.lucene.internal.PageableLuceneQueryResultsImpl.hasNext(PageableLuceneQueryResultsImpl.java:148) It's not caused by any recent code changes; it's just a deeply buried race condition being triggered. I propose a simple fix: check for null and throw an exception that can be handled. -- This message was 
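The proposed fix (check for null and throw a handleable exception) can be sketched as a plain guard. The class and method names below are illustrative stand-ins, not Geode's actual API:

```java
public class NullGuardSketch {
    // Illustrative stand-in for the function-execution service reference
    // that can become null once the cache starts closing.
    static <T> T requireOpen(T serviceOrNull) {
        if (serviceOrNull == null) {
            // A descriptive, catchable exception instead of a bare NPE
            // escaping from deep inside the query path.
            throw new IllegalStateException("cache is closing; lucene query cannot continue");
        }
        return serviceOrNull;
    }

    public static void main(String[] args) {
        try {
            requireOpen(null); // simulates the race: cache closed mid-query
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The caller can then catch the exception at the query boundary, which is exactly what the raw NullPointerException in the report did not allow.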
[jira] [Commented] (GEODE-8039) update LICENSE for 1.13
[ https://issues.apache.org/jira/browse/GEODE-8039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100058#comment-17100058 ] ASF GitHub Bot commented on GEODE-8039: --- metatype commented on a change in pull request #5018: URL: https://github.com/apache/geode/pull/5018#discussion_r420249646 ## File path: gradle/java.gradle ## @@ -73,9 +73,9 @@ gradle.taskGraph.whenReady({ graph -> } } jar.metaInf { - from("$rootDir/LICENSE") + from("$rootDir/geode-assembly/src/main/dist/LICENSE") if (jar.source.filter({ it.name.contains('NOTICE') }).empty) { -from("$rootDir/NOTICE") +from("$rootDir/geode-assembly/src/main/dist/NOTICE") Review comment: Makes sense This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > update LICENSE for 1.13 > --- > > Key: GEODE-8039 > URL: https://issues.apache.org/jira/browse/GEODE-8039 > Project: Geode > Issue Type: Improvement > Components: release >Reporter: Owen Nichols >Priority: Major > > ensure all dependencies we bundle with src distribution are correctly listed > in LICENSE and all dependencies bundled in binary distribution are correctly > listed in geode-assembly/src/main/dist/LICENSE -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8039) update LICENSE for 1.13
[ https://issues.apache.org/jira/browse/GEODE-8039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100056#comment-17100056 ] ASF GitHub Bot commented on GEODE-8039: --- metatype commented on a change in pull request #5018: URL: https://github.com/apache/geode/pull/5018#discussion_r420248906 ## File path: LICENSE ## @@ -280,8 +280,6 @@ Apache Geode bundles the following files under the MIT license: Foundation and other contributors, http://jquery.org - jScrollPane (http://jscrollpane.kelvinluck.com/), Copyright (c) 2010 Kelvin Luck - - matchMedia() polyfill (https://github.com/paulirish/matchMedia.js), Review comment: Nice work! Thanks for filling in the gaps in my memory :-) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > update LICENSE for 1.13 > --- > > Key: GEODE-8039 > URL: https://issues.apache.org/jira/browse/GEODE-8039 > Project: Geode > Issue Type: Improvement > Components: release >Reporter: Owen Nichols >Priority: Major > > ensure all dependencies we bundle with src distribution are correctly listed > in LICENSE and all dependencies bundled in binary distribution are correctly > listed in geode-assembly/src/main/dist/LICENSE -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (GEODE-8071) RebalanceCommand Should Use Daemon Threads
[ https://issues.apache.org/jira/browse/GEODE-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Juan Ramos reassigned GEODE-8071: - Assignee: Juan Ramos > RebalanceCommand Should Use Daemon Threads > -- > > Key: GEODE-8071 > URL: https://issues.apache.org/jira/browse/GEODE-8071 > Project: Geode > Issue Type: Bug > Components: gfsh, management >Reporter: Juan Ramos >Assignee: Juan Ramos >Priority: Major > > The {{RebalanceCommand}} uses a non-daemon thread to execute its internal > logic: > {code:title=RebalanceCommand.java|borderStyle=solid} > ExecutorService commandExecutors = > LoggingExecutors.newSingleThreadExecutor("RebalanceCommand", false); > {code} > The above prevents the {{locator}} from gracefully shutdown afterwards: > {noformat} > "RebalanceCommand1" #971 prio=5 os_prio=0 tid=0x7f9664011000 nid=0x15905 > waiting on condition [0x7f9651471000] >java.lang.Thread.State: WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0x0007308c36e8> (a > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) > at > java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) > at > java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (GEODE-8071) RebalanceCommand Should Use Daemon Threads
Juan Ramos created GEODE-8071: - Summary: RebalanceCommand Should Use Daemon Threads Key: GEODE-8071 URL: https://issues.apache.org/jira/browse/GEODE-8071 Project: Geode Issue Type: Bug Components: gfsh, management Reporter: Juan Ramos The {{RebalanceCommand}} uses a non-daemon thread to execute its internal logic: {code:title=RebalanceCommand.java|borderStyle=solid} ExecutorService commandExecutors = LoggingExecutors.newSingleThreadExecutor("RebalanceCommand", false); {code} The above prevents the {{locator}} from shutting down gracefully afterwards: {noformat} "RebalanceCommand1" #971 prio=5 os_prio=0 tid=0x7f9664011000 nid=0x15905 waiting on condition [0x7f9651471000] java.lang.Thread.State: WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x0007308c36e8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
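The fix amounts to flipping the daemon flag passed to `LoggingExecutors.newSingleThreadExecutor`. A minimal sketch of the same pattern with plain `java.util.concurrent` (no Geode classes): a single-thread executor whose worker is a daemon, so an idle pool cannot keep the JVM alive.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DaemonExecutorSketch {
    // Equivalent in spirit to newSingleThreadExecutor("RebalanceCommand", true):
    // the worker thread is marked as a daemon so it never blocks shutdown.
    static ExecutorService newSingleDaemonThreadExecutor(String name) {
        return Executors.newSingleThreadExecutor(runnable -> {
            Thread thread = new Thread(runnable, name);
            thread.setDaemon(true); // the ticket's `false` creates a non-daemon worker
            return thread;
        });
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = newSingleDaemonThreadExecutor("RebalanceCommand");
        // Verify the worker really is a daemon thread.
        Callable<Boolean> task = () -> Thread.currentThread().isDaemon();
        boolean daemon = pool.submit(task).get();
        System.out.println(daemon);
        pool.shutdown();
    }
}
```

With a non-daemon worker, the parked `LinkedBlockingQueue.take` shown in the thread dump above is enough to hold the locator process open.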
[jira] [Updated] (GEODE-8071) RebalanceCommand Should Use Daemon Threads
[ https://issues.apache.org/jira/browse/GEODE-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Juan Ramos updated GEODE-8071: -- Labels: caching-applications (was: ) > RebalanceCommand Should Use Daemon Threads > -- > > Key: GEODE-8071 > URL: https://issues.apache.org/jira/browse/GEODE-8071 > Project: Geode > Issue Type: Bug > Components: gfsh, management >Affects Versions: 1.13.0 >Reporter: Juan Ramos >Assignee: Juan Ramos >Priority: Major > Labels: caching-applications > > The {{RebalanceCommand}} uses a non-daemon thread to execute its internal > logic: > {code:title=RebalanceCommand.java|borderStyle=solid} > ExecutorService commandExecutors = > LoggingExecutors.newSingleThreadExecutor("RebalanceCommand", false); > {code} > The above prevents the {{locator}} from gracefully shutdown afterwards: > {noformat} > "RebalanceCommand1" #971 prio=5 os_prio=0 tid=0x7f9664011000 nid=0x15905 > waiting on condition [0x7f9651471000] >java.lang.Thread.State: WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0x0007308c36e8> (a > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) > at > java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) > at > java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (GEODE-8071) RebalanceCommand Should Use Daemon Threads
[ https://issues.apache.org/jira/browse/GEODE-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Juan Ramos updated GEODE-8071: -- Affects Version/s: 1.13.0 > RebalanceCommand Should Use Daemon Threads > -- > > Key: GEODE-8071 > URL: https://issues.apache.org/jira/browse/GEODE-8071 > Project: Geode > Issue Type: Bug > Components: gfsh, management >Affects Versions: 1.13.0 >Reporter: Juan Ramos >Assignee: Juan Ramos >Priority: Major > > The {{RebalanceCommand}} uses a non-daemon thread to execute its internal > logic: > {code:title=RebalanceCommand.java|borderStyle=solid} > ExecutorService commandExecutors = > LoggingExecutors.newSingleThreadExecutor("RebalanceCommand", false); > {code} > The above prevents the {{locator}} from shutting down gracefully afterwards: > {noformat} > "RebalanceCommand1" #971 prio=5 os_prio=0 tid=0x7f9664011000 nid=0x15905 > waiting on condition [0x7f9651471000] >java.lang.Thread.State: WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0x0007308c36e8> (a > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) > at > java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) > at > java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8070) add TLSv1.3 to "known" secure communications protocols
[ https://issues.apache.org/jira/browse/GEODE-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100031#comment-17100031 ] Jacob Barrett commented on GEODE-8070: -- I see two issues with the current approach. 1) If there is an insecure provider that still supplies SSL (v1), then we will create a context from that provider and thus be limited to the set of protocols it supports when negotiating connections. Given that it's unlikely for alternative providers to be installed, it's still odd that we are scanning this list from least secure to most secure; this should be reversed. 2) Scanning this list at all for a context doesn't seem productive. Assuming we correct issue 1 and scan backwards, or there is only one provider supplied, in all cases we are likely to get the highest-priority provider installed. So why not just {{SSLContext.getInstance()}} without limiting based on "known algorithms"? If we wanted to allow users to specify alternatives, it would be better for us to provide a "provider" property they could adjust. > add TLSv1.3 to "known" secure communications protocols > -- > > Key: GEODE-8070 > URL: https://issues.apache.org/jira/browse/GEODE-8070 > Project: Geode > Issue Type: Bug > Components: membership >Reporter: Bruce J Schuchardt >Priority: Major > > SSLUtil has a list of "known" TLS protocols. It should support TLSv1.3. > > {noformat} > // lookup known algorithms > String[] knownAlgorithms = {"SSL", "SSLv2", "SSLv3", "TLS", "TLSv1", > "TLSv1.1", "TLSv1.2"}; > for (String algo : knownAlgorithms) { > try { > sslContext = SSLContext.getInstance(algo); > break; > } catch (NoSuchAlgorithmException e) { > // continue > } > } {noformat} > We probably can't fully test this change since not all JDKs we test with > support v1.3 at this time. -- This message was sent by Atlassian Jira (v8.3.4#803005)
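[Editor's note] The reversal suggested in the comment above can be sketched as follows: try the most secure protocol name first, adding TLSv1.3 to the list, and fall back only when an algorithm is unsupported. This is an illustrative sketch of the suggestion, not Geode's actual SSLUtil code; which protocol you get depends on the installed JDK providers:

```java
import java.security.NoSuchAlgorithmException;
import javax.net.ssl.SSLContext;

public class SslContextLookupSketch {

  // Scan from most secure to least secure, the opposite of the loop quoted in
  // the ticket. TLSv1.3 is tried first; on JDKs without it, NoSuchAlgorithmException
  // is caught and the next entry is tried.
  static SSLContext findContext() {
    String[] preferred = {"TLSv1.3", "TLSv1.2", "TLSv1.1", "TLSv1", "TLS"};
    for (String algo : preferred) {
      try {
        return SSLContext.getInstance(algo);
      } catch (NoSuchAlgorithmException e) {
        // algorithm not supplied by any installed provider; try the next one
      }
    }
    throw new IllegalStateException("no TLS provider available");
  }

  public static void main(String[] args) {
    SSLContext ctx = findContext();
    // On a modern JDK this is typically "TLSv1.3", but that is JDK-dependent.
    System.out.println(ctx.getProtocol());
  }
}
```

Note that {{SSLContext.getInstance(...)}} only selects the context algorithm; the contexts returned for "TLS" and "TLSv1.3" on a given JDK usually negotiate from the same underlying protocol set, which is part of why the comment questions scanning at all.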
[jira] [Commented] (GEODE-8004) Regression Introduced Through GEODE-7565
[ https://issues.apache.org/jira/browse/GEODE-8004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100026#comment-17100026 ] ASF GitHub Bot commented on GEODE-8004: --- bschuchardt commented on a change in pull request #4978: URL: https://github.com/apache/geode/pull/4978#discussion_r420211074 ## File path: geode-core/src/main/java/org/apache/geode/distributed/internal/ServerLocationAndMemberId.java ## @@ -0,0 +1,65 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more contributor license + * agreements. See the NOTICE file distributed with this work for additional information regarding + * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. You may obtain a + * copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software distributed under the License + * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express + * or implied. See the License for the specific language governing permissions and limitations under + * the License. 
+ */ +package org.apache.geode.distributed.internal; + +public class ServerLocationAndMemberId { + + private final ServerLocation serverLocation; + private final String memberId; + + public ServerLocationAndMemberId(ServerLocation serverLocation, String memberId) { +this.serverLocation = serverLocation; +this.memberId = memberId; + } + + public ServerLocation getServerLocation() { +return this.serverLocation; + } + + public String getMemberId() { +return this.memberId; + } + + @Override + public boolean equals(Object obj) { +if (this == obj) + return true; +if (obj == null) + return false; +if (!(obj instanceof ServerLocationAndMemberId)) + return false; +final ServerLocationAndMemberId other = (ServerLocationAndMemberId) obj; + +if (!this.serverLocation.equals(other.getServerLocation())) { + return false; +} + +return this.memberId.equals(other.getMemberId()); Review comment: There is a null check for memberId in hashCode() but not in equals(). If it's possible for memberId to be null then you should add a null check to equals(). ## File path: geode-core/src/main/java/org/apache/geode/internal/cache/GridAdvisor.java ## @@ -418,18 +418,24 @@ public String toString() { public int hashCode() { final String thisHost = this.gp.getHost(); final int thisPort = this.gp.getPort(); - return thisHost != null ? (thisHost.hashCode() ^ thisPort) : thisPort; + final String thisMemberId = this.getMemberId().getUniqueId(); + final int thisMemberIdHashCode = (thisMemberId != null) ? thisMemberId.hashCode() : 0; + return thisHost != null ? 
(thisHost.hashCode() ^ thisPort) + thisMemberIdHashCode + : thisPort + thisMemberIdHashCode; } @Override public boolean equals(Object obj) { if (obj instanceof GridProfileId) { final GridProfileId other = (GridProfileId) obj; + if (this.gp.getPort() == other.gp.getPort()) { final String thisHost = this.gp.getHost(); final String otherHost = other.gp.getHost(); if (thisHost != null) { -return thisHost.equals(otherHost); +if (thisHost.equals(otherHost)) { + return this.getMemberId().getUniqueId().equals(other.getMemberId().getUniqueId()); Review comment: This pair of equals()/hashCode() methods has the same problem. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Regression Introduced Through GEODE-7565 > > > Key: GEODE-8004 > URL: https://issues.apache.org/jira/browse/GEODE-8004 > Project: Geode > Issue Type: Bug > Components: client/server >Reporter: Juan Ramos >Assignee: Juan Ramos >Priority: Major > Labels: GeodeCommons > > Intermittent errors were observed while executing some internal tests and > commit > [dd23ee8|https://github.com/apache/geode/commit/dd23ee8200cba67cea82e57e2e4ccedcdf9e8266] > was determined to be responsible. As of yet, no local reproduction of the > issue is available, but work is ongoing to provide a test that can be used to > debug the issue (a [PR|https://github.com/apache/geode/pull/4974] to revert > of the original commit has been opened and will be me
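[Editor's note] The review comments above ask for equals()/hashCode() pairs that treat a possibly-null {{memberId}} consistently. A minimal sketch of that pattern using {{java.util.Objects}}; the field types are simplified to {{String}} here, so this is illustrative rather than the actual {{ServerLocationAndMemberId}} class:

```java
import java.util.Objects;

public class ServerLocationAndMemberIdSketch {

  // Simplified fields: the real class holds a ServerLocation plus a String memberId.
  private final String serverLocation;
  private final String memberId;

  ServerLocationAndMemberIdSketch(String serverLocation, String memberId) {
    this.serverLocation = serverLocation;
    this.memberId = memberId;
  }

  @Override
  public boolean equals(Object obj) {
    if (this == obj) {
      return true;
    }
    if (!(obj instanceof ServerLocationAndMemberIdSketch)) {
      return false;
    }
    ServerLocationAndMemberIdSketch other = (ServerLocationAndMemberIdSketch) obj;
    // Objects.equals() is null-safe, so a null memberId cannot throw an NPE here,
    // which addresses the reviewer's point about equals() lacking the null check
    // that hashCode() already has.
    return Objects.equals(serverLocation, other.serverLocation)
        && Objects.equals(memberId, other.memberId);
  }

  @Override
  public int hashCode() {
    // Objects.hash() treats null fields as 0, keeping the equals()/hashCode()
    // contract: equal objects always produce equal hash codes.
    return Objects.hash(serverLocation, memberId);
  }

  public static void main(String[] args) {
    ServerLocationAndMemberIdSketch a = new ServerLocationAndMemberIdSketch("host:40404", null);
    ServerLocationAndMemberIdSketch b = new ServerLocationAndMemberIdSketch("host:40404", null);
    System.out.println(a.equals(b) && a.hashCode() == b.hashCode()); // true
  }
}
```

Using {{Objects.equals}}/{{Objects.hash}} in both methods makes the null handling identical by construction, so the two methods cannot drift apart the way the review flags.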
[jira] [Created] (GEODE-8070) add TLSv1.3 to "known" secure communications protocols
Bruce J Schuchardt created GEODE-8070: - Summary: add TLSv1.3 to "known" secure communications protocols Key: GEODE-8070 URL: https://issues.apache.org/jira/browse/GEODE-8070 Project: Geode Issue Type: Bug Components: membership Reporter: Bruce J Schuchardt SSLUtil has a list of "known" TLS protocols. It should support TLSv1.3. {noformat} // lookup known algorithms String[] knownAlgorithms = {"SSL", "SSLv2", "SSLv3", "TLS", "TLSv1", "TLSv1.1", "TLSv1.2"}; for (String algo : knownAlgorithms) { try { sslContext = SSLContext.getInstance(algo); break; } catch (NoSuchAlgorithmException e) { // continue } } {noformat} We probably can't fully test this change since not all JDKs we test with support v1.3 at this time. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8058) Update implementation of EXPIRE, PEXPIRE, EXPIREAT, and PEXPIREAT to be HA
[ https://issues.apache.org/jira/browse/GEODE-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1706#comment-1706 ] ASF GitHub Bot commented on GEODE-8058: --- sabbeyPivotal commented on a change in pull request #5036: URL: https://github.com/apache/geode/pull/5036#discussion_r420192791 ## File path: geode-redis/src/distributedTest/java/org/apache/geode/redis/executors/ExpireDUnitTest.java ## @@ -0,0 +1,233 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more contributor license + * agreements. See the NOTICE file distributed with this work for additional information regarding + * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. You may obtain a + * copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software distributed under the License + * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express + * or implied. See the License for the specific language governing permissions and limitations under + * the License. 
+ */ + +package org.apache.geode.redis.executors; + +import static org.apache.geode.distributed.ConfigurationProperties.MAX_WAIT_TIME_RECONNECT; +import static org.apache.geode.distributed.ConfigurationProperties.REDIS_BIND_ADDRESS; +import static org.apache.geode.distributed.ConfigurationProperties.REDIS_PORT; +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.Properties; + +import org.junit.After; +import org.junit.AfterClass; +import org.junit.BeforeClass; +import org.junit.ClassRule; +import org.junit.Ignore; +import org.junit.Test; +import redis.clients.jedis.Jedis; + +import org.apache.geode.internal.AvailablePortHelper; +import org.apache.geode.internal.cache.InternalCache; +import org.apache.geode.redis.internal.ByteArrayWrapper; +import org.apache.geode.redis.internal.GeodeRedisService; +import org.apache.geode.redis.internal.executor.TTLExecutor; +import org.apache.geode.test.awaitility.GeodeAwaitility; +import org.apache.geode.test.dunit.rules.ClusterStartupRule; +import org.apache.geode.test.dunit.rules.MemberVM; + +@Ignore("GEODE-8058: this test needs to pass to have feature parity with native redis") +public class ExpireDUnitTest { + + @ClassRule + public static ClusterStartupRule clusterStartUp = new ClusterStartupRule(4); + + static final String LOCAL_HOST = "127.0.0.1"; + static int[] availablePorts; + private static final int JEDIS_TIMEOUT = + Math.toIntExact(GeodeAwaitility.getTimeout().toMillis()); + static Jedis jedis1; + static Jedis jedis2; + static Jedis jedis3; + + static Properties locatorProperties; + static Properties serverProperties1; + static Properties serverProperties2; + static Properties serverProperties3; + + static MemberVM locator; + static MemberVM server1; + static MemberVM server2; + static MemberVM server3; + + @BeforeClass + public static void classSetup() { + +availablePorts = AvailablePortHelper.getRandomAvailableTCPPorts(3); + +locatorProperties = new Properties(); +serverProperties1 = new 
Properties(); +serverProperties2 = new Properties(); +serverProperties3 = new Properties(); + +locatorProperties.setProperty(MAX_WAIT_TIME_RECONNECT, "15000"); + +serverProperties1.setProperty(REDIS_PORT, Integer.toString(availablePorts[0])); +serverProperties1.setProperty(REDIS_BIND_ADDRESS, LOCAL_HOST); + +serverProperties2.setProperty(REDIS_PORT, Integer.toString(availablePorts[1])); +serverProperties2.setProperty(REDIS_BIND_ADDRESS, LOCAL_HOST); + +serverProperties3.setProperty(REDIS_PORT, Integer.toString(availablePorts[2])); +serverProperties3.setProperty(REDIS_BIND_ADDRESS, LOCAL_HOST); + +locator = clusterStartUp.startLocatorVM(0, locatorProperties); +server1 = clusterStartUp.startServerVM(1, serverProperties1, locator.getPort()); +server2 = clusterStartUp.startServerVM(2, serverProperties2, locator.getPort()); +server3 = clusterStartUp.startServerVM(3, serverProperties3, locator.getPort()); + +jedis1 = new Jedis(LOCAL_HOST, availablePorts[0], JEDIS_TIMEOUT); +jedis2 = new Jedis(LOCAL_HOST, availablePorts[1], JEDIS_TIMEOUT); +jedis3 = new Jedis(LOCAL_HOST, availablePorts[2], JEDIS_TIMEOUT); + } + + @After + public void testCleanUp() { +jedis1.flushAll(); + } + + @AfterClass + public static void tearDown() { +jedis1.disconnect(); +jedis2.disconnect(); +jedis3.disconnect(); + +server1.stop(); +server2.stop(); +server3.stop(); + } + + @Test + public void expireOnOneServer_shouldPropagateToAllServers() { Review comment: Yeah, they don't actually work yet (there is another story for t
[jira] [Commented] (GEODE-8058) Update implementation of EXPIRE, PEXPIRE, EXPIREAT, and PEXPIREAT to be HA
[ https://issues.apache.org/jira/browse/GEODE-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1705#comment-1705 ] ASF GitHub Bot commented on GEODE-8058: --- sabbeyPivotal commented on a change in pull request #5036: URL: https://github.com/apache/geode/pull/5036#discussion_r420192281 ## File path: geode-redis/src/distributedTest/java/org/apache/geode/redis/executors/ExpireDUnitTest.java ## @@ -0,0 +1,233 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more contributor license + * agreements. See the NOTICE file distributed with this work for additional information regarding + * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. You may obtain a + * copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software distributed under the License + * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express + * or implied. See the License for the specific language governing permissions and limitations under + * the License. 
+ */ + +package org.apache.geode.redis.executors; + +import static org.apache.geode.distributed.ConfigurationProperties.MAX_WAIT_TIME_RECONNECT; +import static org.apache.geode.distributed.ConfigurationProperties.REDIS_BIND_ADDRESS; +import static org.apache.geode.distributed.ConfigurationProperties.REDIS_PORT; +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.Properties; + +import org.junit.After; +import org.junit.AfterClass; +import org.junit.BeforeClass; +import org.junit.ClassRule; +import org.junit.Ignore; +import org.junit.Test; +import redis.clients.jedis.Jedis; + +import org.apache.geode.internal.AvailablePortHelper; +import org.apache.geode.internal.cache.InternalCache; +import org.apache.geode.redis.internal.ByteArrayWrapper; +import org.apache.geode.redis.internal.GeodeRedisService; +import org.apache.geode.redis.internal.executor.TTLExecutor; +import org.apache.geode.test.awaitility.GeodeAwaitility; +import org.apache.geode.test.dunit.rules.ClusterStartupRule; +import org.apache.geode.test.dunit.rules.MemberVM; + +@Ignore("GEODE-8058: this test needs to pass to have feature parity with native redis") +public class ExpireDUnitTest { + + @ClassRule + public static ClusterStartupRule clusterStartUp = new ClusterStartupRule(4); + + static final String LOCAL_HOST = "127.0.0.1"; + static int[] availablePorts; + private static final int JEDIS_TIMEOUT = + Math.toIntExact(GeodeAwaitility.getTimeout().toMillis()); + static Jedis jedis1; + static Jedis jedis2; + static Jedis jedis3; + + static Properties locatorProperties; + static Properties serverProperties1; + static Properties serverProperties2; + static Properties serverProperties3; + + static MemberVM locator; + static MemberVM server1; + static MemberVM server2; + static MemberVM server3; + + @BeforeClass + public static void classSetup() { + +availablePorts = AvailablePortHelper.getRandomAvailableTCPPorts(3); + +locatorProperties = new Properties(); +serverProperties1 = new 
Properties(); +serverProperties2 = new Properties(); +serverProperties3 = new Properties(); + +locatorProperties.setProperty(MAX_WAIT_TIME_RECONNECT, "15000"); + +serverProperties1.setProperty(REDIS_PORT, Integer.toString(availablePorts[0])); +serverProperties1.setProperty(REDIS_BIND_ADDRESS, LOCAL_HOST); + +serverProperties2.setProperty(REDIS_PORT, Integer.toString(availablePorts[1])); +serverProperties2.setProperty(REDIS_BIND_ADDRESS, LOCAL_HOST); + +serverProperties3.setProperty(REDIS_PORT, Integer.toString(availablePorts[2])); +serverProperties3.setProperty(REDIS_BIND_ADDRESS, LOCAL_HOST); + +locator = clusterStartUp.startLocatorVM(0, locatorProperties); +server1 = clusterStartUp.startServerVM(1, serverProperties1, locator.getPort()); +server2 = clusterStartUp.startServerVM(2, serverProperties2, locator.getPort()); +server3 = clusterStartUp.startServerVM(3, serverProperties3, locator.getPort()); + +jedis1 = new Jedis(LOCAL_HOST, availablePorts[0], JEDIS_TIMEOUT); +jedis2 = new Jedis(LOCAL_HOST, availablePorts[1], JEDIS_TIMEOUT); +jedis3 = new Jedis(LOCAL_HOST, availablePorts[2], JEDIS_TIMEOUT); + } + + @After + public void testCleanUp() { +jedis1.flushAll(); + } + + @AfterClass + public static void tearDown() { +jedis1.disconnect(); +jedis2.disconnect(); +jedis3.disconnect(); + +server1.stop(); +server2.stop(); +server3.stop(); + } + + @Test + public void expireOnOneServer_shouldPropagateToAllServers() { +String key = "key"; + +jedis1.set(key, "value"); +jedis1.expire(key, 20); +
[jira] [Commented] (GEODE-8058) Update implementation of EXPIRE, PEXPIRE, EXPIREAT, and PEXPIREAT to be HA
[ https://issues.apache.org/jira/browse/GEODE-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1704#comment-1704 ] ASF GitHub Bot commented on GEODE-8058: --- sabbeyPivotal commented on a change in pull request #5036: URL: https://github.com/apache/geode/pull/5036#discussion_r420190842 ## File path: geode-redis/src/distributedTest/java/org/apache/geode/redis/executors/ExpireDUnitTest.java ## @@ -0,0 +1,233 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more contributor license + * agreements. See the NOTICE file distributed with this work for additional information regarding + * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. You may obtain a + * copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software distributed under the License + * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express + * or implied. See the License for the specific language governing permissions and limitations under + * the License. 
+ */ + +package org.apache.geode.redis.executors; + +import static org.apache.geode.distributed.ConfigurationProperties.MAX_WAIT_TIME_RECONNECT; +import static org.apache.geode.distributed.ConfigurationProperties.REDIS_BIND_ADDRESS; +import static org.apache.geode.distributed.ConfigurationProperties.REDIS_PORT; +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.Properties; + +import org.junit.After; +import org.junit.AfterClass; +import org.junit.BeforeClass; +import org.junit.ClassRule; +import org.junit.Ignore; +import org.junit.Test; +import redis.clients.jedis.Jedis; + +import org.apache.geode.internal.AvailablePortHelper; +import org.apache.geode.internal.cache.InternalCache; +import org.apache.geode.redis.internal.ByteArrayWrapper; +import org.apache.geode.redis.internal.GeodeRedisService; +import org.apache.geode.redis.internal.executor.TTLExecutor; +import org.apache.geode.test.awaitility.GeodeAwaitility; +import org.apache.geode.test.dunit.rules.ClusterStartupRule; +import org.apache.geode.test.dunit.rules.MemberVM; + +@Ignore("GEODE-8058: this test needs to pass to have feature parity with native redis") +public class ExpireDUnitTest { + + @ClassRule + public static ClusterStartupRule clusterStartUp = new ClusterStartupRule(4); + + static final String LOCAL_HOST = "127.0.0.1"; + static int[] availablePorts; + private static final int JEDIS_TIMEOUT = + Math.toIntExact(GeodeAwaitility.getTimeout().toMillis()); + static Jedis jedis1; + static Jedis jedis2; + static Jedis jedis3; + + static Properties locatorProperties; + static Properties serverProperties1; + static Properties serverProperties2; + static Properties serverProperties3; + + static MemberVM locator; + static MemberVM server1; + static MemberVM server2; + static MemberVM server3; + + @BeforeClass + public static void classSetup() { + +availablePorts = AvailablePortHelper.getRandomAvailableTCPPorts(3); + +locatorProperties = new Properties(); +serverProperties1 = new 
Properties(); +serverProperties2 = new Properties(); +serverProperties3 = new Properties(); + +locatorProperties.setProperty(MAX_WAIT_TIME_RECONNECT, "15000"); + +serverProperties1.setProperty(REDIS_PORT, Integer.toString(availablePorts[0])); +serverProperties1.setProperty(REDIS_BIND_ADDRESS, LOCAL_HOST); + +serverProperties2.setProperty(REDIS_PORT, Integer.toString(availablePorts[1])); +serverProperties2.setProperty(REDIS_BIND_ADDRESS, LOCAL_HOST); + +serverProperties3.setProperty(REDIS_PORT, Integer.toString(availablePorts[2])); +serverProperties3.setProperty(REDIS_BIND_ADDRESS, LOCAL_HOST); + +locator = clusterStartUp.startLocatorVM(0, locatorProperties); +server1 = clusterStartUp.startServerVM(1, serverProperties1, locator.getPort()); +server2 = clusterStartUp.startServerVM(2, serverProperties2, locator.getPort()); +server3 = clusterStartUp.startServerVM(3, serverProperties3, locator.getPort()); + +jedis1 = new Jedis(LOCAL_HOST, availablePorts[0], JEDIS_TIMEOUT); +jedis2 = new Jedis(LOCAL_HOST, availablePorts[1], JEDIS_TIMEOUT); +jedis3 = new Jedis(LOCAL_HOST, availablePorts[2], JEDIS_TIMEOUT); + } + + @After + public void testCleanUp() { +jedis1.flushAll(); + } + + @AfterClass + public static void tearDown() { +jedis1.disconnect(); +jedis2.disconnect(); +jedis3.disconnect(); + +server1.stop(); +server2.stop(); +server3.stop(); + } + + @Test + public void expireOnOneServer_shouldPropagateToAllServers() { +String key = "key"; + +jedis1.set(key, "value"); +jedis1.expire(key, 20); +
[jira] [Commented] (GEODE-8054) Refactor Sadd and Srem DUnit tests to use ConcurrentLoopingThreads class
[ https://issues.apache.org/jira/browse/GEODE-8054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17099971#comment-17099971 ] ASF GitHub Bot commented on GEODE-8054: --- sabbeyPivotal opened a new pull request #5052: URL: https://github.com/apache/geode/pull/5052 Other concurrent DUnit and integration tests are utilizing the ConcurrentLoopingThreads class to generate and run concurrent threads. We wanted to be consistent across DUnit and integration tests. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Refactor Sadd and Srem DUnit tests to use ConcurrentLoopingThreads class > > > Key: GEODE-8054 > URL: https://issues.apache.org/jira/browse/GEODE-8054 > Project: Geode > Issue Type: Improvement > Components: redis, tests >Reporter: Sarah Abbey >Priority: Major > Fix For: 1.13.0 > > > Other concurrent DUnit and integration tests are utilizing the > ConcurrentLoopingThreads class to generate and run concurrent threads. We > wanted to be consistent across DUnit and integration tests. -- This message was sent by Atlassian Jira (v8.3.4#803005)
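[Editor's note] {{ConcurrentLoopingThreads}} is a Geode-internal test utility whose exact API is not shown in this thread. A hypothetical minimal version conveying the idea behind the refactor, several actions each looping the same number of iterations on their own threads, then joined; all names here are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.IntConsumer;

public class ConcurrentLoopingThreadsSketch {

  private final int iterations;
  private final IntConsumer[] actions;

  // Each action receives the loop index, so tests can derive per-iteration
  // keys/values (e.g. concurrent SADD vs. SREM on the same set).
  ConcurrentLoopingThreadsSketch(int iterations, IntConsumer... actions) {
    this.iterations = iterations;
    this.actions = actions;
  }

  void run() throws InterruptedException {
    List<Thread> threads = new ArrayList<>();
    for (IntConsumer action : actions) {
      Thread t = new Thread(() -> {
        for (int i = 0; i < iterations; i++) {
          action.accept(i);
        }
      });
      threads.add(t);
      t.start(); // all loops run concurrently
    }
    for (Thread t : threads) {
      t.join(); // wait for every looping thread before asserting results
    }
  }

  public static void main(String[] args) throws InterruptedException {
    AtomicInteger counter = new AtomicInteger();
    new ConcurrentLoopingThreadsSketch(1000,
        i -> counter.incrementAndGet(),  // stand-in for one concurrent operation
        i -> counter.incrementAndGet())  // stand-in for the competing operation
            .run();
    System.out.println(counter.get()); // 2000
  }
}
```

Centralizing this loop/start/join boilerplate in one helper is what lets the Sadd and Srem DUnit tests match the structure already used by the other concurrent tests.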
[jira] [Commented] (GEODE-8058) Update implementation of EXPIRE, PEXPIRE, EXPIREAT, and PEXPIREAT to be HA
[ https://issues.apache.org/jira/browse/GEODE-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17099955#comment-17099955 ] ASF GitHub Bot commented on GEODE-8058: --- prettyClouds commented on a change in pull request #5036: URL: https://github.com/apache/geode/pull/5036#discussion_r420156867 ## File path: geode-redis/src/main/java/org/apache/geode/redis/internal/GeodeRedisService.java ## @@ -87,4 +88,9 @@ private void stopRedisServer() { public CacheServiceMBeanBase getMBean() { return null; } + + @VisibleForTesting + public GeodeRedisServer getGeodeRedisServer() { Review comment: if we remove that internal check mentioned above, we don't need this getter. ## File path: geode-redis/src/distributedTest/java/org/apache/geode/redis/executors/ExpireDUnitTest.java ## @@ -0,0 +1,233 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more contributor license + * agreements. See the NOTICE file distributed with this work for additional information regarding + * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. You may obtain a + * copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software distributed under the License + * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express + * or implied. See the License for the specific language governing permissions and limitations under + * the License. 
+ */ + +package org.apache.geode.redis.executors; + +import static org.apache.geode.distributed.ConfigurationProperties.MAX_WAIT_TIME_RECONNECT; +import static org.apache.geode.distributed.ConfigurationProperties.REDIS_BIND_ADDRESS; +import static org.apache.geode.distributed.ConfigurationProperties.REDIS_PORT; +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.Properties; + +import org.junit.After; +import org.junit.AfterClass; +import org.junit.BeforeClass; +import org.junit.ClassRule; +import org.junit.Ignore; +import org.junit.Test; +import redis.clients.jedis.Jedis; + +import org.apache.geode.internal.AvailablePortHelper; +import org.apache.geode.internal.cache.InternalCache; +import org.apache.geode.redis.internal.ByteArrayWrapper; +import org.apache.geode.redis.internal.GeodeRedisService; +import org.apache.geode.redis.internal.executor.TTLExecutor; +import org.apache.geode.test.awaitility.GeodeAwaitility; +import org.apache.geode.test.dunit.rules.ClusterStartupRule; +import org.apache.geode.test.dunit.rules.MemberVM; + +@Ignore("GEODE-8058: this test needs to pass to have feature parity with native redis") +public class ExpireDUnitTest { + + @ClassRule + public static ClusterStartupRule clusterStartUp = new ClusterStartupRule(4); + + static final String LOCAL_HOST = "127.0.0.1"; + static int[] availablePorts; + private static final int JEDIS_TIMEOUT = + Math.toIntExact(GeodeAwaitility.getTimeout().toMillis()); + static Jedis jedis1; + static Jedis jedis2; + static Jedis jedis3; + + static Properties locatorProperties; + static Properties serverProperties1; + static Properties serverProperties2; + static Properties serverProperties3; + + static MemberVM locator; + static MemberVM server1; + static MemberVM server2; + static MemberVM server3; + + @BeforeClass + public static void classSetup() { + +availablePorts = AvailablePortHelper.getRandomAvailableTCPPorts(3); + +locatorProperties = new Properties(); +serverProperties1 = new 
Properties(); +serverProperties2 = new Properties(); +serverProperties3 = new Properties(); + +locatorProperties.setProperty(MAX_WAIT_TIME_RECONNECT, "15000"); + +serverProperties1.setProperty(REDIS_PORT, Integer.toString(availablePorts[0])); +serverProperties1.setProperty(REDIS_BIND_ADDRESS, LOCAL_HOST); + +serverProperties2.setProperty(REDIS_PORT, Integer.toString(availablePorts[1])); +serverProperties2.setProperty(REDIS_BIND_ADDRESS, LOCAL_HOST); + +serverProperties3.setProperty(REDIS_PORT, Integer.toString(availablePorts[2])); +serverProperties3.setProperty(REDIS_BIND_ADDRESS, LOCAL_HOST); + +locator = clusterStartUp.startLocatorVM(0, locatorProperties); +server1 = clusterStartUp.startServerVM(1, serverProperties1, locator.getPort()); +server2 = clusterStartUp.startServerVM(2, serverProperties2, locator.getPort()); +server3 = clusterStartUp.startServerVM(3, serverProperties3, locator.getPort()); + +jedis1 = new Jedis(LOCAL_HOST, availablePorts[0], JEDIS_TIMEOUT); +jedis2 = new Jedis(LOCAL_HOST, availablePorts[1], JEDIS_TIMEOUT); +jedis3 = new Jedis(LOCAL_HOST, availablePorts[2], JEDIS_TIMEOUT); + } + + @After + public voi
[jira] [Commented] (GEODE-8004) Regression Introduced Through GEODE-7565
[ https://issues.apache.org/jira/browse/GEODE-8004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17099936#comment-17099936 ] ASF GitHub Bot commented on GEODE-8004: --- alb3rtobr commented on pull request #4978: URL: https://github.com/apache/geode/pull/4978#issuecomment-624083268 > Hello @alb3rtobr > It's looking good!!, some last small requests from my side, and I'll be ready to approve the PR afterwards. @bschuchardt: can you also have another look and make sure I didn't miss anything?. > Cheers. I have added all the requested tests :) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Regression Introduced Through GEODE-7565 > > > Key: GEODE-8004 > URL: https://issues.apache.org/jira/browse/GEODE-8004 > Project: Geode > Issue Type: Bug > Components: client/server >Reporter: Juan Ramos >Assignee: Juan Ramos >Priority: Major > Labels: GeodeCommons > > Intermittent errors were observed while executing some internal tests and > commit > [dd23ee8|https://github.com/apache/geode/commit/dd23ee8200cba67cea82e57e2e4ccedcdf9e8266] > was determined to be responsible. As of yet, no local reproduction of the > issue is available, but work is ongoing to provide a test that can be used to > debug the issue (a [PR|https://github.com/apache/geode/pull/4974] to revert > of the original commit has been opened and will be merged shortly, though, > this ticket is to investigate the root cause so the original commit can be > merged again into {{develop}}). 
> --- > It seems that a server is trying to read an {{ack}} response and, instead, it > receives a {{PING}} message: > {noformat} > [error 2020/04/18 23:44:22.758 PDT tid=0x165] > Unexpected error in pool task > > org.apache.geode.InternalGemFireError: Unexpected message type PING > at > org.apache.geode.cache.client.internal.AbstractOp.processAck(AbstractOp.java:264) > at > org.apache.geode.cache.client.internal.PingOp$PingOpImpl.processResponse(PingOp.java:82) > at > org.apache.geode.cache.client.internal.AbstractOp.processResponse(AbstractOp.java:222) > at > org.apache.geode.cache.client.internal.AbstractOp.attemptReadResponse(AbstractOp.java:207) > at > org.apache.geode.cache.client.internal.AbstractOp.attempt(AbstractOp.java:382) > at > org.apache.geode.cache.client.internal.ConnectionImpl.execute(ConnectionImpl.java:268) > at > org.apache.geode.cache.client.internal.pooling.PooledConnection.execute(PooledConnection.java:352) > at > org.apache.geode.cache.client.internal.OpExecutorImpl.executeWithPossibleReAuthentication(OpExecutorImpl.java:753) > at > org.apache.geode.cache.client.internal.OpExecutorImpl.executeOnServer(OpExecutorImpl.java:332) > at > org.apache.geode.cache.client.internal.OpExecutorImpl.executeOn(OpExecutorImpl.java:303) > at > org.apache.geode.cache.client.internal.PoolImpl.executeOn(PoolImpl.java:839) > at org.apache.geode.cache.client.internal.PingOp.execute(PingOp.java:38) > at > org.apache.geode.cache.client.internal.LiveServerPinger$PingTask.run2(LiveServerPinger.java:90) > at > org.apache.geode.cache.client.internal.PoolImpl$PoolTask.run(PoolImpl.java:1329) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > at > org.apache.geode.internal.ScheduledThreadPoolExecutorWithKeepAlive$DelegatingScheduledFuture.run(ScheduledThreadPoolExecutorWithKeepAlive.java:276) > at > 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > {noformat} > Around the same time, another member of the distributed system logs the > following warning, which seems to be related to the original changes as well: > {noformat} > [warn 2020/04/18 23:44:22.757 PDT > tid=0x298] Unable to ping non-member > rs-FullRegression19040559a2i32xlarge-hydra-client-63(bridgegemfire1_host1_4749:4749):41003 > for client > identity(rs-FullRegression19040559a2i32xlarge-hydra-client-63(edgegemfire3_host1_1071:1071:loner):50046:5a182991:edgegemfire3_host1_1071,connection=2 > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
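The failure mode in the trace above is a protocol mix-up: an operation expecting an ACK reply instead reads a PING message off the connection, and the type check throws. A minimal sketch of that kind of response-type guard is below; the names and types are illustrative, not Geode's actual API (the real check lives in `AbstractOp.processAck`).

```java
// Hypothetical sketch of a response-type guard that fails the way the
// stack trace shows: the reader expects an ACK but a PING arrives on the
// same connection. Names are illustrative, not Geode's real classes.
enum MessageType { ACK, PING, ERROR }

final class AckReader {
  static void processAck(MessageType received) {
    if (received != MessageType.ACK) {
      // Mirrors the "Unexpected message type PING" error in the log.
      throw new IllegalStateException("Unexpected message type " + received);
    }
  }
}
```

If ping responses and operation replies are interleaved on one connection without synchronization, this guard is exactly where the mismatch surfaces.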
[jira] [Commented] (GEODE-8004) Regression Introduced Through GEODE-7565
[ https://issues.apache.org/jira/browse/GEODE-8004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17099932#comment-17099932 ] ASF GitHub Bot commented on GEODE-8004: --- alb3rtobr commented on a change in pull request #4978: URL: https://github.com/apache/geode/pull/4978#discussion_r420143725 ## File path: geode-core/src/main/java/org/apache/geode/distributed/internal/LocatorLoadSnapshot.java ## @@ -425,6 +429,22 @@ private void addGroups(Map> map, String[ } } + private void addGroups(Map> map, Review comment: done This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (GEODE-8004) Regression Introduced Through GEODE-7565
[ https://issues.apache.org/jira/browse/GEODE-8004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17099931#comment-17099931 ] ASF GitHub Bot commented on GEODE-8004: --- alb3rtobr commented on a change in pull request #4978: URL: https://github.com/apache/geode/pull/4978#discussion_r420143599 ## File path: geode-core/src/main/java/org/apache/geode/distributed/internal/LocatorLoadSnapshot.java ## @@ -440,6 +460,24 @@ private void removeFromMap(Map> map, Str groupMap.remove(location); } + private void removeFromMap(Map> map, Review comment: done This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (GEODE-8004) Regression Introduced Through GEODE-7565
[ https://issues.apache.org/jira/browse/GEODE-8004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17099869#comment-17099869 ] ASF GitHub Bot commented on GEODE-8004: --- alb3rtobr commented on a change in pull request #4978: URL: https://github.com/apache/geode/pull/4978#discussion_r420103367 ## File path: geode-core/src/main/java/org/apache/geode/distributed/internal/LocatorLoadSnapshot.java ## @@ -448,14 +486,28 @@ private void updateMap(Map map, ServerLocation location, float load, float loadP } } + private void updateMap(Map map, ServerLocation location, String memberId, float load, + float loadPerConnection) { +Map groupMap = (Map) map.get(null); +ServerLocationAndMemberId locationAndMemberId = +new ServerLocationAndMemberId(location, memberId); +LoadHolder holder = +(LoadHolder) groupMap.get(locationAndMemberId); + +if (holder != null) { + holder.setLoad(load, loadPerConnection); +} + } + /** * * @param groupServers the servers to consider * @param excludedServers servers to exclude * @param count how many you want. a negative number means all of them in order of best to worst * @return a list of best...worst server LoadHolders */ - private List findBestServers(Map groupServers, + private List findBestServers( Review comment: done ## File path: geode-core/src/main/java/org/apache/geode/distributed/internal/LocatorLoadSnapshot.java ## @@ -448,14 +486,28 @@ private void updateMap(Map map, ServerLocation location, float load, float loadP } } + private void updateMap(Map map, ServerLocation location, String memberId, float load, Review comment: done This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (GEODE-8004) Regression Introduced Through GEODE-7565
[ https://issues.apache.org/jira/browse/GEODE-8004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17099870#comment-17099870 ] ASF GitHub Bot commented on GEODE-8004: --- alb3rtobr commented on a change in pull request #4978: URL: https://github.com/apache/geode/pull/4978#discussion_r420103948 ## File path: geode-core/src/main/java/org/apache/geode/distributed/internal/ServerLocationAndMemberId.java ## @@ -0,0 +1,74 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more contributor license + * agreements. See the NOTICE file distributed with this work for additional information regarding + * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. You may obtain a + * copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software distributed under the License + * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express + * or implied. See the License for the specific language governing permissions and limitations under + * the License. + */ +package org.apache.geode.distributed.internal; + +public class ServerLocationAndMemberId { + + private final ServerLocation serverLocation; + private final String memberId; + + public ServerLocationAndMemberId() { Review comment: done. After removing this constructor I was able to remove another one as well. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
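The `ServerLocationAndMemberId` class under review is a composite map key pairing a server location with a member id. A minimal sketch of such a key is below; the field names come from the PR's diff, but the `equals`/`hashCode` bodies are assumptions, and a `host:port` string stands in for Geode's real `ServerLocation` type. Both methods must agree for the key to work in a `HashMap`.

```java
import java.util.Objects;

// Hypothetical sketch of a composite map key like the PR's
// ServerLocationAndMemberId. equals/hashCode bodies are assumptions;
// a "host:port" string stands in for the real ServerLocation type.
final class ServerLocationAndMemberIdSketch {
  private final String serverLocation; // e.g. "host1:40404"
  private final String memberId;

  ServerLocationAndMemberIdSketch(String serverLocation, String memberId) {
    this.serverLocation = serverLocation;
    this.memberId = memberId;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof ServerLocationAndMemberIdSketch)) {
      return false;
    }
    ServerLocationAndMemberIdSketch other = (ServerLocationAndMemberIdSketch) o;
    // Two keys are equal only when BOTH components match.
    return serverLocation.equals(other.serverLocation)
        && memberId.equals(other.memberId);
  }

  @Override
  public int hashCode() {
    // Must be consistent with equals: derived from the same two fields.
    return Objects.hash(serverLocation, memberId);
  }
}
```

Keying the locator's load map on both components is what lets two servers on the same host:port (e.g. a restarted member) be tracked as distinct entries.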
[jira] [Commented] (GEODE-8004) Regression Introduced Through GEODE-7565
[ https://issues.apache.org/jira/browse/GEODE-8004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17099867#comment-17099867 ] ASF GitHub Bot commented on GEODE-8004: --- alb3rtobr commented on a change in pull request #4978: URL: https://github.com/apache/geode/pull/4978#discussion_r420103151 ## File path: geode-core/src/main/java/org/apache/geode/distributed/internal/LocatorLoadSnapshot.java ## @@ -497,13 +558,21 @@ private void updateMap(Map map, ServerLocation location, float load, float loadP * If it is most loaded then return its LoadHolder; otherwise return null; */ private LoadHolder isCurrentServerMostLoaded(ServerLocation currentServer, - Map groupServers) { -final LoadHolder currentLH = groupServers.get(currentServer); + Map groupServers) { Review comment: done! This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (GEODE-8004) Regression Introduced Through GEODE-7565
[ https://issues.apache.org/jira/browse/GEODE-8004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17099759#comment-17099759 ] ASF GitHub Bot commented on GEODE-8004: --- jujoramos commented on a change in pull request #4978: URL: https://github.com/apache/geode/pull/4978#discussion_r41012 ## File path: geode-core/src/main/java/org/apache/geode/distributed/internal/ServerLocationAndMemberId.java ## @@ -0,0 +1,74 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more contributor license + * agreements. See the NOTICE file distributed with this work for additional information regarding + * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. You may obtain a + * copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software distributed under the License + * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express + * or implied. See the License for the specific language governing permissions and limitations under + * the License. + */ +package org.apache.geode.distributed.internal; + +public class ServerLocationAndMemberId { + + private final ServerLocation serverLocation; + private final String memberId; + + public ServerLocationAndMemberId() { Review comment: This constructor doesn't seem to be used anywhere, so we could just delete it. ## File path: geode-core/src/main/java/org/apache/geode/distributed/internal/LocatorLoadSnapshot.java ## @@ -425,6 +429,22 @@ private void addGroups(Map> map, String[ } } + private void addGroups(Map> map, Review comment: This is a new method and should be, at least, unit tested. 
You can make it package private, annotate it with `@VisibleForTesting` and access it directly from `LocatorLoadSnapshotJUnitTest` and/or `LocatorLoadSnapshotIntegrationTest`. ## File path: geode-core/src/main/java/org/apache/geode/distributed/internal/LocatorLoadSnapshot.java ## @@ -440,6 +460,24 @@ private void removeFromMap(Map> map, Str groupMap.remove(location); } + private void removeFromMap(Map> map, Review comment: This is a new method and should be, at least, unit tested. You can make it package private, annotate it with `@VisibleForTesting` and access it directly from `LocatorLoadSnapshotJUnitTest` and/or `LocatorLoadSnapshotIntegrationTest`. ## File path: geode-core/src/main/java/org/apache/geode/distributed/internal/LocatorLoadSnapshot.java ## @@ -448,14 +486,28 @@ private void updateMap(Map map, ServerLocation location, float load, float loadP } } + private void updateMap(Map map, ServerLocation location, String memberId, float load, + float loadPerConnection) { +Map groupMap = (Map) map.get(null); +ServerLocationAndMemberId locationAndMemberId = +new ServerLocationAndMemberId(location, memberId); +LoadHolder holder = +(LoadHolder) groupMap.get(locationAndMemberId); + +if (holder != null) { + holder.setLoad(load, loadPerConnection); +} + } + /** * * @param groupServers the servers to consider * @param excludedServers servers to exclude * @param count how many you want. a negative number means all of them in order of best to worst * @return a list of best...worst server LoadHolders */ - private List findBestServers(Map groupServers, + private List findBestServers( Review comment: Not a new method but significantly changed; it should be, at least, unit tested. You can make it package private, annotate it with `@VisibleForTesting` and access it directly from `LocatorLoadSnapshotJUnitTest` and/or `LocatorLoadSnapshotIntegrationTest`. 
## File path: geode-core/src/main/java/org/apache/geode/distributed/internal/LocatorLoadSnapshot.java ## @@ -497,13 +558,21 @@ private void updateMap(Map map, ServerLocation location, float load, float loadP * If it is most loaded then return its LoadHolder; otherwise return null; */ private LoadHolder isCurrentServerMostLoaded(ServerLocation currentServer, - Map groupServers) { -final LoadHolder currentLH = groupServers.get(currentServer); + Map groupServers) { Review comment: Not a new method but significantly changed; it should be, at least, unit tested. You can make it package private, annotate it with `@VisibleForTesting` and access it directly from `LocatorLoadSnapshotJUnitTest` and/or `LocatorLoadSnapshotIntegrationTest`. ## File path: geode-core/src/main/java/org/apache/geode/distributed/internal/LocatorLoadSnapshot.java ## @@ -448,14 +486,28 @@ private void updateMap(Map map, Serve
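The testing pattern the reviewer suggests — widen a private helper to package-private and mark it `@VisibleForTesting` so a same-package test can call it directly — can be sketched as follows. Class and method names here are hypothetical stand-ins, and the annotation is declared locally for the sketch (Geode has its own `@VisibleForTesting`).

```java
import java.util.HashMap;
import java.util.Map;

// The annotation carries no runtime behavior; it documents that the wider
// visibility exists only for tests. Declared locally for this sketch.
@interface VisibleForTesting {}

class LoadSnapshotSketch {
  private final Map<String, Float> loads = new HashMap<>();

  // Package-private rather than private: callers outside the package still
  // cannot reach it, but a unit test in the same package can invoke it
  // directly instead of going through the public API.
  @VisibleForTesting
  void updateMap(String memberId, float load) {
    loads.put(memberId, load);
  }

  float loadOf(String memberId) {
    return loads.getOrDefault(memberId, 0.0f);
  }
}
```

The trade-off is a slightly wider API surface within the package in exchange for tests that exercise the helper in isolation, which is usually worthwhile for logic-heavy private methods like the ones flagged in this review.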
[jira] [Updated] (GEODE-8029) java.lang.IllegalArgumentException: Too large (805306401 expected elements with load factor 0.75)
[ https://issues.apache.org/jira/browse/GEODE-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Juan Ramos updated GEODE-8029: -- Fix Version/s: 1.14.0 > java.lang.IllegalArgumentException: Too large (805306401 expected elements > with load factor 0.75) > - > > Key: GEODE-8029 > URL: https://issues.apache.org/jira/browse/GEODE-8029 > Project: Geode > Issue Type: Bug > Components: configuration, core, gfsh >Affects Versions: 1.9.0 >Reporter: Jagadeesh sivasankaran >Assignee: Juan Ramos >Priority: Major > Labels: GeodeCommons, caching-applications > Fix For: 1.14.0 > > Attachments: Screen Shot 2020-04-27 at 12.21.19 PM.png, Screen Shot > 2020-04-27 at 12.21.19 PM.png, server02.log > > > We have a cluster of three Geode locators and three cache servers running on > CentOS hosts. Today (April 27), after patching our CentOS servers, all > locators and two servers came up, but one cache server was not starting. > The exception details are below. Please let us know how to resolve the > issue and whether any disk store configuration changes are needed. > > > Starting a Geode Server in /app/provServerHO2... > The > Cache Server process terminated unexpectedly with exit status 1. Please > refer to the log file in /app/provServerHO2 for full details. 
> Exception in thread "main" java.lang.IllegalArgumentException: Too large > (805306401 expected elements with load factor 0.75) > at it.unimi.dsi.fastutil.HashCommon.arraySize(HashCommon.java:222) > at it.unimi.dsi.fastutil.ints.IntOpenHashSet.add(IntOpenHashSet.java:308) > at > org.apache.geode.internal.cache.DiskStoreImpl$OplogEntryIdSet.add(DiskStoreImpl.java:3474) > at org.apache.geode.internal.cache.Oplog.readDelEntry(Oplog.java:3007) > at org.apache.geode.internal.cache.Oplog.recoverDrf(Oplog.java:1500) > at > org.apache.geode.internal.cache.PersistentOplogSet.recoverOplogs(PersistentOplogSet.java:445) > at > org.apache.geode.internal.cache.PersistentOplogSet.recoverRegionsThatAreReady(PersistentOplogSet.java:369) > at > org.apache.geode.internal.cache.DiskStoreImpl.recoverRegionsThatAreReady(DiskStoreImpl.java:2053) > at > org.apache.geode.internal.cache.DiskStoreImpl.initializeIfNeeded(DiskStoreImpl.java:2041) > at > org.apache.geode.internal.cache.DiskStoreImpl.doInitialRecovery(DiskStoreImpl.java:2046) > at > org.apache.geode.internal.cache.DiskStoreFactoryImpl.initializeDiskStore(DiskStoreFactoryImpl.java:184) > at > org.apache.geode.internal.cache.DiskStoreFactoryImpl.create(DiskStoreFactoryImpl.java:150) > at > org.apache.geode.internal.cache.xmlcache.CacheCreation.createDiskStore(CacheCreation.java:794) > at > org.apache.geode.internal.cache.xmlcache.CacheCreation.initializePdxDiskStore(CacheCreation.java:785) > at > org.apache.geode.internal.cache.xmlcache.CacheCreation.create(CacheCreation.java:509) > at > org.apache.geode.internal.cache.xmlcache.CacheXmlParser.create(CacheXmlParser.java:337) > at > org.apache.geode.internal.cache.GemFireCacheImpl.loadCacheXml(GemFireCacheImpl.java:4272) > at > org.apache.geode.internal.cache.ClusterConfigurationLoader.applyClusterXmlConfiguration(ClusterConfigurationLoader.java:197) > at > 
org.apache.geode.internal.cache.GemFireCacheImpl.applyJarAndXmlFromClusterConfig(GemFireCacheImpl.java:1240) > at > org.apache.geode.internal.cache.GemFireCacheImpl.initialize(GemFireCacheImpl.java:1206) > at > org.apache.geode.internal.cache.InternalCacheBuilder.create(InternalCacheBuilder.java:207) > at > org.apache.geode.internal.cache.InternalCacheBuilder.create(InternalCacheBuilder.java:164) > at org.apache.geode.cache.CacheFactory.create(CacheFactory.java:139) > at > org.apache.geode.distributed.internal.DefaultServerLauncherCacheProvider.createCache(DefaultServerLauncherCacheProvider.java:52) > at > org.apache.geode.distributed.ServerLauncher.createCache(ServerLauncher.java:869) > at org.apache.geode.distributed.ServerLauncher.start(ServerLauncher.java:786) > at org.apache.geode.distributed.ServerLauncher.run(ServerLauncher.java:716) > at org.apache.geode.distributed.ServerLauncher.main(ServerLauncher.java:236) > -- This message was sent by Atlassian Jira (v8.3.4#803005)
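The "Too large" exception above comes from how fastutil sizes a hash set's backing array: it needs the smallest power of two at least `expected / loadFactor`, capped at 2^30. For the reported 805,306,401 expected elements at load factor 0.75, that requirement is about 1,073,741,868 slots, whose next power of two (2^31) exceeds the cap. The sketch below is an illustrative reimplementation of that sizing arithmetic, not the library code in `it.unimi.dsi.fastutil.HashCommon.arraySize`.

```java
// Illustrative sketch of fastutil-style hash array sizing: the backing
// array must be the smallest power of two >= expected/loadFactor, and is
// capped at 2^30. When the requirement exceeds the cap, the library
// throws IllegalArgumentException ("Too large ..."), as in the log above.
final class ArraySizeSketch {
  static final long MAX_CAPACITY = 1L << 30;

  static long requiredCapacity(long expected, float loadFactor) {
    // Minimum number of slots needed to stay under the load factor.
    long needed = (long) Math.ceil(expected / (double) loadFactor);
    // Round up to the next power of two (needed >= 1 assumed).
    return Long.highestOneBit(needed - 1) << 1;
  }

  static boolean fits(long expected, float loadFactor) {
    return requiredCapacity(expected, loadFactor) <= MAX_CAPACITY;
  }
}
```

With the values from the log, `requiredCapacity(805306401L, 0.75f)` rounds up to 2^31, which is over the 2^30 cap, so recovery of the disk store's oplog entry-id set fails before the cache can start.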
[jira] [Resolved] (GEODE-8029) java.lang.IllegalArgumentException: Too large (805306401 expected elements with load factor 0.75)
[ https://issues.apache.org/jira/browse/GEODE-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nabarun Nag resolved GEODE-8029. Resolution: Fixed > java.lang.IllegalArgumentException: Too large (805306401 expected elements > with load factor 0.75) > - > > Key: GEODE-8029 > URL: https://issues.apache.org/jira/browse/GEODE-8029 > Project: Geode > Issue Type: Bug > Components: configuration, core, gfsh > Affects Versions: 1.9.0 > Reporter: Jagadeesh sivasankaran > Assignee: Juan Ramos > Priority: Major > Labels: GeodeCommons, caching-applications > Attachments: Screen Shot 2020-04-27 at 12.21.19 PM.png, server02.log > > > We have a cluster of three Geode Locators and three Cache Servers running on CentOS hosts. Today (April 27), after patching our CentOS servers, all locators and two servers came up, but one Cache Server would not start. The exception details are below. Please let us know how to resolve the issue and whether any disk-store configuration changes are needed. > > > Starting a Geode Server in /app/provServerHO2... > The Cache Server process terminated unexpectedly with exit status 1. Please refer to the log file in /app/provServerHO2 for full details.
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (GEODE-8029) java.lang.IllegalArgumentException: Too large (805306401 expected elements with load factor 0.75)
[ https://issues.apache.org/jira/browse/GEODE-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17099670#comment-17099670 ] ASF subversion and git services commented on GEODE-8029: Commit be8ac497eb1ece588e9a6c299d6aab4feb192ed3 in geode's branch refs/heads/develop from Juan José Ramos [ https://gitbox.apache.org/repos/asf?p=geode.git;h=be8ac49 ] GEODE-8029: Delete orphaned drf files (#5037) The OpLog initialization now deletes unused drf files to prevent the proliferation of unused records and files within the system, which could cause members to fail during startup while recovering disk-stores (especially when they are isolated for gateway-senders). - Added distributed tests. - Delete orphaned drf files when deleting the corresponding crf during recovery. > java.lang.IllegalArgumentException: Too large (805306401 expected elements > with load factor 0.75) > - > > Key: GEODE-8029 > URL: https://issues.apache.org/jira/browse/GEODE-8029 > Project: Geode > Issue Type: Bug > Components: configuration, core, gfsh > Affects Versions: 1.9.0 > Reporter: Jagadeesh sivasankaran > Assignee: Juan Ramos > Priority: Major > Labels: GeodeCommons, caching-applications > Attachments: Screen Shot 2020-04-27 at 12.21.19 PM.png, server02.log > > > We have a cluster of three Geode Locators and three Cache Servers running on CentOS hosts. Today (April 27), after patching our CentOS servers, all locators and two servers came up, but one Cache Server would not start. The exception details are below. Please let us know how to resolve the issue and whether any disk-store configuration changes are needed. > > > Starting a Geode Server in /app/provServerHO2... > The Cache Server process terminated unexpectedly with exit status 1. Please refer to the log file in /app/provServerHO2 for full details.
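Editor's note: the commit message above says the fix deletes orphaned drf files, i.e. delete-record oplog files whose corresponding crf (create-record file) is gone, so their delete records never shrink away and pile up across recoveries. The sketch below illustrates that idea only; it is not Geode's actual implementation, and the class name, method name, and `BACKUPds_N.crf`/`.drf` file-name pattern are hypothetical stand-ins.

```java
// Hedged sketch (NOT Geode's implementation): find .drf oplog files in a
// disk-store directory that have no matching .crf with the same base name.
// Such "orphaned" drf files only contribute dead delete records at recovery.
import java.io.File;
import java.util.HashSet;
import java.util.Set;

public class OrphanedDrfCleaner {
    /** Returns the .drf files in dir with no .crf sharing the same base name. */
    static Set<File> findOrphanedDrfs(File dir) {
        Set<String> crfBases = new HashSet<>();
        Set<File> orphans = new HashSet<>();
        File[] files = dir.listFiles();
        if (files == null) {
            return orphans; // not a directory, or I/O error
        }
        // First pass: collect base names that still have a create-record file.
        for (File f : files) {
            String name = f.getName();
            if (name.endsWith(".crf")) {
                crfBases.add(name.substring(0, name.length() - 4));
            }
        }
        // Second pass: any .drf whose base name has no .crf is orphaned.
        for (File f : files) {
            String name = f.getName();
            if (name.endsWith(".drf")
                    && !crfBases.contains(name.substring(0, name.length() - 4))) {
                orphans.add(f);
            }
        }
        return orphans;
    }
}
```

Deleting the files returned here during recovery (as the fix does at the point where the matching crf is removed) keeps the recovered delete-record set bounded, avoiding the `IntOpenHashSet` "Too large" failure from the issue description.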