[jira] [Commented] (GEODE-9910) Failure to auto-reconnect upon network partition
[ https://issues.apache.org/jira/browse/GEODE-9910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493725#comment-17493725 ]

Barrett Oglesby commented on GEODE-9910:
----------------------------------------

Can you attach the full logs of both servers?

> Failure to auto-reconnect upon network partition
> ------------------------------------------------
>
>                 Key: GEODE-9910
>                 URL: https://issues.apache.org/jira/browse/GEODE-9910
>             Project: Geode
>          Issue Type: Bug
>    Affects Versions: 1.14.0
>            Reporter: Surya Mudundi
>            Priority: Major
>              Labels: GeodeOperationAPI, blocks-1.15.0, needsTriage
>
> A two-node cluster with embedded locators failed to auto-reconnect when node-1 experienced a network outage for a couple of minutes; when node-1 recovered from the outage, node-2 failed to auto-reconnect.
> node-2 tried to reconnect to node-1:
> [org.apache.geode.distributed.internal.InternalDistributedSystem]-[ReconnectThread] [] Attempting to reconnect to the distributed system. This is attempt #1.
> [org.apache.geode.distributed.internal.InternalDistributedSystem]-[ReconnectThread] [] Attempting to reconnect to the distributed system. This is attempt #2.
> [org.apache.geode.distributed.internal.InternalDistributedSystem]-[ReconnectThread] [] Attempting to reconnect to the distributed system. This is attempt #3.
> After 3 attempts it finally reported the error below:
> INFO [org.apache.geode.logging.internal.LoggingProviderLoader]-[ReconnectThread] [] Using org.apache.geode.logging.internal.SimpleLoggingProvider for service org.apache.geode.logging.internal.spi.LoggingProvider
> INFO [org.apache.geode.internal.InternalDataSerializer]-[ReconnectThread] [] initializing InternalDataSerializer with 0 services
> INFO [org.apache.geode.distributed.internal.InternalDistributedSystem]-[ReconnectThread] [] performing a quorum check to see if location services can be started early
> INFO [org.apache.geode.distributed.internal.InternalDistributedSystem]-[ReconnectThread] [] Quorum check passed - allowing location services to start early
> WARN [org.apache.geode.distributed.internal.InternalDistributedSystem]-[ReconnectThread] [] Exception occurred while trying to connect the system during reconnect
> java.lang.IllegalStateException: A locator can not be created because one already exists in this JVM.
>         at org.apache.geode.distributed.internal.InternalLocator.createLocator(InternalLocator.java:298) ~[geode-core-1.14.0.jar:?]
>         at org.apache.geode.distributed.internal.InternalLocator.createLocator(InternalLocator.java:273) ~[geode-core-1.14.0.jar:?]
>         at org.apache.geode.distributed.internal.InternalDistributedSystem.startInitLocator(InternalDistributedSystem.java:916) ~[geode-core-1.14.0.jar:?]
>         at org.apache.geode.distributed.internal.InternalDistributedSystem.initialize(InternalDistributedSystem.java:768) ~[geode-core-1.14.0.jar:?]
>         at org.apache.geode.distributed.internal.InternalDistributedSystem.access$200(InternalDistributedSystem.java:135) ~[geode-core-1.14.0.jar:?]
>         at org.apache.geode.distributed.internal.InternalDistributedSystem$Builder.build(InternalDistributedSystem.java:3034) ~[geode-core-1.14.0.jar:?]
>         at org.apache.geode.distributed.internal.InternalDistributedSystem.connectInternal(InternalDistributedSystem.java:290) ~[geode-core-1.14.0.jar:?]
>         at org.apache.geode.distributed.internal.InternalDistributedSystem.reconnect(InternalDistributedSystem.java:2605) ~[geode-core-1.14.0.jar:?]
>         at org.apache.geode.distributed.internal.InternalDistributedSystem.tryReconnect(InternalDistributedSystem.java:2424) ~[geode-core-1.14.0.jar:?]
>         at org.apache.geode.distributed.internal.InternalDistributedSystem.disconnect(InternalDistributedSystem.java:1275) ~[geode-core-1.14.0.jar:?]
>         at org.apache.geode.distributed.internal.ClusterDistributionManager$DMListener.membershipFailure(ClusterDistributionManager.java:2326) ~[geode-core-1.14.0.jar:?]
>         at org.apache.geode.distributed.internal.membership.gms.GMSMembership.uncleanShutdown(GMSMembership.java:1187) ~[geode-membership-1.14.0.jar:?]
>         at org.apache.geode.distributed.internal.membership.gms.GMSMembership$ManagerImpl.lambda$forceDisconnect$0(GMSMembership.java:1811) ~[geode-membership-1.14.0.jar:?]
>         at java.lang.Thread.run(Thread.java:829) [?:?]

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
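The stack trace shows reconnect failing inside InternalDistributedSystem.startInitLocator(), i.e. the reconnect attempt tries to recreate the member's embedded locator while the previous InternalLocator instance is still registered in the JVM. As context, the following is a minimal sketch of the kind of member configuration under which this was reported: a server that embeds its own locator via `start-locator` and relies on auto-reconnect. The hostnames, ports, and the helper class are illustrative assumptions, not taken from the ticket; only `java.util.Properties` is used so the sketch stays self-contained.

```java
import java.util.Properties;

public class EmbeddedLocatorConfig {
    // Hypothetical helper: builds properties for a Geode server that embeds
    // its own locator. Host and port values are illustrative only.
    public static Properties memberProperties(String host, int myLocatorPort, int peerLocatorPort) {
        Properties props = new Properties();
        // Embedded locator: this is what InternalDistributedSystem.startInitLocator()
        // attempts to recreate during reconnect, triggering
        // "A locator can not be created because one already exists in this JVM."
        props.setProperty("start-locator", host + "[" + myLocatorPort + "]");
        props.setProperty("locators",
            host + "[" + myLocatorPort + "]," + host + "[" + peerLocatorPort + "]");
        // Auto-reconnect is on by default; the values below are the documented
        // defaults and match the three attempts visible in the log.
        props.setProperty("disable-auto-reconnect", "false");
        props.setProperty("max-num-reconnect-tries", "3");
        props.setProperty("max-wait-time-reconnect", "60000"); // ms between attempts
        return props;
    }

    public static void main(String[] args) {
        Properties p = memberProperties("10.0.0.1", 10334, 10335);
        // In a real member these properties would be passed to
        // new CacheFactory(p).create();
        p.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```

The three "Attempting to reconnect ... attempt #N" lines in the log correspond to `max-num-reconnect-tries`, which defaults to 3.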
[jira] [Assigned] (GEODE-9910) Failure to auto-reconnect upon network partition
[ https://issues.apache.org/jira/browse/GEODE-9910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Barrett Oglesby reassigned GEODE-9910:
--------------------------------------

    Assignee: Barrett Oglesby

> Failure to auto-reconnect upon network partition
> ------------------------------------------------
>
>                 Key: GEODE-9910
>                 URL: https://issues.apache.org/jira/browse/GEODE-9910
>             Project: Geode
>          Issue Type: Bug
>    Affects Versions: 1.14.0
>            Reporter: Surya Mudundi
>            Assignee: Barrett Oglesby
>            Priority: Major
>              Labels: GeodeOperationAPI, blocks-1.15.0, needsTriage
>
> A two-node cluster with embedded locators failed to auto-reconnect when node-1 experienced a network outage for a couple of minutes; when node-1 recovered from the outage, node-2 failed to auto-reconnect.
> node-2 tried to reconnect to node-1:
> [org.apache.geode.distributed.internal.InternalDistributedSystem]-[ReconnectThread] [] Attempting to reconnect to the distributed system. This is attempt #1.
> [org.apache.geode.distributed.internal.InternalDistributedSystem]-[ReconnectThread] [] Attempting to reconnect to the distributed system. This is attempt #2.
> [org.apache.geode.distributed.internal.InternalDistributedSystem]-[ReconnectThread] [] Attempting to reconnect to the distributed system. This is attempt #3.
[jira] [Commented] (GEODE-6588) Cleanup internal use of generics and other static analyzer warnings [PERMANENT]
[ https://issues.apache.org/jira/browse/GEODE-6588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493636#comment-17493636 ]

ASF subversion and git services commented on GEODE-6588:
--------------------------------------------------------

Commit 77dd1ad69d71e5326d2d21e6ab0a9348b70d59df in geode's branch refs/heads/develop from Jacob Barrett
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=77dd1ad ]

GEODE-6588: Cleanup static analyzer warnings. (#7373)

* Fixes logging bugs.

> Cleanup internal use of generics and other static analyzer warnings [PERMANENT]
> -------------------------------------------------------------------------------
>
>                 Key: GEODE-6588
>                 URL: https://issues.apache.org/jira/browse/GEODE-6588
>             Project: Geode
>          Issue Type: Task
>            Reporter: Jacob Barrett
>            Assignee: Jacob Barrett
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 8h 40m
>  Remaining Estimate: 0h
>
> Use generics where possible.
> Clean up other static analyzer issues along the way.
> Generally make the IntelliJ analyzer gutter less cluttered.
[jira] [Commented] (GEODE-9708) Clean up FunctionCommandsDistributedTestBase
[ https://issues.apache.org/jira/browse/GEODE-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493635#comment-17493635 ]

ASF subversion and git services commented on GEODE-9708:
--------------------------------------------------------

Commit 97d1b2acf3b2dab7f7d1ff3540261b417de8600a in geode's branch refs/heads/develop from Nabarun Nag
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=97d1b2a ]

GEODE-9708: Removed analyzer warnings from FunctionCommandsDistributedTestBase (#6967)

> Clean up FunctionCommandsDistributedTestBase
> --------------------------------------------
>
>                 Key: GEODE-9708
>                 URL: https://issues.apache.org/jira/browse/GEODE-9708
>             Project: Geode
>          Issue Type: Bug
>          Components: tests
>            Reporter: Nabarun Nag
>            Priority: Major
>              Labels: needsTriage, pull-request-available
[jira] [Commented] (GEODE-9892) Create Infrastructure for Redis Lists
[ https://issues.apache.org/jira/browse/GEODE-9892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493634#comment-17493634 ]

ASF subversion and git services commented on GEODE-9892:
--------------------------------------------------------

Commit a049b6e6e6433cf6e58eef9430f339b0a489f33e in geode's branch refs/heads/develop from Ray Ingles
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=a049b6e ]

GEODE-9892: Create Infrastructure for Redis Lists (#7261)

GEODE-9892: Add initial support for RedisLists - implements LPUSH, LPOP, LLEN

> Create Infrastructure for Redis Lists
> -------------------------------------
>
>                 Key: GEODE-9892
>                 URL: https://issues.apache.org/jira/browse/GEODE-9892
>             Project: Geode
>          Issue Type: New Feature
>          Components: redis
>            Reporter: Wayne
>            Assignee: Ray Ingles
>            Priority: Major
>              Labels: pull-request-available
>
> Create the infrastructure for supporting Redis Lists, including:
> * Use of the appropriate underlying Java collection
> * Implementing the [LPUSH|https://redis.io/commands/lpush] command
> * Implementing the [LRANGE|https://redis.io/commands/lrange] command
>
> +Acceptance Criteria+
> The LPUSH and LRANGE commands have been implemented with appropriate unit testing.
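The ticket's first acceptance criterion is choosing an appropriate underlying Java collection for Redis lists. The semantics the commit names (LPUSH, LPOP, LLEN, plus the ticket's LRANGE) can be sketched over a JDK `Deque`. This is a hypothetical illustration only: the class name and the choice of `ArrayDeque` are assumptions, not Geode's actual implementation, which additionally has to be distributed, serializable, and size-tracked.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch of Redis list command semantics over a JDK Deque.
public class RedisListSketch {
    private final Deque<String> elements = new ArrayDeque<>();

    // LPUSH key e1 e2 ...: elements are pushed onto the head one at a time,
    // so LPUSH a b c leaves the list head-to-tail as [c, b, a].
    // Returns the new length, as the Redis command does.
    public long lpush(String... values) {
        for (String v : values) {
            elements.addFirst(v);
        }
        return elements.size();
    }

    // LPOP key: removes and returns the head element, or null when empty.
    public String lpop() {
        return elements.pollFirst();
    }

    // LLEN key: list length.
    public long llen() {
        return elements.size();
    }

    // LRANGE key start stop (non-negative, inclusive indexes only, for
    // brevity; the real command also accepts negative tail-relative offsets).
    public List<String> lrange(int start, int stop) {
        List<String> out = new ArrayList<>();
        int i = 0;
        for (String v : elements) {
            if (i > stop) {
                break;
            }
            if (i >= start) {
                out.add(v);
            }
            i++;
        }
        return out;
    }
}
```

An `ArrayDeque` gives O(1) head operations for LPUSH/LPOP but O(n) LRANGE; which trade-off is "appropriate" depends on the command mix, which is presumably what the ticket's collection choice weighs.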
[jira] [Commented] (GEODE-10061) Mass-Test-Run: ReconnectDUnitTest > testReconnectWithRoleLoss
[ https://issues.apache.org/jira/browse/GEODE-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493573#comment-17493573 ]

Geode Integration commented on GEODE-10061:
-------------------------------------------

Seen in [distributed-test-openjdk8 #1090|https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-mass-test-run/jobs/distributed-test-openjdk8/builds/1090] ... see [test results|http://files.apachegeode-ci.info/builds/apache-develop-mass-test-run/1.16.0-build.0062/test-results/distributedTest/1644702754/] or download [artifacts|http://files.apachegeode-ci.info/builds/apache-develop-mass-test-run/1.16.0-build.0062/test-artifacts/1644702754/distributedtestfiles-openjdk8-1.16.0-build.0062.tgz].

> Mass-Test-Run: ReconnectDUnitTest > testReconnectWithRoleLoss
> -------------------------------------------------------------
>
>                 Key: GEODE-10061
>                 URL: https://issues.apache.org/jira/browse/GEODE-10061
>             Project: Geode
>          Issue Type: Bug
>    Affects Versions: 1.16.0
>            Reporter: Kristen
>            Priority: Major
>              Labels: needsTriage
>
> {code:java}
> > Task :geode-core:distributedTest
> PRClientServerRegionFunctionExecutionDUnitTest > testserverMultiKeyExecution_ThrowException[ExecuteFunctionById] FAILED
>     org.apache.geode.test.dunit.RMIException: While invoking org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest$$Lambda$337/1157292204.run in VM 3 running on Host heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1.c.apachegeode-ci.internal with 4 VMs
>         at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:631)
>         at org.apache.geode.test.dunit.VM.invoke(VM.java:448)
>         at org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest.testserverMultiKeyExecution_ThrowException(PRClientServerRegionFunctionExecutionDUnitTest.java:379)
>
>         Caused by:
>         org.apache.geode.cache.client.ServerConnectivityException: Pool unexpected Socket closed connection=Pooled Connection to heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1.c.apachegeode-ci.internal:20923,heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1(304549):42669: Connection[DESTROYED] attempt=3). Server unreachable: could not connect after 3 attempts
>             at org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:671)
>             at org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:502)
>             at org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:155)
>             at org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:120)
>             at org.apache.geode.cache.client.internal.PoolImpl.execute(PoolImpl.java:805)
>             at org.apache.geode.cache.client.internal.PutOp.execute(PutOp.java:92)
>             at org.apache.geode.cache.client.internal.ServerRegionProxy.put(ServerRegionProxy.java:158)
>             at org.apache.geode.internal.cache.LocalRegion.serverPut(LocalRegion.java:3048)
>             at org.apache.geode.internal.cache.LocalRegion.cacheWriteBeforePut(LocalRegion.java:3163)
>             at org.apache.geode.internal.cache.ProxyRegionMap.basicPut(ProxyRegionMap.java:238)
>             at org.apache.geode.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5620)
>             at org.apache.geode.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5598)
>             at org.apache.geode.internal.cache.LocalRegionDataView.putEntry(LocalRegionDataView.java:157)
>             at org.apache.geode.internal.cache.LocalRegion.basicPut(LocalRegion.java:5053)
>             at org.apache.geode.internal.cache.LocalRegion.validatedPut(LocalRegion.java:1649)
>             at org.apache.geode.internal.cache.LocalRegion.put(LocalRegion.java:1636)
>             at org.apache.geode.internal.cache.AbstractRegion.put(AbstractRegion.java:445)
>             at org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest.serverMultiKeyExecution_ThrowException(PRClientServerRegionFunctionExecutionDUnitTest.java:916)
>             at org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest.lambda$testserverMultiKeyExecution_ThrowException$bb17a952$1(PRClientServerRegionFunctionExecutionDUnitTest.java:380)
>
>         Caused by:
>         java.net.SocketException: Socket closed
>             at java.net.SocketInputStream.read(SocketInputStream.java:204)
>             at java.net.SocketInputStream.read(SocketInputStream.java:141)
>             ...
> {code}
> {code:java}
> ReconnectDUnitTest > testReconnectWithRoleLoss FAILED
>     java.lang.AssertionError: Suspicious strings were written to t
[jira] [Updated] (GEODE-10063) A closed/destroyed connection can be set as a primary queueConnection in QueueManager
[ https://issues.apache.org/jira/browse/GEODE-10063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eric Shu updated GEODE-10063:
-----------------------------
    Labels: GeodeOperationAPI blocks-1.15.0  (was: blocks-1.15.0)

> A closed/destroyed connection can be set as a primary queueConnection in QueueManager
> -------------------------------------------------------------------------------------
>
>                 Key: GEODE-10063
>                 URL: https://issues.apache.org/jira/browse/GEODE-10063
>             Project: Geode
>          Issue Type: Bug
>          Components: client queues, security
>    Affects Versions: 1.15.0
>            Reporter: Eric Shu
>            Assignee: Eric Shu
>            Priority: Major
>              Labels: GeodeOperationAPI, blocks-1.15.0
>
> In certain race cases, a destroyed connection is set to be the primary queue connection connected to servers. If re-auth is enabled, and server pauses the primary queue waiting for the re-auth token, there will be no client to server connection available to send the valid re-auth token for server to unpause the queue. And the said client can not receive any events afterwards.
> The situation should be detected during RedundancySatisfierTask, but it could not.
[jira] [Commented] (GEODE-10052) CI Failure: OutOfMemoryDUnitTest tests of Publish command fail expecting exception that was not thrown
[ https://issues.apache.org/jira/browse/GEODE-10052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493561#comment-17493561 ]

ASF subversion and git services commented on GEODE-10052:
---------------------------------------------------------

Commit 920268b675ebb515a38a1c1e4ead980d29f934ef in geode's branch refs/heads/develop from Donal Evans
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=920268b ]

GEODE-10052: Fix flakiness in OutOfMemoryDUnitTest (#7369)

- Add larger values to get over critical memory threshold faster and make it more likely to stay there
- Force GC before each attempt to add a large value to make it less likely that we'll drop back below critical threshold due to small objects being GCd while over critical threshold

> CI Failure: OutOfMemoryDUnitTest tests of Publish command fail expecting exception that was not thrown
> ------------------------------------------------------------------------------------------------------
>
>                 Key: GEODE-10052
>                 URL: https://issues.apache.org/jira/browse/GEODE-10052
>             Project: Geode
>          Issue Type: Bug
>          Components: redis
>    Affects Versions: 1.16.0
>            Reporter: Hale Bales
>            Assignee: Donal Evans
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 1.16.0
>
> There were three failures within a couple of days. They are all in publish tests.
> {code:java}
> OutOfMemoryDUnitTest > shouldReturnOOMError_forPublish_whenThresholdReached FAILED
>     java.lang.AssertionError:
>     Expecting code to raise a throwable.
>         at org.apache.geode.redis.OutOfMemoryDUnitTest.addMultipleKeysToServer1UntilOOMExceptionIsThrown(OutOfMemoryDUnitTest.java:357)
>         at org.apache.geode.redis.OutOfMemoryDUnitTest.fillServer1Memory(OutOfMemoryDUnitTest.java:344)
>         at org.apache.geode.redis.OutOfMemoryDUnitTest.shouldReturnOOMError_forPublish_whenThresholdReached(OutOfMemoryDUnitTest.java:210)
> {code}
> {code:java}
> OutOfMemoryDUnitTest > shouldReturnOOMError_forPublish_whenThresholdReached FAILED
>     java.lang.AssertionError:
>     Expecting code to raise a throwable.
>         at org.apache.geode.redis.OutOfMemoryDUnitTest.addMultipleKeysToServer1UntilOOMExceptionIsThrown(OutOfMemoryDUnitTest.java:357)
>         at org.apache.geode.redis.OutOfMemoryDUnitTest.fillServer1Memory(OutOfMemoryDUnitTest.java:344)
>         at org.apache.geode.redis.OutOfMemoryDUnitTest.shouldReturnOOMError_forPublish_whenThresholdReached(OutOfMemoryDUnitTest.java:210)
> {code}
> {code:java}
> OutOfMemoryDUnitTest > shouldAllowPublish_afterDroppingBelowCriticalThreshold FAILED
>     org.awaitility.core.ConditionTimeoutException: Assertion condition defined as a org.apache.geode.redis.OutOfMemoryDUnitTest
>     Expecting code to raise a throwable within 5 minutes.
>         at org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:164)
>         at org.awaitility.core.AssertionCondition.await(AssertionCondition.java:119)
>         at org.awaitility.core.AssertionCondition.await(AssertionCondition.java:31)
>         at org.awaitility.core.ConditionFactory.until(ConditionFactory.java:939)
>         at org.awaitility.core.ConditionFactory.untilAsserted(ConditionFactory.java:723)
>         at org.apache.geode.redis.OutOfMemoryDUnitTest.shouldAllowPublish_afterDroppingBelowCriticalThreshold(OutOfMemoryDUnitTest.java:328)
>
>         Caused by:
>         java.lang.AssertionError:
>         Expecting code to raise a throwable.
>             at org.apache.geode.redis.OutOfMemoryDUnitTest.lambda$shouldAllowPublish_afterDroppingBelowCriticalThreshold$36(OutOfMemoryDUnitTest.java:328)
> {code}
[jira] [Updated] (GEODE-10063) A closed/destroyed connection can be set as a primary queueConnection in QueueManager
[ https://issues.apache.org/jira/browse/GEODE-10063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eric Shu updated GEODE-10063:
-----------------------------
    Description:
In certain race cases, a destroyed connection is set to be the primary queue connection connected to servers. If re-auth is enabled, and server pauses the primary queue waiting for the re-auth token, there will be no client to server connection available to send the valid re-auth token for server to unpause the queue. And the said client can not receive any events afterwards. The situation should be detected during RedundancySatisfierTask, but it could not.

  was:
In certain race cases, a destroyed connection is set to be the primary queue connection connected to servers. If re-auth is enabled, and server pauses the primary queue waiting for the re-auth token, there will be no client to server connection available to send the valid re-auth token for server to unpause the queue. And said client can not receive any events. The situation should be detected during RedundancySatisfierTask, but it could not.

> A closed/destroyed connection can be set as a primary queueConnection in QueueManager
> -------------------------------------------------------------------------------------
>
>                 Key: GEODE-10063
>                 URL: https://issues.apache.org/jira/browse/GEODE-10063
>             Project: Geode
>          Issue Type: Bug
>          Components: client queues, security
>    Affects Versions: 1.15.0
>            Reporter: Eric Shu
>            Assignee: Eric Shu
>            Priority: Major
>              Labels: GeodeOperationAPI, blocks-1.15.0
[jira] [Updated] (GEODE-10063) A closed/destroyed connection can be set as a primary queueConnection in QueueManager
[ https://issues.apache.org/jira/browse/GEODE-10063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eric Shu updated GEODE-10063:
-----------------------------
    Labels: blocks-1.15.0  (was: needsTriage)

> A closed/destroyed connection can be set as a primary queueConnection in QueueManager
> -------------------------------------------------------------------------------------
>
>                 Key: GEODE-10063
>                 URL: https://issues.apache.org/jira/browse/GEODE-10063
>             Project: Geode
>          Issue Type: Bug
>          Components: client queues, security
>    Affects Versions: 1.15.0
>            Reporter: Eric Shu
>            Assignee: Eric Shu
>            Priority: Major
>              Labels: blocks-1.15.0
[jira] [Updated] (GEODE-10063) A closed/destroyed connection can be set as a primary queueConnection in QueueManager
[ https://issues.apache.org/jira/browse/GEODE-10063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexander Murmann updated GEODE-10063:
--------------------------------------
    Labels: needsTriage  (was: )

> A closed/destroyed connection can be set as a primary queueConnection in QueueManager
> -------------------------------------------------------------------------------------
>
>                 Key: GEODE-10063
>                 URL: https://issues.apache.org/jira/browse/GEODE-10063
>             Project: Geode
>          Issue Type: Bug
>          Components: client queues
>    Affects Versions: 1.15.0
>            Reporter: Eric Shu
>            Priority: Major
>              Labels: needsTriage
[jira] [Assigned] (GEODE-10063) A closed/destroyed connection can be set as a primary queueConnection in QueueManager
[ https://issues.apache.org/jira/browse/GEODE-10063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eric Shu reassigned GEODE-10063:
--------------------------------
    Assignee: Eric Shu

> A closed/destroyed connection can be set as a primary queueConnection in QueueManager
> -------------------------------------------------------------------------------------
>
>                 Key: GEODE-10063
>                 URL: https://issues.apache.org/jira/browse/GEODE-10063
>             Project: Geode
>          Issue Type: Bug
>          Components: client queues, security
>    Affects Versions: 1.15.0
>            Reporter: Eric Shu
>            Assignee: Eric Shu
>            Priority: Major
>              Labels: needsTriage
[jira] [Created] (GEODE-10063) A closed/destroyed connection can be set as a primary queueConnection in QueueManager
Eric Shu created GEODE-10063:
--------------------------------

             Summary: A closed/destroyed connection can be set as a primary queueConnection in QueueManager
                 Key: GEODE-10063
                 URL: https://issues.apache.org/jira/browse/GEODE-10063
             Project: Geode
          Issue Type: Bug
          Components: client queues
    Affects Versions: 1.15.0
            Reporter: Eric Shu

In certain race cases, a destroyed connection is set to be the primary queue connection connected to servers. If re-auth is enabled, and server pauses the primary queue waiting for the re-auth token, there will be no client to server connection available to send the valid re-auth token for server to unpause the queue. And said client can not receive any events.

The situation should be detected during RedundancySatisfierTask, but it could not.
[jira] [Updated] (GEODE-10063) A closed/destroyed connection can be set as a primary queueConnection in QueueManager
[ https://issues.apache.org/jira/browse/GEODE-10063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eric Shu updated GEODE-10063:
-----------------------------
    Component/s: security

> A closed/destroyed connection can be set as a primary queueConnection in QueueManager
> -------------------------------------------------------------------------------------
>
>                 Key: GEODE-10063
>                 URL: https://issues.apache.org/jira/browse/GEODE-10063
>             Project: Geode
>          Issue Type: Bug
>          Components: client queues, security
>    Affects Versions: 1.15.0
>            Reporter: Eric Shu
>            Priority: Major
>              Labels: needsTriage
[jira] [Resolved] (GEODE-10052) CI Failure: OutOfMemoryDUnitTest tests of Publish command fail expecting exception that was not thrown
[ https://issues.apache.org/jira/browse/GEODE-10052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Donal Evans resolved GEODE-10052.
---------------------------------
    Fix Version/s: 1.16.0
       Resolution: Fixed

> CI Failure: OutOfMemoryDUnitTest tests of Publish command fail expecting exception that was not thrown
> ------------------------------------------------------------------------------------------------------
>
>                 Key: GEODE-10052
>                 URL: https://issues.apache.org/jira/browse/GEODE-10052
>             Project: Geode
>          Issue Type: Bug
>          Components: redis
>    Affects Versions: 1.16.0
>            Reporter: Hale Bales
>            Assignee: Donal Evans
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 1.16.0
>
> There were three failures within a couple of days. They are all in publish tests.
[jira] [Updated] (GEODE-9817) Allow analyze serializables tests to provide custom source set paths to ClassAnalysisRule
[ https://issues.apache.org/jira/browse/GEODE-9817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Owen Nichols updated GEODE-9817: Fix Version/s: 1.14.4 > Allow analyze serializables tests to provide custom source set paths to > ClassAnalysisRule > - > > Key: GEODE-9817 > URL: https://issues.apache.org/jira/browse/GEODE-9817 > Project: Geode > Issue Type: Wish > Components: tests >Reporter: Kirk Lund >Assignee: Kirk Lund >Priority: Major > Labels: pull-request-available > Fix For: 1.14.4, 1.15.0 > > > To make SanctionedSerializablesService and the related tests > more pluggable by external modules, I need to make changes to allow analyze > serializables tests to provide custom source set paths to ClassAnalysisRule.
[jira] [Created] (GEODE-10062) Update Native Client Docs to minimize redirects
Dave Barnes created GEODE-10062: --- Summary: Update Native Client Docs to minimize redirects Key: GEODE-10062 URL: https://issues.apache.org/jira/browse/GEODE-10062 Project: Geode Issue Type: Improvement Components: docs, native client Affects Versions: 1.14.3 Reporter: Dave Barnes Geode Native Client doc sources could be improved for ease of building and use by reducing dependence on the Ruby redirect feature, through measures such as:
- Replacing Ruby redirects, where feasible, with template variables, e.g. for links to API docs and the server guide
- Standardizing template variables, e.g. using 'serverman' consistently and retiring alternatives such as 'geodeman'
[jira] [Updated] (GEODE-10060) Improve performance of serialization filter
[ https://issues.apache.org/jira/browse/GEODE-10060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated GEODE-10060: --- Labels: GeodeOperationAPI pull-request-available (was: GeodeOperationAPI) > Improve performance of serialization filter > --- > > Key: GEODE-10060 > URL: https://issues.apache.org/jira/browse/GEODE-10060 > Project: Geode > Issue Type: Improvement > Components: core, serialization >Reporter: Kirk Lund >Assignee: Kirk Lund >Priority: Major > Labels: GeodeOperationAPI, pull-request-available > > The goal of this ticket is to identify various things we could do to improve > the performance of how Geode configures and uses serialization filters.
[jira] [Updated] (GEODE-10061) Mass-Test-Run: ReconnectDUnitTest > testReconnectWithRoleLoss
[ https://issues.apache.org/jira/browse/GEODE-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kristen updated GEODE-10061: Summary: Mass-Test-Run: ReconnectDUnitTest > testReconnectWithRoleLoss (was: Mass-Test-Run: PRClientServerRegionFunctionExecutionDUnitTest > testserverMultiKeyExecution_ThrowException[ExecuteFunctionById] FAILED) > Mass-Test-Run: ReconnectDUnitTest > testReconnectWithRoleLoss > - > > Key: GEODE-10061 > URL: https://issues.apache.org/jira/browse/GEODE-10061 > Project: Geode > Issue Type: Bug >Affects Versions: 1.16.0 >Reporter: Kristen >Priority: Major > Labels: needsTriage > > {code:java} > > Task :geode-core:distributedTest > PRClientServerRegionFunctionExecutionDUnitTest > > testserverMultiKeyExecution_ThrowException[ExecuteFunctionById] FAILED > org.apache.geode.test.dunit.RMIException: While invoking > org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest$$Lambda$337/1157292204.run > in VM 3 running on Host > heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1.c.apachegeode-ci.internal > with 4 VMs > at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:631) > at org.apache.geode.test.dunit.VM.invoke(VM.java:448) > at > org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest.testserverMultiKeyExecution_ThrowException(PRClientServerRegionFunctionExecutionDUnitTest.java:379) > Caused by: > org.apache.geode.cache.client.ServerConnectivityException: Pool > unexpected Socket closed connection=Pooled Connection to > heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1.c.apachegeode-ci.internal:20923,heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1(304549):42669: > Connection[DESTROYED] attempt=3). 
Server unreachable: could not connect > after 3 attempts > at > org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:671) > at > org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:502) > at > org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:155) > at > org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:120) > at > org.apache.geode.cache.client.internal.PoolImpl.execute(PoolImpl.java:805) > at > org.apache.geode.cache.client.internal.PutOp.execute(PutOp.java:92) > at > org.apache.geode.cache.client.internal.ServerRegionProxy.put(ServerRegionProxy.java:158) > at > org.apache.geode.internal.cache.LocalRegion.serverPut(LocalRegion.java:3048) > at > org.apache.geode.internal.cache.LocalRegion.cacheWriteBeforePut(LocalRegion.java:3163) > at > org.apache.geode.internal.cache.ProxyRegionMap.basicPut(ProxyRegionMap.java:238) > at > org.apache.geode.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5620) > at > org.apache.geode.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5598) > at > org.apache.geode.internal.cache.LocalRegionDataView.putEntry(LocalRegionDataView.java:157) > at > org.apache.geode.internal.cache.LocalRegion.basicPut(LocalRegion.java:5053) > at > org.apache.geode.internal.cache.LocalRegion.validatedPut(LocalRegion.java:1649) > at > org.apache.geode.internal.cache.LocalRegion.put(LocalRegion.java:1636) > at > org.apache.geode.internal.cache.AbstractRegion.put(AbstractRegion.java:445) > at > org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest.serverMultiKeyExecution_ThrowException(PRClientServerRegionFunctionExecutionDUnitTest.java:916) > at > org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest.lambda$testserverMultiKeyExecution_ThrowException$bb17a952$1(PRClientServerRegionFunctionExecutionDUnitTest.java:380) > Caused by: > 
java.net.SocketException: Socket closed > at java.net.SocketInputStream.read(SocketInputStream.java:204) > at java.net.SocketInputStream.read(SocketInputStream.java:141) > ... > {code} > {code:java} > ReconnectDUnitTest > testReconnectWithRoleLoss FAILED > java.lang.AssertionError: Suspicious strings were written to the log > during this run. > Fix the strings or use IgnoredException.addIgnoredException to ignore. > --- > Found suspect string in 'dunit_suspect-vm0.log' at line 444 > [fatal 2022/02/12 20:58:08.620 UTC receiver,heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1-64966> tid=108] > Membershi
[jira] [Updated] (GEODE-10061) Mass-Test-Run: PRClientServerRegionFunctionExecutionDUnitTest > testserverMultiKeyExecution_ThrowException[ExecuteFunctionById] FAILED
[ https://issues.apache.org/jira/browse/GEODE-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kristen updated GEODE-10061: Description: {code:java} > Task :geode-core:distributedTest PRClientServerRegionFunctionExecutionDUnitTest > testserverMultiKeyExecution_ThrowException[ExecuteFunctionById] FAILED org.apache.geode.test.dunit.RMIException: While invoking org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest$$Lambda$337/1157292204.run in VM 3 running on Host heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1.c.apachegeode-ci.internal with 4 VMs at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:631) at org.apache.geode.test.dunit.VM.invoke(VM.java:448) at org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest.testserverMultiKeyExecution_ThrowException(PRClientServerRegionFunctionExecutionDUnitTest.java:379) Caused by: org.apache.geode.cache.client.ServerConnectivityException: Pool unexpected Socket closed connection=Pooled Connection to heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1.c.apachegeode-ci.internal:20923,heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1(304549):42669: Connection[DESTROYED] attempt=3). 
Server unreachable: could not connect after 3 attempts at org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:671) at org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:502) at org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:155) at org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:120) at org.apache.geode.cache.client.internal.PoolImpl.execute(PoolImpl.java:805) at org.apache.geode.cache.client.internal.PutOp.execute(PutOp.java:92) at org.apache.geode.cache.client.internal.ServerRegionProxy.put(ServerRegionProxy.java:158) at org.apache.geode.internal.cache.LocalRegion.serverPut(LocalRegion.java:3048) at org.apache.geode.internal.cache.LocalRegion.cacheWriteBeforePut(LocalRegion.java:3163) at org.apache.geode.internal.cache.ProxyRegionMap.basicPut(ProxyRegionMap.java:238) at org.apache.geode.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5620) at org.apache.geode.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5598) at org.apache.geode.internal.cache.LocalRegionDataView.putEntry(LocalRegionDataView.java:157) at org.apache.geode.internal.cache.LocalRegion.basicPut(LocalRegion.java:5053) at org.apache.geode.internal.cache.LocalRegion.validatedPut(LocalRegion.java:1649) at org.apache.geode.internal.cache.LocalRegion.put(LocalRegion.java:1636) at org.apache.geode.internal.cache.AbstractRegion.put(AbstractRegion.java:445) at org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest.serverMultiKeyExecution_ThrowException(PRClientServerRegionFunctionExecutionDUnitTest.java:916) at org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest.lambda$testserverMultiKeyExecution_ThrowException$bb17a952$1(PRClientServerRegionFunctionExecutionDUnitTest.java:380) Caused by: java.net.SocketException: Socket closed at 
java.net.SocketInputStream.read(SocketInputStream.java:204) at java.net.SocketInputStream.read(SocketInputStream.java:141) ... {code} {code:java} ReconnectDUnitTest > testReconnectWithRoleLoss FAILED java.lang.AssertionError: Suspicious strings were written to the log during this run. Fix the strings or use IgnoredException.addIgnoredException to ignore. --- Found suspect string in 'dunit_suspect-vm0.log' at line 444 [fatal 2022/02/12 20:58:08.620 UTC tid=108] Membership service failure: Member isn't responding to heartbeat requests org.apache.geode.distributed.internal.membership.api.MemberDisconnectedException: Member isn't responding to heartbeat requests at org.apache.geode.distributed.internal.membership.gms.GMSMembership$ManagerImpl.forceDisconnect(GMSMembership.java:1806) at org.apache.geode.distributed.internal.membership.gms.membership.GMSJoinLeave.forceDisconnect(GMSJoinLeave.java:1120) at org.apache.geode.distributed.internal.membership.gms.membership.GMSJoinLeave.processRemoveMemberMessage(GMSJoinLeave.java:723) at org.apache.geode.distributed.internal.membership.gms.messenger.JGroupsMessenger$JGroupsReceiver.receive(JGroupsMessenger.java:1367) at org.apache.geode.distributed.internal.membership.gms.messenger.JGroupsMessenger$JGrou
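The `testReconnectWithRoleLoss` failure above says to "use IgnoredException.addIgnoredException" so the expected fatal membership log line is not flagged as a suspect string. A simplified, self-contained sketch of that suspect-string filtering idea (class and method shapes here are illustrative stand-ins, not Geode's DUnit implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class SuspectStringFilter {
    // Simplified model of DUnit's suspect-string check: fatal/suspect log
    // lines fail the run unless they match a registered ignore pattern.
    private final List<Pattern> ignored = new ArrayList<>();

    void addIgnoredException(String regex) {
        ignored.add(Pattern.compile(regex));
    }

    List<String> suspects(List<String> logLines) {
        List<String> out = new ArrayList<>();
        for (String line : logLines) {
            boolean isIgnored = ignored.stream().anyMatch(p -> p.matcher(line).find());
            if (!isIgnored) {
                out.add(line); // unmatched fatal line remains a suspect
            }
        }
        return out;
    }

    public static void main(String[] args) {
        SuspectStringFilter filter = new SuspectStringFilter();
        // Registering the expected failure keeps it from failing the test run.
        filter.addIgnoredException("MemberDisconnectedException");
        List<String> suspects = filter.suspects(List.of(
                "[fatal] Membership service failure: MemberDisconnectedException",
                "[fatal] Some other unexpected failure"));
        System.out.println(suspects); // only the unregistered line remains
    }
}
```

In the real test, a forced disconnect is expected during reconnect, so registering the membership-failure message as ignored is the suggested fix for this class of assertion failure.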
[jira] [Updated] (GEODE-10061) Mass-Test-Run: PRClientServerRegionFunctionExecutionDUnitTest > testserverMultiKeyExecution_ThrowException[ExecuteFunctionById] FAILED
[ https://issues.apache.org/jira/browse/GEODE-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kristen updated GEODE-10061: Affects Version/s: 1.16.0 > Mass-Test-Run: PRClientServerRegionFunctionExecutionDUnitTest > > testserverMultiKeyExecution_ThrowException[ExecuteFunctionById] FAILED > -- > > Key: GEODE-10061 > URL: https://issues.apache.org/jira/browse/GEODE-10061 > Project: Geode > Issue Type: Bug >Affects Versions: 1.16.0 >Reporter: Kristen >Priority: Major > Labels: needsTriage > > {code:java} > > Task :geode-core:distributedTest > PRClientServerRegionFunctionExecutionDUnitTest > > testserverMultiKeyExecution_ThrowException[ExecuteFunctionById] FAILED > org.apache.geode.test.dunit.RMIException: While invoking > org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest$$Lambda$337/1157292204.run > in VM 3 running on Host > heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1.c.apachegeode-ci.internal > with 4 VMs > at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:631) > at org.apache.geode.test.dunit.VM.invoke(VM.java:448) > at > org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest.testserverMultiKeyExecution_ThrowException(PRClientServerRegionFunctionExecutionDUnitTest.java:379) > Caused by: > org.apache.geode.cache.client.ServerConnectivityException: Pool > unexpected Socket closed connection=Pooled Connection to > heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1.c.apachegeode-ci.internal:20923,heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1(304549):42669: > Connection[DESTROYED] attempt=3). 
Server unreachable: could not connect > after 3 attempts > at > org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:671) > at > org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:502) > at > org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:155) > at > org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:120) > at > org.apache.geode.cache.client.internal.PoolImpl.execute(PoolImpl.java:805) > at > org.apache.geode.cache.client.internal.PutOp.execute(PutOp.java:92) > at > org.apache.geode.cache.client.internal.ServerRegionProxy.put(ServerRegionProxy.java:158) > at > org.apache.geode.internal.cache.LocalRegion.serverPut(LocalRegion.java:3048) > at > org.apache.geode.internal.cache.LocalRegion.cacheWriteBeforePut(LocalRegion.java:3163) > at > org.apache.geode.internal.cache.ProxyRegionMap.basicPut(ProxyRegionMap.java:238) > at > org.apache.geode.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5620) > at > org.apache.geode.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5598) > at > org.apache.geode.internal.cache.LocalRegionDataView.putEntry(LocalRegionDataView.java:157) > at > org.apache.geode.internal.cache.LocalRegion.basicPut(LocalRegion.java:5053) > at > org.apache.geode.internal.cache.LocalRegion.validatedPut(LocalRegion.java:1649) > at > org.apache.geode.internal.cache.LocalRegion.put(LocalRegion.java:1636) > at > org.apache.geode.internal.cache.AbstractRegion.put(AbstractRegion.java:445) > at > org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest.serverMultiKeyExecution_ThrowException(PRClientServerRegionFunctionExecutionDUnitTest.java:916) > at > org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest.lambda$testserverMultiKeyExecution_ThrowException$bb17a952$1(PRClientServerRegionFunctionExecutionDUnitTest.java:380) > Caused by: > 
java.net.SocketException: Socket closed > at java.net.SocketInputStream.read(SocketInputStream.java:204) > at java.net.SocketInputStream.read(SocketInputStream.java:141) > ... > {code} > {code:java} > ReconnectDUnitTest > testReconnectWithRoleLoss FAILED > java.lang.AssertionError: Suspicious strings were written to the log during > this run. Fix the strings or use IgnoredException.addIgnoredException to > ignore. > --- > Found suspect string in 'dunit_suspect-vm0.log' at line 444 [fatal > 2022/02/12 20:58:08.620 UTC receiver,heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1-64966> tid=108] > Membership service failure: Member isn't resp
[jira] [Updated] (GEODE-10060) Improve performance of serialization filter
[ https://issues.apache.org/jira/browse/GEODE-10060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirk Lund updated GEODE-10060: -- Description: The goal of this ticket is to identify various things we could do to improve the performance of how Geode configures and uses serialization filters. (was: The goal of this ticket is to identify various things we could do to improve the performance of serialization filtering that Geode configures and uses.) > Improve performance of serialization filter > --- > > Key: GEODE-10060 > URL: https://issues.apache.org/jira/browse/GEODE-10060 > Project: Geode > Issue Type: Improvement > Components: core, serialization >Reporter: Kirk Lund >Assignee: Kirk Lund >Priority: Major > Labels: GeodeOperationAPI > > The goal of this ticket is to identify various things we could do to improve > the performance of how Geode configures and uses serialization filters.
[jira] [Updated] (GEODE-10061) Mass-Test-Run: PRClientServerRegionFunctionExecutionDUnitTest > testserverMultiKeyExecution_ThrowException[ExecuteFunctionById] FAILED
[ https://issues.apache.org/jira/browse/GEODE-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kristen updated GEODE-10061: Summary: Mass-Test-Run: PRClientServerRegionFunctionExecutionDUnitTest > testserverMultiKeyExecution_ThrowException[ExecuteFunctionById] FAILED (was: CI: ) > Mass-Test-Run: PRClientServerRegionFunctionExecutionDUnitTest > > testserverMultiKeyExecution_ThrowException[ExecuteFunctionById] FAILED > -- > > Key: GEODE-10061 > URL: https://issues.apache.org/jira/browse/GEODE-10061 > Project: Geode > Issue Type: Bug >Reporter: Kristen >Priority: Major > Labels: needsTriage > > {code:java} > > Task :geode-core:distributedTest > PRClientServerRegionFunctionExecutionDUnitTest > > testserverMultiKeyExecution_ThrowException[ExecuteFunctionById] FAILED > org.apache.geode.test.dunit.RMIException: While invoking > org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest$$Lambda$337/1157292204.run > in VM 3 running on Host > heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1.c.apachegeode-ci.internal > with 4 VMs > at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:631) > at org.apache.geode.test.dunit.VM.invoke(VM.java:448) > at > org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest.testserverMultiKeyExecution_ThrowException(PRClientServerRegionFunctionExecutionDUnitTest.java:379) > Caused by: > org.apache.geode.cache.client.ServerConnectivityException: Pool > unexpected Socket closed connection=Pooled Connection to > heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1.c.apachegeode-ci.internal:20923,heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1(304549):42669: > Connection[DESTROYED] attempt=3). 
Server unreachable: could not connect > after 3 attempts > at > org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:671) > at > org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:502) > at > org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:155) > at > org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:120) > at > org.apache.geode.cache.client.internal.PoolImpl.execute(PoolImpl.java:805) > at > org.apache.geode.cache.client.internal.PutOp.execute(PutOp.java:92) > at > org.apache.geode.cache.client.internal.ServerRegionProxy.put(ServerRegionProxy.java:158) > at > org.apache.geode.internal.cache.LocalRegion.serverPut(LocalRegion.java:3048) > at > org.apache.geode.internal.cache.LocalRegion.cacheWriteBeforePut(LocalRegion.java:3163) > at > org.apache.geode.internal.cache.ProxyRegionMap.basicPut(ProxyRegionMap.java:238) > at > org.apache.geode.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5620) > at > org.apache.geode.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5598) > at > org.apache.geode.internal.cache.LocalRegionDataView.putEntry(LocalRegionDataView.java:157) > at > org.apache.geode.internal.cache.LocalRegion.basicPut(LocalRegion.java:5053) > at > org.apache.geode.internal.cache.LocalRegion.validatedPut(LocalRegion.java:1649) > at > org.apache.geode.internal.cache.LocalRegion.put(LocalRegion.java:1636) > at > org.apache.geode.internal.cache.AbstractRegion.put(AbstractRegion.java:445) > at > org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest.serverMultiKeyExecution_ThrowException(PRClientServerRegionFunctionExecutionDUnitTest.java:916) > at > org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest.lambda$testserverMultiKeyExecution_ThrowException$bb17a952$1(PRClientServerRegionFunctionExecutionDUnitTest.java:380) > Caused by: > 
java.net.SocketException: Socket closed > at java.net.SocketInputStream.read(SocketInputStream.java:204) > at java.net.SocketInputStream.read(SocketInputStream.java:141) > ... > {code} > {code:java} > ReconnectDUnitTest > testReconnectWithRoleLoss FAILED > java.lang.AssertionError: Suspicious strings were written to the log during > this run. Fix the strings or use IgnoredException.addIgnoredException to > ignore. > --- > Found suspect string in 'dunit_suspect-vm0.log' at line 444 [fatal > 2022/02/12 20:58:08.620 UTC receiver,heavy-lifte
[jira] [Updated] (GEODE-10060) Improve performance of serialization filter
[ https://issues.apache.org/jira/browse/GEODE-10060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirk Lund updated GEODE-10060: -- Labels: GeodeOperationAPI (was: ) > Improve performance of serialization filter > --- > > Key: GEODE-10060 > URL: https://issues.apache.org/jira/browse/GEODE-10060 > Project: Geode > Issue Type: Improvement > Components: serialization >Reporter: Kirk Lund >Assignee: Kirk Lund >Priority: Major > Labels: GeodeOperationAPI > > The goal of this ticket is to identify various things we could do to improve > the performance of serialization filtering that Geode configures and uses.
[jira] [Created] (GEODE-10061) CI:
Kristen created GEODE-10061: --- Summary: CI: Key: GEODE-10061 URL: https://issues.apache.org/jira/browse/GEODE-10061 Project: Geode Issue Type: Bug Reporter: Kristen {code:java} > Task :geode-core:distributedTest PRClientServerRegionFunctionExecutionDUnitTest > testserverMultiKeyExecution_ThrowException[ExecuteFunctionById] FAILED org.apache.geode.test.dunit.RMIException: While invoking org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest$$Lambda$337/1157292204.run in VM 3 running on Host heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1.c.apachegeode-ci.internal with 4 VMs at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:631) at org.apache.geode.test.dunit.VM.invoke(VM.java:448) at org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest.testserverMultiKeyExecution_ThrowException(PRClientServerRegionFunctionExecutionDUnitTest.java:379) Caused by: org.apache.geode.cache.client.ServerConnectivityException: Pool unexpected Socket closed connection=Pooled Connection to heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1.c.apachegeode-ci.internal:20923,heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1(304549):42669: Connection[DESTROYED] attempt=3). 
Server unreachable: could not connect after 3 attempts at org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:671) at org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:502) at org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:155) at org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:120) at org.apache.geode.cache.client.internal.PoolImpl.execute(PoolImpl.java:805) at org.apache.geode.cache.client.internal.PutOp.execute(PutOp.java:92) at org.apache.geode.cache.client.internal.ServerRegionProxy.put(ServerRegionProxy.java:158) at org.apache.geode.internal.cache.LocalRegion.serverPut(LocalRegion.java:3048) at org.apache.geode.internal.cache.LocalRegion.cacheWriteBeforePut(LocalRegion.java:3163) at org.apache.geode.internal.cache.ProxyRegionMap.basicPut(ProxyRegionMap.java:238) at org.apache.geode.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5620) at org.apache.geode.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5598) at org.apache.geode.internal.cache.LocalRegionDataView.putEntry(LocalRegionDataView.java:157) at org.apache.geode.internal.cache.LocalRegion.basicPut(LocalRegion.java:5053) at org.apache.geode.internal.cache.LocalRegion.validatedPut(LocalRegion.java:1649) at org.apache.geode.internal.cache.LocalRegion.put(LocalRegion.java:1636) at org.apache.geode.internal.cache.AbstractRegion.put(AbstractRegion.java:445) at org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest.serverMultiKeyExecution_ThrowException(PRClientServerRegionFunctionExecutionDUnitTest.java:916) at org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest.lambda$testserverMultiKeyExecution_ThrowException$bb17a952$1(PRClientServerRegionFunctionExecutionDUnitTest.java:380) Caused by: java.net.SocketException: Socket closed at 
java.net.SocketInputStream.read(SocketInputStream.java:204) at java.net.SocketInputStream.read(SocketInputStream.java:141) ... {code} {code:java} ReconnectDUnitTest > testReconnectWithRoleLoss FAILED java.lang.AssertionError: Suspicious strings were written to the log during this run. Fix the strings or use IgnoredException.addIgnoredException to ignore. --- Found suspect string in 'dunit_suspect-vm0.log' at line 444 [fatal 2022/02/12 20:58:08.620 UTC tid=108] Membership service failure: Member isn't responding to heartbeat requests org.apache.geode.distributed.internal.membership.api.MemberDisconnectedException: Member isn't responding to heartbeat requests at org.apache.geode.distributed.internal.membership.gms.GMSMembership$ManagerImpl.forceDisconnect(GMSMembership.java:1806) at org.apache.geode.distributed.internal.membership.gms.membership.GMSJoinLeave.forceDisconnect(GMSJoinLeave.java:1120) at org.apache.geode.distributed.internal.membership.gms.membership.GMSJoinLeave.processRemoveMemberMessage(GMSJoinLeave.java:723) at org.apache.geode.distributed.internal.membership.gms.messenger.JGroupsMessenger$JGroupsReceiver.receive(JGroupsMessenger.java:1367) at or
[jira] [Updated] (GEODE-10061) CI:
[ https://issues.apache.org/jira/browse/GEODE-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Murmann updated GEODE-10061: -- Labels: needsTriage (was: ) > CI: > > > Key: GEODE-10061 > URL: https://issues.apache.org/jira/browse/GEODE-10061 > Project: Geode > Issue Type: Bug >Reporter: Kristen >Priority: Major > Labels: needsTriage > > {code:java} > > Task :geode-core:distributedTest > PRClientServerRegionFunctionExecutionDUnitTest > > testserverMultiKeyExecution_ThrowException[ExecuteFunctionById] FAILED > org.apache.geode.test.dunit.RMIException: While invoking > org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest$$Lambda$337/1157292204.run > in VM 3 running on Host > heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1.c.apachegeode-ci.internal > with 4 VMs > at org.apache.geode.test.dunit.VM.executeMethodOnObject(VM.java:631) > at org.apache.geode.test.dunit.VM.invoke(VM.java:448) > at > org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest.testserverMultiKeyExecution_ThrowException(PRClientServerRegionFunctionExecutionDUnitTest.java:379) > Caused by: > org.apache.geode.cache.client.ServerConnectivityException: Pool > unexpected Socket closed connection=Pooled Connection to > heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1.c.apachegeode-ci.internal:20923,heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1(304549):42669: > Connection[DESTROYED] attempt=3). 
Server unreachable: could not connect > after 3 attempts > at > org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:671) > at > org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:502) > at > org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:155) > at > org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:120) > at > org.apache.geode.cache.client.internal.PoolImpl.execute(PoolImpl.java:805) > at > org.apache.geode.cache.client.internal.PutOp.execute(PutOp.java:92) > at > org.apache.geode.cache.client.internal.ServerRegionProxy.put(ServerRegionProxy.java:158) > at > org.apache.geode.internal.cache.LocalRegion.serverPut(LocalRegion.java:3048) > at > org.apache.geode.internal.cache.LocalRegion.cacheWriteBeforePut(LocalRegion.java:3163) > at > org.apache.geode.internal.cache.ProxyRegionMap.basicPut(ProxyRegionMap.java:238) > at > org.apache.geode.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5620) > at > org.apache.geode.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5598) > at > org.apache.geode.internal.cache.LocalRegionDataView.putEntry(LocalRegionDataView.java:157) > at > org.apache.geode.internal.cache.LocalRegion.basicPut(LocalRegion.java:5053) > at > org.apache.geode.internal.cache.LocalRegion.validatedPut(LocalRegion.java:1649) > at > org.apache.geode.internal.cache.LocalRegion.put(LocalRegion.java:1636) > at > org.apache.geode.internal.cache.AbstractRegion.put(AbstractRegion.java:445) > at > org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest.serverMultiKeyExecution_ThrowException(PRClientServerRegionFunctionExecutionDUnitTest.java:916) > at > org.apache.geode.internal.cache.execute.PRClientServerRegionFunctionExecutionDUnitTest.lambda$testserverMultiKeyExecution_ThrowException$bb17a952$1(PRClientServerRegionFunctionExecutionDUnitTest.java:380) > Caused by: > 
java.net.SocketException: Socket closed > at java.net.SocketInputStream.read(SocketInputStream.java:204) > at java.net.SocketInputStream.read(SocketInputStream.java:141) > ... > {code} > {code:java} > ReconnectDUnitTest > testReconnectWithRoleLoss FAILED > java.lang.AssertionError: Suspicious strings were written to the log during > this run. Fix the strings or use IgnoredException.addIgnoredException to > ignore. > --- > Found suspect string in 'dunit_suspect-vm0.log' at line 444 [fatal > 2022/02/12 20:58:08.620 UTC receiver,heavy-lifter-36926f81-68ee-5ab9-b87b-3d77805b70b1-64966> tid=108] > Membership service failure: Member isn't responding to heartbeat requests > org.apache.geode.distributed.internal.membership.api.MemberDisconnectedException: > Member isn't responding to heartbeat requests at > org.apache.geode.distributed.internal.membership.gms.GMSMembership$ManagerImpl.forceDisconn
[jira] [Updated] (GEODE-10060) Improve performance of serialization filter
[ https://issues.apache.org/jira/browse/GEODE-10060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirk Lund updated GEODE-10060: -- Component/s: core > Improve performance of serialization filter > --- > > Key: GEODE-10060 > URL: https://issues.apache.org/jira/browse/GEODE-10060 > Project: Geode > Issue Type: Improvement > Components: core, serialization >Reporter: Kirk Lund >Assignee: Kirk Lund >Priority: Major > Labels: GeodeOperationAPI > > The goal of this ticket is to identify various things we could do to improve > the performance of serialization filtering that Geode configures and uses.
[jira] [Created] (GEODE-10060) Improve performance of serialization filter
Kirk Lund created GEODE-10060: - Summary: Improve performance of serialization filter Key: GEODE-10060 URL: https://issues.apache.org/jira/browse/GEODE-10060 Project: Geode Issue Type: Improvement Components: serialization Reporter: Kirk Lund The goal of this ticket is to identify various things we could do to improve the performance of serialization filtering that Geode configures and uses.
[jira] [Assigned] (GEODE-10060) Improve performance of serialization filter
[ https://issues.apache.org/jira/browse/GEODE-10060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirk Lund reassigned GEODE-10060: - Assignee: Kirk Lund
[jira] [Commented] (GEODE-10059) CI: WANRollingUpgradeNewSenderProcessOldEvent > oldEventShouldBeProcessedAtNewSender[from_v1.13.7] FAILED
[ https://issues.apache.org/jira/browse/GEODE-10059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493540#comment-17493540 ] Geode Integration commented on GEODE-10059: --- Seen on support/1.13 in [upgrade-test-openjdk8 #26|https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-support-1-13-main/jobs/upgrade-test-openjdk8/builds/26] ... see [test results|http://files.apachegeode-ci.info/builds/apache-support-1-13-main/1.13.8-build.0651/test-results/upgradeTest/1645006683/] or download [artifacts|http://files.apachegeode-ci.info/builds/apache-support-1-13-main/1.13.8-build.0651/test-artifacts/1645006683/upgradetestfiles-openjdk8-1.13.8-build.0651.tgz]. > CI: WANRollingUpgradeNewSenderProcessOldEvent > > oldEventShouldBeProcessedAtNewSender[from_v1.13.7] FAILED > - > > Key: GEODE-10059 > URL: https://issues.apache.org/jira/browse/GEODE-10059 > Project: Geode > Issue Type: Bug >Affects Versions: 1.13.8 >Reporter: Kristen >Priority: Major > Labels: needsTriage > > > {code:java} > > Task :geode-wan:upgradeTest > org.apache.geode.cache.wan.WANRollingUpgradeNewSenderProcessOldEvent > > bothOldAndNewEventsShouldBeProcessedByOldSender[from_v1.4.0] FAILED > org.apache.geode.test.dunit.RMIException: While invoking > org.apache.geode.test.dunit.IgnoredException$1.run in VM 5 running on Host > 64b741e8194f with 7 VMs with version 1.3.0 > Caused by: > java.lang.IllegalStateException: VM not available: VM 5 running on > Host 64b741e8194f with 7 VMs with version 1.3.0 > org.apache.geode.cache.wan.WANRollingUpgradeNewSenderProcessOldEvent > > oldEventShouldBeProcessedAtTwoNewSender[from_v1.4.0] FAILED > org.apache.geode.test.dunit.RMIException: While invoking > org.apache.geode.test.dunit.IgnoredException$1.run in VM 5 running on Host > 64b741e8194f with 7 VMs with version 1.3.0 > Caused by: > java.lang.IllegalStateException: VM not available: VM 5 running on > Host 64b741e8194f with 7 VMs with version 1.3.0 > 
org.apache.geode.cache.wan.WANRollingUpgradeNewSenderProcessOldEvent > > oldEventShouldBeProcessedAtNewSender[from_v1.4.0] FAILED > org.apache.geode.test.dunit.RMIException: While invoking > org.apache.geode.test.dunit.IgnoredException$1.run in VM 5 running on Host > 64b741e8194f with 7 VMs with version 1.3.0 > Caused by: > java.lang.IllegalStateException: VM not available: VM 5 running on > Host 64b741e8194f with 7 VMs with version 1.3.0 > org.apache.geode.cache.wan.WANRollingUpgradeNewSenderProcessOldEvent > > bothOldAndNewEventsShouldBeProcessedByOldSender[from_v1.5.0] FAILED > org.apache.geode.test.dunit.RMIException: While invoking > org.apache.geode.test.dunit.IgnoredException$1.run in VM 5 running on Host > 64b741e8194f with 7 VMs with version 1.3.0 > Caused by: > java.lang.IllegalStateException: VM not available: VM 5 running on > Host 64b741e8194f with 7 VMs with version 1.3.0 > org.apache.geode.cache.wan.WANRollingUpgradeNewSenderProcessOldEvent > > oldEventShouldBeProcessedAtTwoNewSender[from_v1.5.0] FAILED > org.apache.geode.test.dunit.RMIException: While invoking > org.apache.geode.test.dunit.IgnoredException$1.run in VM 5 running on Host > 64b741e8194f with 7 VMs with version 1.3.0 > Caused by: > java.lang.IllegalStateException: VM not available: VM 5 running on > Host 64b741e8194f with 7 VMs with version 1.3.0 > org.apache.geode.cache.wan.WANRollingUpgradeNewSenderProcessOldEvent > > oldEventShouldBeProcessedAtNewSender[from_v1.5.0] FAILED > org.apache.geode.test.dunit.RMIException: While invoking > org.apache.geode.test.dunit.IgnoredException$1.run in VM 5 running on Host > 64b741e8194f with 7 VMs with version 1.3.0 > Caused by: > java.lang.IllegalStateException: VM not available: VM 5 running on > Host 64b741e8194f with 7 VMs with version 1.3.0 > org.apache.geode.cache.wan.WANRollingUpgradeNewSenderProcessOldEvent > > bothOldAndNewEventsShouldBeProcessedByOldSender[from_v1.6.0] FAILED > org.apache.geode.test.dunit.RMIException: While invoking > 
org.apache.geode.test.dunit.IgnoredException$1.run in VM 5 running on Host > 64b741e8194f with 7 VMs with version 1.3.0 > Caused by: > java.lang.IllegalStateException: VM not available: VM 5 running on > Host 64b741e8194f with 7 VMs with version 1.3.0 > org.apache.geode.cache.wan.WANRollingUpgradeNewSenderProcessOldEvent > > oldEventShouldBeProcessedAtTwoNewSender[from_v1.6.0] FAILED > org.apache.geode.test.dunit.RMIException: While invoking > org.apache.geode.test.dunit.IgnoredException$1.run in VM 5 running on Host > 64b741e8194f with 7 VMs with version 1.3.0 > Caused by: >
[jira] [Updated] (GEODE-10059) CI: WANRollingUpgradeNewSenderProcessOldEvent > oldEventShouldBeProcessedAtNewSender[from_v1.13.7] FAILED
[ https://issues.apache.org/jira/browse/GEODE-10059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kristen updated GEODE-10059: Summary: CI: WANRollingUpgradeNewSenderProcessOldEvent > oldEventShouldBeProcessedAtNewSender[from_v1.13.7] FAILED (was: CI: geode-wan:upgradeTest FAILED (WANRollingUpgradeNewSenderProcessOldEvent > oldEventShouldBeProcessedAtNewSender[from_v1.13.7] FAILED))
[jira] [Updated] (GEODE-10059) CI: geode-wan:upgradeTest FAILED (WANRollingUpgradeNewSenderProcessOldEvent > oldEventShouldBeProcessedAtNewSender[from_v1.13.7] FAILED)
[ https://issues.apache.org/jira/browse/GEODE-10059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kristen updated GEODE-10059: Summary: CI: geode-wan:upgradeTest FAILED (WANRollingUpgradeNewSenderProcessOldEvent > oldEventShouldBeProcessedAtNewSender[from_v1.13.7] FAILED) (was: CI: geode-wan:upgradeTest FAILED)
[jira] [Updated] (GEODE-10059) CI: geode-wan:upgradeTest FAILED
[ https://issues.apache.org/jira/browse/GEODE-10059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kristen updated GEODE-10059: Description: {code:java} > Task :geode-wan:upgradeTest ... {code}
[jira] [Updated] (GEODE-10059) CI: geode-wan:upgradeTest FAILED
[ https://issues.apache.org/jira/browse/GEODE-10059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Murmann updated GEODE-10059: -- Labels: needsTriage (was: )
[jira] [Created] (GEODE-10059) CI: geode-wan:upgradeTest FAILED
Kristen created GEODE-10059: --- Summary: CI: geode-wan:upgradeTest FAILED Key: GEODE-10059 URL: https://issues.apache.org/jira/browse/GEODE-10059 Project: Geode Issue Type: Bug Affects Versions: 1.13.8 Reporter: Kristen
[jira] [Commented] (GEODE-9694) Remove deprecated elements from QueryCommandDUnitTestBase
[ https://issues.apache.org/jira/browse/GEODE-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493528#comment-17493528 ]

ASF subversion and git services commented on GEODE-9694:

Commit 5d72863357b1529fb245e723081e64bd74ed54b8 in geode's branch refs/heads/develop from Nabarun Nag
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=5d72863 ]
GEODE-9694: Removed deprecated elements from QueryCommandDistributedTestBase (#6959)
* Removed deprecated elements
* Renamed from DUnit to DistributedTest

> Remove deprecated elements from QueryCommandDUnitTestBase
> Key: GEODE-9694
> URL: https://issues.apache.org/jira/browse/GEODE-9694
> Project: Geode
> Issue Type: Bug
> Components: tests
> Reporter: Nabarun Nag
> Priority: Major
> Labels: needsTriage, pull-request-available

-- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (GEODE-9910) Failure to auto-reconnect upon network partition
[ https://issues.apache.org/jira/browse/GEODE-9910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joris Melchior updated GEODE-9910:
Labels: GeodeOperationAPI blocks-1.15.0 needsTriage (was: blocks-1.15.0)

> Failure to auto-reconnect upon network partition
> Key: GEODE-9910
> URL: https://issues.apache.org/jira/browse/GEODE-9910
> Project: Geode
> Issue Type: Bug
> Affects Versions: 1.14.0
> Reporter: Surya Mudundi
> Priority: Major
> Labels: GeodeOperationAPI, blocks-1.15.0, needsTriage
>
> A two-node cluster with embedded locators failed to auto-reconnect: node-1 experienced a network outage for a couple of minutes, and when node-1 recovered from the outage, node-2 failed to auto-reconnect.
>
> node-2 tried to reconnect to node-1:
> [org.apache.geode.distributed.internal.InternalDistributedSystem]-[ReconnectThread] [] Attempting to reconnect to the distributed system. This is attempt #1.
> [org.apache.geode.distributed.internal.InternalDistributedSystem]-[ReconnectThread] [] Attempting to reconnect to the distributed system. This is attempt #2.
> [org.apache.geode.distributed.internal.InternalDistributedSystem]-[ReconnectThread] [] Attempting to reconnect to the distributed system. This is attempt #3.
>
> After 3 attempts it finally reported the error below:
> INFO [org.apache.geode.logging.internal.LoggingProviderLoader]-[ReconnectThread] [] Using org.apache.geode.logging.internal.SimpleLoggingProvider for service org.apache.geode.logging.internal.spi.LoggingProvider
> INFO [org.apache.geode.internal.InternalDataSerializer]-[ReconnectThread] [] initializing InternalDataSerializer with 0 services
> INFO [org.apache.geode.distributed.internal.InternalDistributedSystem]-[ReconnectThread] [] performing a quorum check to see if location services can be started early
> INFO [org.apache.geode.distributed.internal.InternalDistributedSystem]-[ReconnectThread] [] Quorum check passed - allowing location services to start early
> WARN [org.apache.geode.distributed.internal.InternalDistributedSystem]-[ReconnectThread] [] Exception occurred while trying to connect the system during reconnect
> java.lang.IllegalStateException: A locator can not be created because one already exists in this JVM.
> at org.apache.geode.distributed.internal.InternalLocator.createLocator(InternalLocator.java:298) ~[geode-core-1.14.0.jar:?]
> at org.apache.geode.distributed.internal.InternalLocator.createLocator(InternalLocator.java:273) ~[geode-core-1.14.0.jar:?]
> at org.apache.geode.distributed.internal.InternalDistributedSystem.startInitLocator(InternalDistributedSystem.java:916) ~[geode-core-1.14.0.jar:?]
> at org.apache.geode.distributed.internal.InternalDistributedSystem.initialize(InternalDistributedSystem.java:768) ~[geode-core-1.14.0.jar:?]
> at org.apache.geode.distributed.internal.InternalDistributedSystem.access$200(InternalDistributedSystem.java:135) ~[geode-core-1.14.0.jar:?]
> at org.apache.geode.distributed.internal.InternalDistributedSystem$Builder.build(InternalDistributedSystem.java:3034) ~[geode-core-1.14.0.jar:?]
> at org.apache.geode.distributed.internal.InternalDistributedSystem.connectInternal(InternalDistributedSystem.java:290) ~[geode-core-1.14.0.jar:?]
> at org.apache.geode.distributed.internal.InternalDistributedSystem.reconnect(InternalDistributedSystem.java:2605) ~[geode-core-1.14.0.jar:?]
> at org.apache.geode.distributed.internal.InternalDistributedSystem.tryReconnect(InternalDistributedSystem.java:2424) ~[geode-core-1.14.0.jar:?]
> at org.apache.geode.distributed.internal.InternalDistributedSystem.disconnect(InternalDistributedSystem.java:1275) ~[geode-core-1.14.0.jar:?]
> at org.apache.geode.distributed.internal.ClusterDistributionManager$DMListener.membershipFailure(ClusterDistributionManager.java:2326) ~[geode-core-1.14.0.jar:?]
> at org.apache.geode.distributed.internal.membership.gms.GMSMembership.uncleanShutdown(GMSMembership.java:1187) ~[geode-membership-1.14.0.jar:?]
> at org.apache.geode.distributed.internal.membership.gms.GMSMembership$ManagerImpl.lambda$forceDisconnect$0(GMSMembership.java:1811) ~[geode-membership-1.14.0.jar:?]
> at java.lang.Thread.run(Thread.java:829) [?:?]
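The reported failure is that the reconnect path tries to create a second embedded locator in a JVM that still holds its first one, so `InternalLocator.createLocator` throws. A minimal, self-contained sketch of the create-or-reuse invariant such a path needs; the class and method names here are hypothetical stand-ins, not Geode internals:

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical stand-in for an embedded locator; not the Geode API.
final class EmbeddedLocator {
    private final int port;
    EmbeddedLocator(int port) { this.port = port; }
    int port() { return port; }
}

final class LocatorHolder {
    // At most one locator may exist per JVM, mirroring the invariant
    // that InternalLocator.createLocator enforces with an exception.
    private static final AtomicReference<EmbeddedLocator> INSTANCE = new AtomicReference<>();

    // Create-or-reuse: a reconnect path calling this twice gets the
    // existing locator back instead of an IllegalStateException.
    static EmbeddedLocator getOrCreate(int port) {
        EmbeddedLocator existing = INSTANCE.get();
        if (existing != null) {
            return existing;
        }
        EmbeddedLocator created = new EmbeddedLocator(port);
        return INSTANCE.compareAndSet(null, created) ? created : INSTANCE.get();
    }

    // Must run during shutdown/forced disconnect so a later
    // reconnect attempt can start a fresh locator.
    static void clear() {
        INSTANCE.set(null);
    }
}
```

The sketch only illustrates the invariant; an actual fix would have to ensure the old locator is torn down (the `clear()` step) before the reconnect thread re-runs locator startup.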
[jira] [Created] (GEODE-10058) Remove Defunct NetCore from geode-native repo and CI
Michael Martell created GEODE-10058: --- Summary: Remove Defunct NetCore from geode-native repo and CI Key: GEODE-10058 URL: https://issues.apache.org/jira/browse/GEODE-10058 Project: Geode Issue Type: Task Reporter: Michael Martell This project is being replaced by a pure C# client for .NET Core. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Assigned] (GEODE-9948) Implement LINSERT
[ https://issues.apache.org/jira/browse/GEODE-9948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hale Bales reassigned GEODE-9948:
Assignee: Hale Bales

> Implement LINSERT
> Key: GEODE-9948
> URL: https://issues.apache.org/jira/browse/GEODE-9948
> Project: Geode
> Issue Type: New Feature
> Components: redis
> Reporter: Wayne
> Assignee: Hale Bales
> Priority: Major
>
> Implement the [LINSERT|https://redis.io/commands/linsert] command.
>
> +Acceptance Criteria+
> The command has been implemented along with appropriate unit and system tests.
>
> The command has been tested using the redis-cli tool and verified against native redis.
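For reference, Redis LINSERT inserts an element immediately before or after the first occurrence of a pivot value, returning the new list length, -1 if the pivot is absent, or 0 if the key does not exist. A minimal sketch of those semantics over an in-memory Java list; illustrative only, not the geode-for-redis implementation:

```java
import java.util.List;

final class Linsert {
    // LINSERT key BEFORE|AFTER pivot element, modeled on a plain List.
    // Returns the new length, -1 if the pivot is not found, or 0 for a
    // missing/empty key, matching the documented Redis return values.
    static long linsert(List<String> list, boolean before, String pivot, String element) {
        if (list == null || list.isEmpty()) {
            return 0; // Redis returns 0 when the key does not exist
        }
        int idx = list.indexOf(pivot); // first occurrence of the pivot
        if (idx < 0) {
            return -1; // pivot not present
        }
        list.add(before ? idx : idx + 1, element);
        return list.size();
    }
}
```

Verifying against native redis (the acceptance criterion above) would exercise exactly these three return paths plus the insertion position.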
[jira] [Commented] (GEODE-9502) Eliminate templates used to work around now-obsolete MSVC compiler warning
[ https://issues.apache.org/jira/browse/GEODE-9502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493510#comment-17493510 ]

Matthew Reddington commented on GEODE-9502:
This would cause an ABI change.

> Eliminate templates used to work around now-obsolete MSVC compiler warning
> Key: GEODE-9502
> URL: https://issues.apache.org/jira/browse/GEODE-9502
> Project: Geode
> Issue Type: Improvement
> Components: native client
> Reporter: Blake Bender
> Priority: Major
> Labels: pull-request-available
>
> CacheableBuiltins.hpp contains the following comment, followed by a bunch of very strange template definitions:
> // The following are defined as classes to avoid the issues with MSVC++
> // warning/erroring on C4503
> According to Microsoft (https://docs.microsoft.com/en-us/cpp/error-messages/compiler-warnings/compiler-warning-level-1-c4503?view=msvc-160), this warning was obsolete as of VS2017. We no longer support any pre-VS2017 compilers, so it should be safe to remove all this nonsense and replace it with the template(s) originally intended.
[jira] [Commented] (GEODE-9639) Make native client compatible with C++20
[ https://issues.apache.org/jira/browse/GEODE-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493509#comment-17493509 ]

Matthew Reddington commented on GEODE-9639:
Merged into support/10.3.

> Make native client compatible with C++20
> Key: GEODE-9639
> URL: https://issues.apache.org/jira/browse/GEODE-9639
> Project: Geode
> Issue Type: Improvement
> Components: native client
> Reporter: Matthew Reddington
> Priority: Major
> Labels: pull-request-available
>
> There are standard library components that were removed in C++20, making our library incompatible. Luckily, our use of deleted components is minimal and replaceable without breaking API backward compatibility, but it will disrupt ABI compatibility.
>
> The outcome needs to be tested against MSVC2017/v141 and MSVC2019/v142, including examples.
[jira] [Commented] (GEODE-9268) Fix coredump whenever getFieldNames is called after a cluster restart
[ https://issues.apache.org/jira/browse/GEODE-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493492#comment-17493492 ]

ASF subversion and git services commented on GEODE-9268:

Commit 7873837eeec67c9ae85d32f7165a7753d0c0cd27 in geode-native's branch refs/heads/develop from Mario Salazar de Torres
[ https://gitbox.apache.org/repos/asf?p=geode-native.git;h=7873837 ]
GEODE-9268: Fix PdxInstance handling after cluster restart (#806)

* GEODE-9268: Fix PdxInstance handling after restart
- In scenarios where a PdxInstance is obtained and later used, a coredump might happen if the PdxTypeRegistry is cleaned up after the PdxInstance is obtained. This happens in scenarios where redundancy is completely lost.
- This change refactors PdxInstance handling so that in all cases the PdxType used is the one owned by the PdxInstance. Whenever the PdxInstance is to be written, a check is executed to ensure the cluster is aware of the PdxType and, if not, to register it.
- Removed PdxInstance serialization retries, as they are no longer needed with the new approach.
- Removed the UnknownPdxTypeException exception. Instead, whenever a PdxType is requested and not present, an IllegalStateException is thrown, as happens in the Java client.
- A PdxInstance is no longer serialized when it is created. Instead, its PDX byte stream is generated on demand. Note that the PdxInstance will still be serialized before being put into a server, as it was before this change.
- Fixed TcrMessage deserialization whenever a PdxType is requested by its ID and no PdxType was found.
- Fixed incPdxInstanceCreations so it is incremented strictly whenever a PdxInstance is created, and not whenever a PdxInstance is deserialized.
- Fixed the IT PdxTypeRegistry cleanupOnClusterRestart logic and renamed it to cleanupOnClusterRestartAndPut. This test was supposed to verify that if a PdxInstance is created, and after that the cluster is restarted, there is no coredump while writing it to a region. Instead it was creating a PdxInstance before and after the cluster restart; it has been fixed to work as initially intended.
- Created a new IT, PdxTypeRegistry cleanupOnClusterRestartAndFetchFields, to verify that the issue described in the first bullet does not cause a coredump.
- Removed the old IT testThinClientPdxInstance TS, as there is an equivalent new IT TS named PdxInstanceTest.
- Fixed some ITs to work in accordance with the new code.
- Fixed PdxInstanceImplTest.updatePdxStream to work in accordance with the code changes.

* GEODE-9268: Revision 1
- Refactored member attributes of the NestedPdxObject classes to follow style guidelines.
- Refactored member attributes of the PdxInstanceImpl class to follow style guidelines.
- Moved default-bytes variables from statics to constants inside an anonymous namespace within the PdxInstanceImpl cpp file.
- Renamed 'pft' variables to 'field' inside PdxInstanceImpl to make the code more readable.
- Changed the return of PdxLocalWriter::getPdxStream to use the explicit constructor of std::vector.
- Fixed the PdxInstanceTest.testNestedPdxInstance assert-description mismatch.
- Used binary_semaphore inside PdxTypeRegistryTest instead of an in-place CacheListener in order to listen for cluster start/stop events.
- Reverted the changes to the OQL used inside PdxTypeRegistryTest.cleanupOnClusterRestartAndPut.
- Added test descriptions for both tests inside PdxTypeRegistryTest.

* GEODE-9268: Revision 2
- Addressed several other assert-description mismatches.

> Fix coredump whenever getFieldNames is called after a cluster restart
> Key: GEODE-9268
> URL: https://issues.apache.org/jira/browse/GEODE-9268
> Project: Geode
> Issue Type: Bug
> Components: native client
> Reporter: Mario Salazar de Torres
> Assignee: Mario Salazar de Torres
> Priority: Major
> Labels: pull-request-available
>
> *WHEN* A PdxInstance is fetched from a region
> *AND* The whole cluster is restarted, triggering PdxTypeRegistry cleanup.
> *AND* getFieldNames is called on the PdxInstance created just before
> *THEN* a coredump happens.
>
> *Additional information:*
> Callstack:
> {noformat}
> [ERROR 2021/05/05 12:57:12.781834 CEST main (139683590957120)] Segmentation fault happened
> 0# handler(int) at nc-pdx/main.cpp:225
> 1# 0x7F0A9F5F13C0 in /lib/x86_64-linux-gnu/libpthread.so.0
> 2# apache::geode::client::PdxType::getPdxFieldTypes() const at cppcache/src/PdxType.hpp:181
> 3# apache::geode::client::PdxInstanceImpl::getFieldNames() at cppcache/src/PdxInstanceImpl.cpp:1383
> 4# main at nc-pdx/main.cpp:374
> 5# __libc_start_main in /lib/x86_64-
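One of the changes in the commit above is that the PdxInstance's PDX byte stream is now generated on demand rather than eagerly at creation time. The actual change is in geode-native's C++ code; as an illustration only, the lazy cached-on-first-use pattern it describes looks roughly like this in Java (names here are hypothetical):

```java
import java.nio.charset.StandardCharsets;

// Illustrative only: caches a serialized form and produces it on demand,
// mirroring the "generate the PDX byte stream on-demand" change.
final class LazySerialized {
    private final String value;   // the logical object state
    private byte[] cachedStream;  // null until first requested

    LazySerialized(String value) {
        this.value = value;
    }

    // Serialization happens at most once, on first use (e.g. before a
    // region put), not at construction time.
    byte[] toStream() {
        if (cachedStream == null) {
            cachedStream = value.getBytes(StandardCharsets.UTF_8);
        }
        return cachedStream;
    }

    boolean isSerialized() {
        return cachedStream != null;
    }
}
```

The benefit matching the commit's rationale: an instance that is created but never written to a server never pays the serialization cost.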
[jira] [Closed] (GEODE-9268) Fix coredump whenever getFieldNames is called after a cluster restart
[ https://issues.apache.org/jira/browse/GEODE-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Blake Bender closed GEODE-9268. --- > Fix coredump whenever getFieldNames is called after a cluster restart > - > > Key: GEODE-9268 > URL: https://issues.apache.org/jira/browse/GEODE-9268 > Project: Geode > Issue Type: Bug > Components: native client >Reporter: Mario Salazar de Torres >Assignee: Mario Salazar de Torres >Priority: Major > Labels: pull-request-available > > *WHEN* A PdxInstance is fetched from a region > *AND* The whole cluster is restarted, triggering PdxTypeRegistry cleanup. > *AND* getFieldNames is called on the PdxInstance created just before > *THEN* a coredump happens. > — > *Additional information:* > Callstack: > {noformat} > [ERROR 2021/05/05 12:57:12.781834 CEST main (139683590957120)] Segmentation > fault happened > 0# handler(int) at nc-pdx/main.cpp:225 > 1# 0x7F0A9F5F13C0 in /lib/x86_64-linux-gnu/libpthread.so.0 > 2# apache::geode::client::PdxType::getPdxFieldTypes() const at > cppcache/src/PdxType.hpp:181 > 3# apache::geode::client::PdxInstanceImpl::getFieldNames() at > cppcache/src/PdxInstanceImpl.cpp:1383 > 4# main at nc-pdx/main.cpp:374 > 5# __libc_start_main in /lib/x86_64-linux-gnu/libc.so.6 > 6# _start in build/pdx{noformat} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (GEODE-9268) Fix coredump whenever getFieldNames is called after a cluster restart
[ https://issues.apache.org/jira/browse/GEODE-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Blake Bender resolved GEODE-9268. Resolution: Fixed
[jira] [Commented] (GEODE-9268) Fix coredump whenever getFieldNames is called after a cluster restart
[ https://issues.apache.org/jira/browse/GEODE-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493485#comment-17493485 ] ASF GitHub Bot commented on GEODE-9268: pdxcodemonkey merged pull request #806: URL: https://github.com/apache/geode-native/pull/806
[jira] [Commented] (GEODE-9889) LettucePubSubIntegrationTest > subscribePsubscribeSameClient FAILED
[ https://issues.apache.org/jira/browse/GEODE-9889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493471#comment-17493471 ] Geode Integration commented on GEODE-9889: -- Seen on support/1.14 in [integration-test-openjdk8 #29|https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-support-1-14-main/jobs/integration-test-openjdk8/builds/29] ... see [test results|http://files.apachegeode-ci.info/builds/apache-support-1-14-main/1.14.4-build.0922/test-results/integrationTest/1645005047/] or download [artifacts|http://files.apachegeode-ci.info/builds/apache-support-1-14-main/1.14.4-build.0922/test-artifacts/1645005047/integrationtestfiles-openjdk8-1.14.4-build.0922.tgz]. > LettucePubSubIntegrationTest > subscribePsubscribeSameClient FAILED > --- > > Key: GEODE-9889 > URL: https://issues.apache.org/jira/browse/GEODE-9889 > Project: Geode > Issue Type: Bug > Components: redis >Affects Versions: 1.14.0 >Reporter: Ray Ingles >Assignee: Hale Bales >Priority: Major > > Seen in a CI build: > > {{> Task :geode-apis-compatible-with-redis:integrationTest}} > {{org.apache.geode.redis.internal.executor.pubsub.LettucePubSubIntegrationTest > > subscribePsubscribeSameClient FAILED}} > {{org.junit.ComparisonFailure: expected:<[2]L> but was:<[0]L>}} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (GEODE-10054) CI Failure: EnsurePrimaryStaysPutDUnitTest.localFunctionRetriesIfNotOnPrimary fails because primary moved
[ https://issues.apache.org/jira/browse/GEODE-10054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated GEODE-10054: --- Labels: pull-request-available (was: ) > CI Failure: EnsurePrimaryStaysPutDUnitTest.localFunctionRetriesIfNotOnPrimary > fails because primary moved > - > > Key: GEODE-10054 > URL: https://issues.apache.org/jira/browse/GEODE-10054 > Project: Geode > Issue Type: Bug > Components: redis >Affects Versions: 1.15.0, 1.16.0 >Reporter: Hale Bales >Assignee: Hale Bales >Priority: Major > Labels: pull-request-available > > This test fails with the following stack trace because the primary moved when > it wasn't supposed to. > {code:java} > EnsurePrimaryStaysPutDUnitTest > localFunctionRetriesIfNotOnPrimary FAILED > org.opentest4j.AssertionFailedError: [CheckPrimaryBucketFunction > determined that the primary has moved] > Expecting value to be true but was false > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native > Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at > org.apache.geode.redis.EnsurePrimaryStaysPutDUnitTest.primaryRemainsWhileFunctionExecutes(EnsurePrimaryStaysPutDUnitTest.java:170) > at > org.apache.geode.redis.EnsurePrimaryStaysPutDUnitTest.localFunctionRetriesIfNotOnPrimary(EnsurePrimaryStaysPutDUnitTest.java:93) > {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (GEODE-10057) Redis documentation has passive and active expiration descriptions reversed
[ https://issues.apache.org/jira/browse/GEODE-10057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493444#comment-17493444 ] ASF subversion and git services commented on GEODE-10057: - Commit 2ba72a909fde0a9cb2c9b3dec2f959791b8aca21 in geode-site's branch refs/heads/asf-site from Dave Barnes [ https://gitbox.apache.org/repos/asf?p=geode-site.git;h=2ba72a9 ] GEODE-10057: Correct geode-for-redis docs (updated v1.14 user guide) > Redis documentation has passive and active expiration descriptions reversed > --- > > Key: GEODE-10057 > URL: https://issues.apache.org/jira/browse/GEODE-10057 > Project: Geode > Issue Type: Bug > Components: docs, redis >Affects Versions: 1.15.0, 1.16.0 >Reporter: Donal Evans >Assignee: Donal Evans >Priority: Major > Labels: blocks-1.15.0, pull-request-available > Fix For: 1.14.3, 1.15.0, 1.16.0 > > > The geode-for-redis documentation describes the difference in behaviour for > active expiration and passive expiration, but the way these terms are used is > flipped from how they're typically used in documentation about open source > Redis: https://redis.io/commands/expire#how-redis-expires-keys. The docs > should be updated to match the usage in open source Redis documentation. -- This message was sent by Atlassian Jira (v8.20.1#820001)
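To keep the corrected terminology straight: passive expiration removes a stale key lazily, only when a client touches it; active expiration removes stale keys via a background cycle even if nothing ever reads them again. A self-contained sketch of the two strategies, with the current time passed in explicitly for determinism; illustrative only, not the geode-for-redis code:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Illustrative key store with per-key deadlines and both expiration styles.
final class ExpiringStore {
    private final Map<String, Long> deadlines = new HashMap<>();
    private final Map<String, String> data = new HashMap<>();

    void put(String key, String value, long expiresAtMillis) {
        data.put(key, value);
        deadlines.put(key, expiresAtMillis);
    }

    // Passive expiration: the deadline is checked, and the key removed
    // if stale, only when something accesses the key.
    String get(String key, long nowMillis) {
        Long deadline = deadlines.get(key);
        if (deadline != null && nowMillis >= deadline) {
            data.remove(key);
            deadlines.remove(key);
            return null;
        }
        return data.get(key);
    }

    // Active expiration: a background sweep removes stale keys whether
    // or not any client ever reads them again. Returns the count removed.
    int sweep(long nowMillis) {
        int removed = 0;
        Iterator<Map.Entry<String, Long>> it = deadlines.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Long> e = it.next();
            if (nowMillis >= e.getValue()) {
                data.remove(e.getKey());
                it.remove();
                removed++;
            }
        }
        return removed;
    }

    int size() { return data.size(); }
}
```

In this model, a key that is never read again survives until the active sweep runs, which is exactly why real Redis combines both strategies.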
[jira] [Commented] (GEODE-10057) Redis documentation has passive and active expiration descriptions reversed
[ https://issues.apache.org/jira/browse/GEODE-10057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493443#comment-17493443 ] ASF subversion and git services commented on GEODE-10057: Commit 63c98d0bf76b7d8199d0fc087c2ba608227b4300 in geode's branch refs/heads/support/1.14 from Dave Barnes [ https://gitbox.apache.org/repos/asf?p=geode.git;h=63c98d0 ] GEODE-10057: Correct geode-for-redis docs
[jira] [Commented] (GEODE-10057) Redis documentation has passive and active expiration descriptions reversed
[ https://issues.apache.org/jira/browse/GEODE-10057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493441#comment-17493441 ] Dave Barnes commented on GEODE-10057: Back-ported to 1.15 and 1.14.
[jira] [Updated] (GEODE-10057) Redis documentation has passive and active expiration descriptions reversed
[ https://issues.apache.org/jira/browse/GEODE-10057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dave Barnes updated GEODE-10057: Fix Version/s: 1.14.3
[jira] [Updated] (GEODE-10054) CI Failure: EnsurePrimaryStaysPutDUnitTest.localFunctionRetriesIfNotOnPrimary fails because primary moved
[ https://issues.apache.org/jira/browse/GEODE-10054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hale Bales updated GEODE-10054: Labels: (was: needsTriage)
[jira] [Updated] (GEODE-10054) CI Failure: EnsurePrimaryStaysPutDUnitTest.localFunctionRetriesIfNotOnPrimary fails because primary moved
[ https://issues.apache.org/jira/browse/GEODE-10054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hale Bales updated GEODE-10054: Affects Version/s: 1.15.0
[jira] [Resolved] (GEODE-10057) Redis documentation has passive and active expiration descriptions reversed
[ https://issues.apache.org/jira/browse/GEODE-10057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Donal Evans resolved GEODE-10057. Fix Version/s: 1.15.0, 1.16.0 Resolution: Fixed
[jira] [Commented] (GEODE-10057) Redis documentation has passive and active expiration descriptions reversed
[ https://issues.apache.org/jira/browse/GEODE-10057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493434#comment-17493434 ] ASF subversion and git services commented on GEODE-10057: - Commit 8b9c03b0b867009be6d745252c57ae1a82b1eb0c in geode's branch refs/heads/support/1.15 from Donal Evans [ https://gitbox.apache.org/repos/asf?p=geode.git;h=8b9c03b ] GEODE-10057: Correct geode-for-redis docs (#7370) - Swap usage of active and passive in descriptions of expiration to match their usage in open source Redis documentation Authored-by: Donal Evans > Redis documentation has passive and active expiration descriptions reversed > --- > > Key: GEODE-10057 > URL: https://issues.apache.org/jira/browse/GEODE-10057 > Project: Geode > Issue Type: Bug > Components: docs, redis >Affects Versions: 1.15.0, 1.16.0 >Reporter: Donal Evans >Assignee: Donal Evans >Priority: Major > Labels: blocks-1.15.0, pull-request-available > > The geode-for-redis documentation describes the difference in behaviour for > active expiration and passive expiration, but the way these terms are used is > flipped from how they're typically used in documentation about open source > Redis: https://redis.io/commands/expire#how-redis-expires-keys. The docs > should be updated to match the usage in open source Redis documentation. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (GEODE-9549) Enable .net core tests in CI
[ https://issues.apache.org/jira/browse/GEODE-9549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ernest Burghardt resolved GEODE-9549. - Resolution: Fixed > Enable .net core tests in CI > > > Key: GEODE-9549 > URL: https://issues.apache.org/jira/browse/GEODE-9549 > Project: Geode > Issue Type: Improvement > Components: native client >Reporter: Blake Bender >Priority: Major > Labels: pull-request-available > > The .net core build and tests are integrated into the CI, but test running is > currently disabled due to a few issues. These need to be cleaned up, and > tests enabled in CI. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (GEODE-10034) Organize Geode For Redis Stats By Data Structure
[ https://issues.apache.org/jira/browse/GEODE-10034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493423#comment-17493423 ] ASF subversion and git services commented on GEODE-10034: - Commit 9ef20d9cbd9013d4e38bc55c6d807623b330a2bf in geode's branch refs/heads/develop from Jens Deppe [ https://gitbox.apache.org/repos/asf?p=geode.git;h=9ef20d9 ] GEODE-10034: Organize Geode For Redis Stats By Category (#7363) - Geode for Redis statistics are broken up by category. For example stats will appear as `GeodeForRedisStats:STRING` or `GeodeForRedisStats:HASH`. Each type will then only contain stats relevant to the commands associated with that category. > Organize Geode For Redis Stats By Data Structure > > > Key: GEODE-10034 > URL: https://issues.apache.org/jira/browse/GEODE-10034 > Project: Geode > Issue Type: Improvement > Components: redis >Reporter: Wayne >Assignee: Jens Deppe >Priority: Major > Labels: pull-request-available > > The Geode for Redis Stats should be organized by Data Structure. For the > stats not associated with a data structure, the stats should continue to be > exposed under > "GeodeForRedisStats". > > +Acceptance Criteria+ > All stats, associated with a command specific to a data structure, should be > exposed under that data structure (e.g. Strings, Sets, SortedSets, Hashes, > Lists). > > All tests should pass. > -- This message was sent by Atlassian Jira (v8.20.1#820001)
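The commit above groups Geode for Redis statistics into per-category instances such as `GeodeForRedisStats:STRING` and `GeodeForRedisStats:HASH`, with each instance holding only the stats for that category's commands. As a rough sketch of that bookkeeping idea (the class and method names below are hypothetical illustrations, not Geode's actual statistics API):

```cpp
#include <map>
#include <string>

// Hypothetical per-category stat registry: each category (STRING, HASH, ...)
// gets its own bucket of named counters, mirroring the split into
// "GeodeForRedisStats:STRING" vs "GeodeForRedisStats:HASH" instances.
class CategorizedStats {
 public:
  void increment(const std::string& category, const std::string& stat,
                 long amount = 1) {
    stats_[category][stat] += amount;
  }

  // Returns 0 for categories or stats that were never recorded.
  long get(const std::string& category, const std::string& stat) const {
    auto cat = stats_.find(category);
    if (cat == stats_.end()) return 0;
    auto entry = cat->second.find(stat);
    return entry == cat->second.end() ? 0 : entry->second;
  }

 private:
  std::map<std::string, std::map<std::string, long>> stats_;
};
```

Stats with no associated data structure would simply live under their own catch-all category, matching the ticket's note that they remain under "GeodeForRedisStats".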
[jira] [Resolved] (GEODE-9848) Duplicate and Unnecessary REGISTER_INTEREST Message Sent to Server
[ https://issues.apache.org/jira/browse/GEODE-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ernest Burghardt resolved GEODE-9848. - Resolution: Fixed > Duplicate and Unnecessary REGISTER_INTEREST Message Sent to Server > -- > > Key: GEODE-9848 > URL: https://issues.apache.org/jira/browse/GEODE-9848 > Project: Geode > Issue Type: Bug > Components: native client >Reporter: Michael Martell >Priority: Major > Labels: pull-request-available > > In the course of debugging a RegisterAllKeys bug (GEMNC-508), it was > discovered that a second REGISTER_INTEREST message is being sent to the same > server. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (GEODE-9268) Fix coredump whenever getFieldNames is called after a cluster restart
[ https://issues.apache.org/jira/browse/GEODE-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493420#comment-17493420 ] ASF GitHub Bot commented on GEODE-9268: --- gaussianrecurrence commented on a change in pull request #806: URL: https://github.com/apache/geode-native/pull/806#discussion_r808348184 ## File path: cppcache/integration/test/PdxTypeRegistryTest.cpp ## @@ -140,19 +141,60 @@ TEST(PdxTypeRegistryTest, cleanupOnClusterRestart) { listener->waitConnected(); - key = "after-restart"; - region->put(key, createTestPdxInstance(cache, key)); + region->put(key, pdx); // If PdxTypeRegistry was cleaned up, then the PdxType should have been // registered in the new cluster std::shared_ptr result; - auto query = - qs->newQuery("SELECT * FROM /region WHERE entryName = '" + key + "'"); + auto query = qs->newQuery("SELECT * FROM /region WHERE int_value = -1"); EXPECT_NO_THROW(result = query->execute()); EXPECT_TRUE(result); - EXPECT_GT(result->size(), 0); + EXPECT_EQ(result->size(), 1); } +TEST(PdxTypeRegistryTest, cleanupOnClusterRestartAndFetchFields) { + Cluster cluster{LocatorCount{1}, ServerCount{2}}; + cluster.start(); + + auto& gfsh = cluster.getGfsh(); + gfsh.create().region().withName("region").withType("PARTITION").execute(); + + auto listener = std::make_shared(); + + auto cache = createTestCache(); + createTestPool(cluster, cache); + auto qs = cache.getQueryService("pool"); + auto region = createTestRegion(cache, listener); + + std::string key = "before-shutdown"; + region->put(key, createTestPdxInstance(cache, key)); + auto object = region->get(key); + EXPECT_TRUE(object); + + auto pdx = std::dynamic_pointer_cast(object); + EXPECT_TRUE(pdx); + + // Shutdown and wait for some time + gfsh.shutdown().execute(); + listener->waitDisconnected(); + std::this_thread::sleep_for(std::chrono::seconds{15}); + + for (auto& server : cluster.getServers()) { +server.start(); + } + + listener->waitConnected(); + auto fields = 
pdx->getFieldNames(); Review comment: That's already covered within revision 1 :) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: notifications-unsubscr...@geode.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Fix coredump whenever getFieldNames is called after a cluster restart > - > > Key: GEODE-9268 > URL: https://issues.apache.org/jira/browse/GEODE-9268 > Project: Geode > Issue Type: Bug > Components: native client >Reporter: Mario Salazar de Torres >Assignee: Mario Salazar de Torres >Priority: Major > Labels: pull-request-available > > *WHEN* A PdxInstance is fetched from a region > *AND* The whole cluster is restarted, triggering PdxTypeRegistry cleanup. > *AND* getFieldNames is called on the PdxInstance created just before > *THEN* a coredump happens. > — > *Additional information:* > Callstack: > {noformat} > [ERROR 2021/05/05 12:57:12.781834 CEST main (139683590957120)] Segmentation > fault happened > 0# handler(int) at nc-pdx/main.cpp:225 > 1# 0x7F0A9F5F13C0 in /lib/x86_64-linux-gnu/libpthread.so.0 > 2# apache::geode::client::PdxType::getPdxFieldTypes() const at > cppcache/src/PdxType.hpp:181 > 3# apache::geode::client::PdxInstanceImpl::getFieldNames() at > cppcache/src/PdxInstanceImpl.cpp:1383 > 4# main at nc-pdx/main.cpp:374 > 5# __libc_start_main in /lib/x86_64-linux-gnu/libc.so.6 > 6# _start in build/pdx{noformat} -- This message was sent by Atlassian Jira (v8.20.1#820001)
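The callstack in the issue above shows `PdxInstanceImpl::getFieldNames()` dereferencing a `PdxType` that the `PdxTypeRegistry` no longer holds after a full cluster restart triggers registry cleanup. A minimal sketch of the defensive pattern at play (guard the registry lookup instead of dereferencing a missing entry); the types and names here are simplified stand-ins, not the actual geode-native implementation:

```cpp
#include <map>
#include <memory>
#include <string>
#include <vector>

// Simplified stand-in for a PdxType: just the registered field names.
struct PdxType {
  std::vector<std::string> fieldNames;
};

// Simplified stand-in for the PdxTypeRegistry; entries vanish when the
// registry is cleaned up on a full cluster restart.
class TypeRegistry {
 public:
  void add(int typeId, std::vector<std::string> fields) {
    auto type = std::make_shared<PdxType>();
    type->fieldNames = std::move(fields);
    types_[typeId] = type;
  }
  void cleanup() { types_.clear(); }  // what a cluster restart triggers
  std::shared_ptr<PdxType> find(int typeId) const {
    auto it = types_.find(typeId);
    return it == types_.end() ? nullptr : it->second;
  }

 private:
  std::map<int, std::shared_ptr<PdxType>> types_;
};

// Defensive lookup: rather than dereferencing a missing type (the reported
// segfault), return an empty field list when the registry entry is gone.
std::vector<std::string> getFieldNames(const TypeRegistry& registry,
                                       int typeId) {
  auto type = registry.find(typeId);
  if (!type) return {};  // registry was cleaned up; avoid the coredump
  return type->fieldNames;
}
```

The new `cleanupOnClusterRestartAndFetchFields` test discussed in the review exercises exactly this path: fetch a PdxInstance, restart the cluster, then call `getFieldNames()` and expect a result instead of a crash.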
[jira] [Commented] (GEODE-10057) Redis documentation has passive and active expiration descriptions reversed
[ https://issues.apache.org/jira/browse/GEODE-10057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493419#comment-17493419 ] ASF subversion and git services commented on GEODE-10057: - Commit 5f79b21bad9651b3f8c84ee99ed9651756ddf114 in geode's branch refs/heads/develop from Donal Evans [ https://gitbox.apache.org/repos/asf?p=geode.git;h=5f79b21 ] GEODE-10057: Correct geode-for-redis docs (#7370) - Swap usage of active and passive in descriptions of expiration to match their usage in open source Redis documentation Authored-by: Donal Evans > Redis documentation has passive and active expiration descriptions reversed > --- > > Key: GEODE-10057 > URL: https://issues.apache.org/jira/browse/GEODE-10057 > Project: Geode > Issue Type: Bug > Components: docs, redis >Affects Versions: 1.15.0, 1.16.0 >Reporter: Donal Evans >Assignee: Donal Evans >Priority: Major > Labels: blocks-1.15.0, pull-request-available > > The geode-for-redis documentation describes the difference in behaviour for > active expiration and passive expiration, but the way these terms are used is > flipped from how they're typically used in documentation about open source > Redis: https://redis.io/commands/expire#how-redis-expires-keys. The docs > should be updated to match the usage in open source Redis documentation. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (GEODE-10057) Redis documentation has passive and active expiration descriptions reversed
[ https://issues.apache.org/jira/browse/GEODE-10057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated GEODE-10057: --- Labels: blocks-1.15.0 pull-request-available (was: blocks-1.15.0) > Redis documentation has passive and active expiration descriptions reversed > --- > > Key: GEODE-10057 > URL: https://issues.apache.org/jira/browse/GEODE-10057 > Project: Geode > Issue Type: Bug > Components: docs, redis >Affects Versions: 1.15.0, 1.16.0 >Reporter: Donal Evans >Assignee: Donal Evans >Priority: Major > Labels: blocks-1.15.0, pull-request-available > > The geode-for-redis documentation describes the difference in behaviour for > active expiration and passive expiration, but the way these terms are used is > flipped from how they're typically used in documentation about open source > Redis: https://redis.io/commands/expire#how-redis-expires-keys. The docs > should be updated to match the usage in open source Redis documentation. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Assigned] (GEODE-10057) Redis documentation has passive and active expiration descriptions reversed
[ https://issues.apache.org/jira/browse/GEODE-10057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Donal Evans reassigned GEODE-10057: --- Assignee: Donal Evans > Redis documentation has passive and active expiration descriptions reversed > --- > > Key: GEODE-10057 > URL: https://issues.apache.org/jira/browse/GEODE-10057 > Project: Geode > Issue Type: Bug > Components: docs, redis >Affects Versions: 1.15.0, 1.16.0 >Reporter: Donal Evans >Assignee: Donal Evans >Priority: Major > Labels: blocks-1.15.0 > > The geode-for-redis documentation describes the difference in behaviour for > active expiration and passive expiration, but the way these terms are used is > flipped from how they're typically used in documentation about open source > Redis: https://redis.io/commands/expire#how-redis-expires-keys. The docs > should be updated to match the usage in open source Redis documentation. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (GEODE-10057) Redis documentation has passive and active expiration descriptions reversed
[ https://issues.apache.org/jira/browse/GEODE-10057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Donal Evans updated GEODE-10057: Labels: blocks-1.15.0 (was: needsTriage) > Redis documentation has passive and active expiration descriptions reversed > --- > > Key: GEODE-10057 > URL: https://issues.apache.org/jira/browse/GEODE-10057 > Project: Geode > Issue Type: Bug > Components: docs, redis >Affects Versions: 1.15.0, 1.16.0 >Reporter: Donal Evans >Priority: Major > Labels: blocks-1.15.0 > > The geode-for-redis documentation describes the difference in behaviour for > active expiration and passive expiration, but the way these terms are used is > flipped from how they're typically used in documentation about open source > Redis: https://redis.io/commands/expire#how-redis-expires-keys. The docs > should be updated to match the usage in open source Redis documentation. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (GEODE-10057) Redis documentation has passive and active expiration descriptions reversed
[ https://issues.apache.org/jira/browse/GEODE-10057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Murmann updated GEODE-10057: -- Labels: needsTriage (was: ) > Redis documentation has passive and active expiration descriptions reversed > --- > > Key: GEODE-10057 > URL: https://issues.apache.org/jira/browse/GEODE-10057 > Project: Geode > Issue Type: Bug > Components: docs, redis >Affects Versions: 1.15.0, 1.16.0 >Reporter: Donal Evans >Priority: Major > Labels: needsTriage > > The geode-for-redis documentation describes the difference in behaviour for > active expiration and passive expiration, but the way these terms are used is > flipped from how they're typically used in documentation about open source > Redis: https://redis.io/commands/expire#how-redis-expires-keys. The docs > should be updated to match the usage in open source Redis documentation. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (GEODE-10057) Redis documentation has passive and active expiration descriptions reversed
Donal Evans created GEODE-10057: --- Summary: Redis documentation has passive and active expiration descriptions reversed Key: GEODE-10057 URL: https://issues.apache.org/jira/browse/GEODE-10057 Project: Geode Issue Type: Bug Components: docs, redis Affects Versions: 1.15.0, 1.16.0 Reporter: Donal Evans The geode-for-redis documentation describes the difference in behaviour for active expiration and passive expiration, but the way these terms are used is flipped from how they're typically used in documentation about open source Redis: https://redis.io/commands/expire#how-redis-expires-keys. The docs should be updated to match the usage in open source Redis documentation. -- This message was sent by Atlassian Jira (v8.20.1#820001)
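For reference, the Redis documentation linked in the ticket defines passive expiration as removal that happens lazily when an expired key is next accessed, and active expiration as a background cycle that scans for and evicts expired keys on its own. A minimal sketch of the two behaviours (a hypothetical key store using integer ticks for time; this is illustrative only, not Geode's or Redis's implementation):

```cpp
#include <cstddef>
#include <map>
#include <string>

// Minimal key store illustrating the two expiration styles the ticket
// distinguishes. Time is an integer "tick" so the example is deterministic.
class ExpiringStore {
 public:
  void set(const std::string& key, const std::string& value, long expireAt) {
    values_[key] = value;
    expiry_[key] = expireAt;
  }

  // Passive expiration: the key is only checked (and removed) when accessed.
  const std::string* get(const std::string& key, long now) {
    auto e = expiry_.find(key);
    if (e != expiry_.end() && e->second <= now) {
      values_.erase(key);
      expiry_.erase(e);
      return nullptr;  // expired key behaves as if already gone
    }
    auto v = values_.find(key);
    return v == values_.end() ? nullptr : &v->second;
  }

  // Active expiration: a background cycle evicts expired keys even if no
  // client ever touches them again.
  void activeExpireCycle(long now) {
    for (auto it = expiry_.begin(); it != expiry_.end();) {
      if (it->second <= now) {
        values_.erase(it->first);
        it = expiry_.erase(it);
      } else {
        ++it;
      }
    }
  }

  std::size_t size() const { return values_.size(); }

 private:
  std::map<std::string, std::string> values_;
  std::map<std::string, long> expiry_;
};
```

The documentation fix in this ticket is purely about which of these two behaviours each term labels, so the descriptions match the open source Redis usage.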
[jira] [Commented] (GEODE-10040) Fix intermittent clicache test failures
[ https://issues.apache.org/jira/browse/GEODE-10040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493403#comment-17493403 ] ASF GitHub Bot commented on GEODE-10040: mmartell merged pull request #924: URL: https://github.com/apache/geode-native/pull/924 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: notifications-unsubscr...@geode.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Fix intermittent clicache test failures > --- > > Key: GEODE-10040 > URL: https://issues.apache.org/jira/browse/GEODE-10040 > Project: Geode > Issue Type: Test > Components: native client >Reporter: Michael Martell >Priority: Major > Labels: pull-request-available > > Occasionally we see timeouts in the CI when running the legacy clicache > tests. These appear to mostly be timeouts waiting for gfsh commands to > complete. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (GEODE-10040) Fix intermittent clicache test failures
[ https://issues.apache.org/jira/browse/GEODE-10040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493402#comment-17493402 ] ASF subversion and git services commented on GEODE-10040: - Commit ac9df11eb0b2c758a3f21a8b5f96be2788cfbd4e in geode-native's branch refs/heads/develop from Michael Martell [ https://gitbox.apache.org/repos/asf?p=geode-native.git;h=ac9df11 ] GEODE-10040: Increase wait timeout for gfsh (#924) > Fix intermittent clicache test failures > --- > > Key: GEODE-10040 > URL: https://issues.apache.org/jira/browse/GEODE-10040 > Project: Geode > Issue Type: Test > Components: native client >Reporter: Michael Martell >Priority: Major > Labels: pull-request-available > > Occasionally we see timeouts in the CI when running the legacy clicache > tests. These appear to mostly be timeouts waiting for gfsh commands to > complete. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (GEODE-9268) Fix coredump whenever getFieldNames is called after a cluster restart
[ https://issues.apache.org/jira/browse/GEODE-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493395#comment-17493395 ] ASF GitHub Bot commented on GEODE-9268: --- gaussianrecurrence commented on a change in pull request #806: URL: https://github.com/apache/geode-native/pull/806#discussion_r808316806 ## File path: cppcache/integration/test/PdxInstanceTest.cpp ## @@ -273,32 +270,51 @@ TEST(PdxInstanceTest, testPdxInstance) { EXPECT_EQ(-960665662, pdxTypeInstance->hashcode()) << "Pdxhashcode hashcode not matched with java pdx hash code."; +} - // TODO split into separate test for nested pdx object test. - ParentPdx pdxParentOriginal(10); - auto pdxParentInstanceFactory = - cache.createPdxInstanceFactory("testobject.ParentPdx"); - clonePdxInstance(pdxParentOriginal, pdxParentInstanceFactory); - auto pdxParentInstance = pdxParentInstanceFactory.create(); - EXPECT_EQ("testobject.ParentPdx", pdxParentInstance->getClassName()) - << "pdxTypeInstance.getClassName should return testobject.ParentPdx."; +TEST(PdxInstanceTest, testNestedPdxInstance) { + Cluster cluster{LocatorCount{1}, ServerCount{1}}; + + cluster.start(); + + cluster.getGfsh() + .create() + .region() + .withName("region") + .withType("REPLICATE") + .execute(); + + auto cache = cluster.createCache(); + auto region = setupRegion(cache); + auto&& typeRegistry = cache.getTypeRegistry(); + auto&& cachePerfStats = std::dynamic_pointer_cast(region) + ->getCacheImpl() + ->getCachePerfStats(); + + typeRegistry.registerPdxType(ChildPdx::createDeserializable); + typeRegistry.registerPdxType(ParentPdx::createDeserializable); + + ParentPdx original{10}; + auto factory = cache.createPdxInstanceFactory(original.getClassName()); + clonePdxInstance(original, factory); + auto pdxInstance = factory.create(); auto keyport = CacheableKey::create("pdxParentOriginal"); - region->put(keyport, pdxParentInstance); - auto objectFromPdxParentInstanceGet = + region->put(keyport, pdxInstance); 
+ auto object = std::dynamic_pointer_cast(region->get(keyport)); + EXPECT_TRUE(object); - EXPECT_EQ(1, cachePerfStats.getPdxInstanceDeserializations()) + EXPECT_EQ(0, cachePerfStats.getPdxInstanceDeserializations()) << "pdxInstanceDeserialization should be equal to 1."; Review comment: Thanks for pointing those out, I've checked any other that might not match and fixed them. Hopefully, I haven't missed any others. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: notifications-unsubscr...@geode.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Fix coredump whenever getFieldNames is called after a cluster restart > - > > Key: GEODE-9268 > URL: https://issues.apache.org/jira/browse/GEODE-9268 > Project: Geode > Issue Type: Bug > Components: native client >Reporter: Mario Salazar de Torres >Assignee: Mario Salazar de Torres >Priority: Major > Labels: pull-request-available > > *WHEN* A PdxInstance is fetched from a region > *AND* The whole cluster is restarted, triggering PdxTypeRegistry cleanup. > *AND* getFieldNames is called on the PdxInstance created just before > *THEN* a coredump happens. > — > *Additional information:* > Callstack: > {noformat} > [ERROR 2021/05/05 12:57:12.781834 CEST main (139683590957120)] Segmentation > fault happened > 0# handler(int) at nc-pdx/main.cpp:225 > 1# 0x7F0A9F5F13C0 in /lib/x86_64-linux-gnu/libpthread.so.0 > 2# apache::geode::client::PdxType::getPdxFieldTypes() const at > cppcache/src/PdxType.hpp:181 > 3# apache::geode::client::PdxInstanceImpl::getFieldNames() at > cppcache/src/PdxInstanceImpl.cpp:1383 > 4# main at nc-pdx/main.cpp:374 > 5# __libc_start_main in /lib/x86_64-linux-gnu/libc.so.6 > 6# _start in build/pdx{noformat} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (GEODE-9268) Fix coredump whenever getFieldNames is called after a cluster restart
[ https://issues.apache.org/jira/browse/GEODE-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493394#comment-17493394 ] ASF GitHub Bot commented on GEODE-9268: --- pdxcodemonkey commented on a change in pull request #806: URL: https://github.com/apache/geode-native/pull/806#discussion_r808316426 ## File path: cppcache/integration/test/PdxTypeRegistryTest.cpp ## @@ -140,19 +141,60 @@ TEST(PdxTypeRegistryTest, cleanupOnClusterRestart) { listener->waitConnected(); - key = "after-restart"; - region->put(key, createTestPdxInstance(cache, key)); + region->put(key, pdx); // If PdxTypeRegistry was cleaned up, then the PdxType should have been // registered in the new cluster std::shared_ptr result; - auto query = - qs->newQuery("SELECT * FROM /region WHERE entryName = '" + key + "'"); + auto query = qs->newQuery("SELECT * FROM /region WHERE int_value = -1"); EXPECT_NO_THROW(result = query->execute()); EXPECT_TRUE(result); - EXPECT_GT(result->size(), 0); + EXPECT_EQ(result->size(), 1); } +TEST(PdxTypeRegistryTest, cleanupOnClusterRestartAndFetchFields) { + Cluster cluster{LocatorCount{1}, ServerCount{2}}; + cluster.start(); + + auto& gfsh = cluster.getGfsh(); + gfsh.create().region().withName("region").withType("PARTITION").execute(); + + auto listener = std::make_shared(); + + auto cache = createTestCache(); + createTestPool(cluster, cache); + auto qs = cache.getQueryService("pool"); + auto region = createTestRegion(cache, listener); + + std::string key = "before-shutdown"; + region->put(key, createTestPdxInstance(cache, key)); + auto object = region->get(key); + EXPECT_TRUE(object); + + auto pdx = std::dynamic_pointer_cast(object); + EXPECT_TRUE(pdx); + + // Shutdown and wait for some time + gfsh.shutdown().execute(); + listener->waitDisconnected(); + std::this_thread::sleep_for(std::chrono::seconds{15}); + + for (auto& server : cluster.getServers()) { +server.start(); + } + + listener->waitConnected(); + auto fields = 
pdx->getFieldNames(); Review comment: For completeness' sake, probably yes? If we modified a method because it could hit the registry and core dump before, we should probably call it now and make sure it no longer blows up. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: notifications-unsubscr...@geode.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Fix coredump whenever getFieldNames is called after a cluster restart > - > > Key: GEODE-9268 > URL: https://issues.apache.org/jira/browse/GEODE-9268 > Project: Geode > Issue Type: Bug > Components: native client >Reporter: Mario Salazar de Torres >Assignee: Mario Salazar de Torres >Priority: Major > Labels: pull-request-available > > *WHEN* A PdxInstance is fetched from a region > *AND* The whole cluster is restarted, triggering PdxTypeRegistry cleanup. > *AND* getFieldNames is called on the PdxInstance created just before > *THEN* a coredump happens. > — > *Additional information:* > Callstack: > {noformat} > [ERROR 2021/05/05 12:57:12.781834 CEST main (139683590957120)] Segmentation > fault happened > 0# handler(int) at nc-pdx/main.cpp:225 > 1# 0x7F0A9F5F13C0 in /lib/x86_64-linux-gnu/libpthread.so.0 > 2# apache::geode::client::PdxType::getPdxFieldTypes() const at > cppcache/src/PdxType.hpp:181 > 3# apache::geode::client::PdxInstanceImpl::getFieldNames() at > cppcache/src/PdxInstanceImpl.cpp:1383 > 4# main at nc-pdx/main.cpp:374 > 5# __libc_start_main in /lib/x86_64-linux-gnu/libc.so.6 > 6# _start in build/pdx{noformat} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (GEODE-9268) Fix coredump whenever getFieldNames is called after a cluster restart
[ https://issues.apache.org/jira/browse/GEODE-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493387#comment-17493387 ] ASF GitHub Bot commented on GEODE-9268: --- pdxcodemonkey commented on a change in pull request #806: URL: https://github.com/apache/geode-native/pull/806#discussion_r808300615 ## File path: cppcache/integration/test/PdxTypeRegistryTest.cpp ## @@ -140,19 +141,60 @@ TEST(PdxTypeRegistryTest, cleanupOnClusterRestart) { listener->waitConnected(); - key = "after-restart"; - region->put(key, createTestPdxInstance(cache, key)); + region->put(key, pdx); // If PdxTypeRegistry was cleaned up, then the PdxType should have been // registered in the new cluster std::shared_ptr result; - auto query = - qs->newQuery("SELECT * FROM /region WHERE entryName = '" + key + "'"); + auto query = qs->newQuery("SELECT * FROM /region WHERE int_value = -1"); EXPECT_NO_THROW(result = query->execute()); EXPECT_TRUE(result); - EXPECT_GT(result->size(), 0); + EXPECT_EQ(result->size(), 1); Review comment: New comments look great! It's always helpful to have a little understanding of what the tests are supposed to be doing, esp. for something as complex as PDX. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: notifications-unsubscr...@geode.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Fix coredump whenever getFieldNames is called after a cluster restart > - > > Key: GEODE-9268 > URL: https://issues.apache.org/jira/browse/GEODE-9268 > Project: Geode > Issue Type: Bug > Components: native client >Reporter: Mario Salazar de Torres >Assignee: Mario Salazar de Torres >Priority: Major > Labels: pull-request-available > > *WHEN* A PdxInstance is fetched from a region > *AND* The whole cluster is restarted, triggering PdxTypeRegistry cleanup. 
> *AND* getFieldNames is called on the PdxInstance created just before > *THEN* a coredump happens. > — > *Additional information:* > Callstack: > {noformat} > [ERROR 2021/05/05 12:57:12.781834 CEST main (139683590957120)] Segmentation > fault happened > 0# handler(int) at nc-pdx/main.cpp:225 > 1# 0x7F0A9F5F13C0 in /lib/x86_64-linux-gnu/libpthread.so.0 > 2# apache::geode::client::PdxType::getPdxFieldTypes() const at > cppcache/src/PdxType.hpp:181 > 3# apache::geode::client::PdxInstanceImpl::getFieldNames() at > cppcache/src/PdxInstanceImpl.cpp:1383 > 4# main at nc-pdx/main.cpp:374 > 5# __libc_start_main in /lib/x86_64-linux-gnu/libc.so.6 > 6# _start in build/pdx{noformat} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (GEODE-9268) Fix coredump whenever getFieldNames is called after a cluster restart
[ https://issues.apache.org/jira/browse/GEODE-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493384#comment-17493384 ] ASF GitHub Bot commented on GEODE-9268: --- pdxcodemonkey commented on a change in pull request #806: URL: https://github.com/apache/geode-native/pull/806#discussion_r808284464 ## File path: cppcache/integration/test/PdxTypeRegistryTest.cpp ## @@ -140,19 +141,60 @@ TEST(PdxTypeRegistryTest, cleanupOnClusterRestart) { listener->waitConnected(); - key = "after-restart"; - region->put(key, createTestPdxInstance(cache, key)); + region->put(key, pdx); // If PdxTypeRegistry was cleaned up, then the PdxType should have been // registered in the new cluster std::shared_ptr result; - auto query = - qs->newQuery("SELECT * FROM /region WHERE entryName = '" + key + "'"); + auto query = qs->newQuery("SELECT * FROM /region WHERE int_value = -1"); EXPECT_NO_THROW(result = query->execute()); EXPECT_TRUE(result); - EXPECT_GT(result->size(), 0); + EXPECT_EQ(result->size(), 1); } +TEST(PdxTypeRegistryTest, cleanupOnClusterRestartAndFetchFields) { + Cluster cluster{LocatorCount{1}, ServerCount{2}}; + cluster.start(); + + auto& gfsh = cluster.getGfsh(); + gfsh.create().region().withName("region").withType("PARTITION").execute(); + + auto listener = std::make_shared(); + + auto cache = createTestCache(); + createTestPool(cluster, cache); + auto qs = cache.getQueryService("pool"); + auto region = createTestRegion(cache, listener); + + std::string key = "before-shutdown"; + region->put(key, createTestPdxInstance(cache, key)); + auto object = region->get(key); + EXPECT_TRUE(object); + + auto pdx = std::dynamic_pointer_cast(object); + EXPECT_TRUE(pdx); + + // Shutdown and wait for some time + gfsh.shutdown().execute(); + listener->waitDisconnected(); + std::this_thread::sleep_for(std::chrono::seconds{15}); + + for (auto& server : cluster.getServers()) { +server.start(); + } + + listener->waitConnected(); + auto fields = 
pdx->getFieldNames(); + EXPECT_TRUE(fields); + + std::set fields_set; + for (auto field : fields->value()) { +fields_set.insert(field->toString()); + } + + EXPECT_EQ(fields_set.count("entryName"), 1); Review comment: Fair enough -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: notifications-unsubscr...@geode.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Fix coredump whenever getFieldNames is called after a cluster restart > - > > Key: GEODE-9268 > URL: https://issues.apache.org/jira/browse/GEODE-9268 > Project: Geode > Issue Type: Bug > Components: native client >Reporter: Mario Salazar de Torres >Assignee: Mario Salazar de Torres >Priority: Major > Labels: pull-request-available > > *WHEN* A PdxInstance is fetched from a region > *AND* The whole cluster is restarted, triggering PdxTypeRegistry cleanup. > *AND* getFieldNames is called on the PdxInstance created just before > *THEN* a coredump happens. > — > *Additional information:* > Callstack: > {noformat} > [ERROR 2021/05/05 12:57:12.781834 CEST main (139683590957120)] Segmentation > fault happened > 0# handler(int) at nc-pdx/main.cpp:225 > 1# 0x7F0A9F5F13C0 in /lib/x86_64-linux-gnu/libpthread.so.0 > 2# apache::geode::client::PdxType::getPdxFieldTypes() const at > cppcache/src/PdxType.hpp:181 > 3# apache::geode::client::PdxInstanceImpl::getFieldNames() at > cppcache/src/PdxInstanceImpl.cpp:1383 > 4# main at nc-pdx/main.cpp:374 > 5# __libc_start_main in /lib/x86_64-linux-gnu/libc.so.6 > 6# _start in build/pdx{noformat} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (GEODE-9268) Fix coredump whenever getFieldNames is called after a cluster restart
[ https://issues.apache.org/jira/browse/GEODE-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493383#comment-17493383 ] ASF GitHub Bot commented on GEODE-9268: --- pdxcodemonkey commented on a change in pull request #806: URL: https://github.com/apache/geode-native/pull/806#discussion_r808282553 ## File path: cppcache/integration/test/PdxInstanceTest.cpp ## @@ -273,32 +270,51 @@ TEST(PdxInstanceTest, testPdxInstance) { EXPECT_EQ(-960665662, pdxTypeInstance->hashcode()) << "Pdxhashcode hashcode not matched with java pdx hash code."; +} - // TODO split into separate test for nested pdx object test. - ParentPdx pdxParentOriginal(10); - auto pdxParentInstanceFactory = - cache.createPdxInstanceFactory("testobject.ParentPdx"); - clonePdxInstance(pdxParentOriginal, pdxParentInstanceFactory); - auto pdxParentInstance = pdxParentInstanceFactory.create(); - EXPECT_EQ("testobject.ParentPdx", pdxParentInstance->getClassName()) - << "pdxTypeInstance.getClassName should return testobject.ParentPdx."; +TEST(PdxInstanceTest, testNestedPdxInstance) { + Cluster cluster{LocatorCount{1}, ServerCount{1}}; + + cluster.start(); + + cluster.getGfsh() + .create() + .region() + .withName("region") + .withType("REPLICATE") + .execute(); + + auto cache = cluster.createCache(); + auto region = setupRegion(cache); + auto&& typeRegistry = cache.getTypeRegistry(); + auto&& cachePerfStats = std::dynamic_pointer_cast(region) + ->getCacheImpl() + ->getCachePerfStats(); + + typeRegistry.registerPdxType(ChildPdx::createDeserializable); + typeRegistry.registerPdxType(ParentPdx::createDeserializable); + + ParentPdx original{10}; + auto factory = cache.createPdxInstanceFactory(original.getClassName()); + clonePdxInstance(original, factory); + auto pdxInstance = factory.create(); auto keyport = CacheableKey::create("pdxParentOriginal"); - region->put(keyport, pdxParentInstance); - auto objectFromPdxParentInstanceGet = + region->put(keyport, pdxInstance); + 
+  auto object = std::dynamic_pointer_cast(region->get(keyport));
+  EXPECT_TRUE(object);
-  EXPECT_EQ(1, cachePerfStats.getPdxInstanceDeserializations())
+  EXPECT_EQ(0, cachePerfStats.getPdxInstanceDeserializations())
       << "pdxInstanceDeserialization should be equal to 1.";

Review comment: Likewise on line 313, the test is "less than 0", and the text says "greater than 0"

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: notifications-unsubscr...@geode.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org

> Fix coredump whenever getFieldNames is called after a cluster restart
> ---------------------------------------------------------------------
>
> Key: GEODE-9268
> URL: https://issues.apache.org/jira/browse/GEODE-9268
> Project: Geode
> Issue Type: Bug
> Components: native client
> Reporter: Mario Salazar de Torres
> Assignee: Mario Salazar de Torres
> Priority: Major
> Labels: pull-request-available
>
> *WHEN* A PdxInstance is fetched from a region
> *AND* The whole cluster is restarted, triggering PdxTypeRegistry cleanup.
> *AND* getFieldNames is called on the PdxInstance created just before
> *THEN* a coredump happens.
> —
> *Additional information:*
> Callstack:
> {noformat}
> [ERROR 2021/05/05 12:57:12.781834 CEST main (139683590957120)] Segmentation fault happened
> 0# handler(int) at nc-pdx/main.cpp:225
> 1# 0x7F0A9F5F13C0 in /lib/x86_64-linux-gnu/libpthread.so.0
> 2# apache::geode::client::PdxType::getPdxFieldTypes() const at cppcache/src/PdxType.hpp:181
> 3# apache::geode::client::PdxInstanceImpl::getFieldNames() at cppcache/src/PdxInstanceImpl.cpp:1383
> 4# main at nc-pdx/main.cpp:374
> 5# __libc_start_main in /lib/x86_64-linux-gnu/libc.so.6
> 6# _start in build/pdx{noformat}
-- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (GEODE-9268) Fix coredump whenever getFieldNames is called after a cluster restart
[ https://issues.apache.org/jira/browse/GEODE-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493379#comment-17493379 ] ASF GitHub Bot commented on GEODE-9268: --- pdxcodemonkey commented on a change in pull request #806: URL: https://github.com/apache/geode-native/pull/806#discussion_r808276065

## File path: cppcache/integration/test/PdxInstanceTest.cpp
## @@ -273,32 +270,51 @@ TEST(PdxInstanceTest, testPdxInstance) {
   EXPECT_EQ(-960665662, pdxTypeInstance->hashcode())
       << "Pdxhashcode hashcode not matched with java pdx hash code.";
+}
-  // TODO split into separate test for nested pdx object test.
-  ParentPdx pdxParentOriginal(10);
-  auto pdxParentInstanceFactory =
-      cache.createPdxInstanceFactory("testobject.ParentPdx");
-  clonePdxInstance(pdxParentOriginal, pdxParentInstanceFactory);
-  auto pdxParentInstance = pdxParentInstanceFactory.create();
-  EXPECT_EQ("testobject.ParentPdx", pdxParentInstance->getClassName())
-      << "pdxTypeInstance.getClassName should return testobject.ParentPdx.";
+TEST(PdxInstanceTest, testNestedPdxInstance) {
+  Cluster cluster{LocatorCount{1}, ServerCount{1}};
+
+  cluster.start();
+
+  cluster.getGfsh()
+      .create()
+      .region()
+      .withName("region")
+      .withType("REPLICATE")
+      .execute();
+
+  auto cache = cluster.createCache();
+  auto region = setupRegion(cache);
+  auto&& typeRegistry = cache.getTypeRegistry();
+  auto&& cachePerfStats = std::dynamic_pointer_cast(region)
+                              ->getCacheImpl()
+                              ->getCachePerfStats();
+
+  typeRegistry.registerPdxType(ChildPdx::createDeserializable);
+  typeRegistry.registerPdxType(ParentPdx::createDeserializable);
+
+  ParentPdx original{10};
+  auto factory = cache.createPdxInstanceFactory(original.getClassName());
+  clonePdxInstance(original, factory);
+  auto pdxInstance = factory.create();
   auto keyport = CacheableKey::create("pdxParentOriginal");
-  region->put(keyport, pdxParentInstance);
-  auto objectFromPdxParentInstanceGet =
+  region->put(keyport, pdxInstance);
+
+  auto object = std::dynamic_pointer_cast(region->get(keyport));
+  EXPECT_TRUE(object);
-  EXPECT_EQ(1, cachePerfStats.getPdxInstanceDeserializations())
+  EXPECT_EQ(0, cachePerfStats.getPdxInstanceDeserializations())
       << "pdxInstanceDeserialization should be equal to 1.";

Review comment: The statements on lines 237 and 258 still don't agree with the text, pls fix to avoid confusion.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: notifications-unsubscr...@geode.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org

> Fix coredump whenever getFieldNames is called after a cluster restart
> ---------------------------------------------------------------------
>
> Key: GEODE-9268
> URL: https://issues.apache.org/jira/browse/GEODE-9268
> Project: Geode
> Issue Type: Bug
> Components: native client
> Reporter: Mario Salazar de Torres
> Assignee: Mario Salazar de Torres
> Priority: Major
> Labels: pull-request-available
>
> *WHEN* A PdxInstance is fetched from a region
> *AND* The whole cluster is restarted, triggering PdxTypeRegistry cleanup.
> *AND* getFieldNames is called on the PdxInstance created just before
> *THEN* a coredump happens.
> —
> *Additional information:*
> Callstack:
> {noformat}
> [ERROR 2021/05/05 12:57:12.781834 CEST main (139683590957120)] Segmentation fault happened
> 0# handler(int) at nc-pdx/main.cpp:225
> 1# 0x7F0A9F5F13C0 in /lib/x86_64-linux-gnu/libpthread.so.0
> 2# apache::geode::client::PdxType::getPdxFieldTypes() const at cppcache/src/PdxType.hpp:181
> 3# apache::geode::client::PdxInstanceImpl::getFieldNames() at cppcache/src/PdxInstanceImpl.cpp:1383
> 4# main at nc-pdx/main.cpp:374
> 5# __libc_start_main in /lib/x86_64-linux-gnu/libc.so.6
> 6# _start in build/pdx{noformat}
-- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (GEODE-10052) CI Failure: OutOfMemoryDUnitTest tests of Publish command fail expecting exception that was not thrown
[ https://issues.apache.org/jira/browse/GEODE-10052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated GEODE-10052: --- Labels: pull-request-available (was: ) > CI Failure: OutOfMemoryDUnitTest tests of Publish command fail expecting > exception that was not thrown > -- > > Key: GEODE-10052 > URL: https://issues.apache.org/jira/browse/GEODE-10052 > Project: Geode > Issue Type: Bug > Components: redis >Affects Versions: 1.16.0 >Reporter: Hale Bales >Assignee: Donal Evans >Priority: Major > Labels: pull-request-available > > There were three failures within a couple of days. They are all in publish > tests. > {code:java} > OutOfMemoryDUnitTest > shouldReturnOOMError_forPublish_whenThresholdReached > FAILED > java.lang.AssertionError: > Expecting code to raise a throwable. > at > org.apache.geode.redis.OutOfMemoryDUnitTest.addMultipleKeysToServer1UntilOOMExceptionIsThrown(OutOfMemoryDUnitTest.java:357) > at > org.apache.geode.redis.OutOfMemoryDUnitTest.fillServer1Memory(OutOfMemoryDUnitTest.java:344) > at > org.apache.geode.redis.OutOfMemoryDUnitTest.shouldReturnOOMError_forPublish_whenThresholdReached(OutOfMemoryDUnitTest.java:210) > {code} > {code:java} > OutOfMemoryDUnitTest > shouldReturnOOMError_forPublish_whenThresholdReached > FAILED > java.lang.AssertionError: > Expecting code to raise a throwable. 
> at > org.apache.geode.redis.OutOfMemoryDUnitTest.addMultipleKeysToServer1UntilOOMExceptionIsThrown(OutOfMemoryDUnitTest.java:357) > at > org.apache.geode.redis.OutOfMemoryDUnitTest.fillServer1Memory(OutOfMemoryDUnitTest.java:344) > at > org.apache.geode.redis.OutOfMemoryDUnitTest.shouldReturnOOMError_forPublish_whenThresholdReached(OutOfMemoryDUnitTest.java:210) > {code} > {code:java} > OutOfMemoryDUnitTest > shouldAllowPublish_afterDroppingBelowCriticalThreshold > FAILED > org.awaitility.core.ConditionTimeoutException: Assertion condition > defined as a org.apache.geode.redis.OutOfMemoryDUnitTest > Expecting code to raise a throwable within 5 minutes. > at > org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:164) > at > org.awaitility.core.AssertionCondition.await(AssertionCondition.java:119) > at > org.awaitility.core.AssertionCondition.await(AssertionCondition.java:31) > at > org.awaitility.core.ConditionFactory.until(ConditionFactory.java:939) > at > org.awaitility.core.ConditionFactory.untilAsserted(ConditionFactory.java:723) > at > org.apache.geode.redis.OutOfMemoryDUnitTest.shouldAllowPublish_afterDroppingBelowCriticalThreshold(OutOfMemoryDUnitTest.java:328) > Caused by: > java.lang.AssertionError: > Expecting code to raise a throwable. > at > org.apache.geode.redis.OutOfMemoryDUnitTest.lambda$shouldAllowPublish_afterDroppingBelowCriticalThreshold$36(OutOfMemoryDUnitTest.java:328) > {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
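The failing assertions all follow one pattern: keep invoking an action until it raises a throwable (the server crossing its memory threshold), and fail if none appears within the timeout. Stripped of the Awaitility/AssertJ machinery the real test uses, the pattern can be sketched in plain Java (class and method names here are illustrative, not Geode test utilities):

```java
public class AwaitThrowable {
  // Poll the action until it throws, mirroring the test's
  // "Expecting code to raise a throwable within 5 minutes" assertion.
  public static Throwable awaitThrowable(Runnable action, long timeoutMillis) {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (System.currentTimeMillis() < deadline) {
      try {
        action.run();
      } catch (Throwable t) {
        return t; // expected outcome: the action finally threw
      }
      try {
        Thread.sleep(50); // back off briefly before retrying
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new AssertionError("interrupted while waiting", e);
      }
    }
    // This is the failure mode reported in the ticket: the action kept
    // succeeding and no throwable was ever raised before the timeout.
    throw new AssertionError("Expecting code to raise a throwable.");
  }
}
```

The CI failures above mean the polled puts/publishes never threw, i.e. the server never reported crossing the critical memory threshold within the window.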
[jira] [Updated] (GEODE-10056) Gateway-receiver connection load maintained only on one locator
[ https://issues.apache.org/jira/browse/GEODE-10056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jakov Varenina updated GEODE-10056: --- Description: The first problem is that servers send an incorrect gateway-receiver connection load to locators in CacheServerLoadMessage. The second problem is that a locator does not refresh the per-server gateway-receiver load in its local map with the load received in CacheServerLoadMessage. This appears to be a bug: a mechanism already exists to track and store the per-server gateway-receiver connection load on the locator, but that load is never refreshed when a CacheServerLoadMessage is received. Currently, the receiver load is only refreshed/increased on the locator that handles the ClientConnectionRequest\{group=__recv_group...} and ClientConnectionResponse messages from the remote server that is trying to establish a gateway-sender connection. All other locators in the cluster never refresh the gateway-receiver connection load. When the locator that was serving remote gateway-senders goes down, a new locator takes over that job. The problem is that the new locator does not have the correct load (it was never refreshed), which in most situations results in new gateway-sender connections being established in an unbalanced way. Way to reproduce the issue: Start 2 clusters. Let's call site1 the sending and site2 the receiving site. The receiving site should have at least 2 locators. Both have 2 servers. No regions are needed.
Cluster-1 gfsh>list members
Member Count : 3
Name      | Id
--------- | --
locator10 | 10.0.2.15(locator10:7332:locator):41000 [Coordinator]
server11  | 10.0.2.15(server11:8358):41003
server12  | 10.0.2.15(server12:8717):41005

Cluster-2 gfsh>list members
Member Count : 4
Name      | Id
--------- | --
locator10 | 10.0.2.15(locator10:7562:locator):41001 [Coordinator]
locator11 | 10.0.2.15(locator11:8103:locator):41002
server11  | 10.0.2.15(server11:8547):41004
server12  | 10.0.2.15(server12:8908):41006

Create GW receiver in Site2 on both servers.

Cluster-2 gfsh>list gateways
GatewayReceiver Section
Member                         | Port | Sender Count | Senders Connected
------------------------------ | ---- | ------------ | -----------------
10.0.2.15(server11:8547):41004 | 5175 | 0            |
10.0.2.15(server12:8908):41006 | 5457 | 0            |

Create GW sender in Site1 on both servers. Use 10 dispatcher threads for easier observation.

Cluster-1 gfsh>list gateways
GatewaySender Section
GatewaySender Id | Member                         | Remote Cluster Id | Type     | Status                | Queued Events | Receiver Location
---------------- | ------------------------------ | ----------------- | -------- | --------------------- | ------------- | -----------------
senderTo2        | 10.0.2.15(server11:8358):41003 | 2                 | Parallel | Running and Connected | 0             | 10.0.2.15:5457
senderTo2        | 10.0.2.15(server12:8717):41005 | 2                 | Parallel | Running and Connected | 0             | 10.0.2.15:5457

Observe the balance in GW receiver connections in Site2. It will be perfect.

Cluster-2 gfsh>list gateways
GatewayReceiver Section
Member                         | Port | Sender Count | Senders Connected
------------------------------ | ---- | ------------ | -----------------
10.0.2.15(server11:8547):41004 | 5175 | 12           | 10.0.2.15(server12:8717):41005, 10.0.2.15(server12:8717):41005, 10.0.2.15(server11:8358):41003, 10.0.2.15(server11:..
10.0.2.15(server12:8908):41006 | 5457 | 12           | 10.0.2.15(server12:8717):41005, 10.0.2.15(server12:8717):41005, 10.0.2.15(server12:8717):41005, 10.0.2.15(server12:..

12 connections each - 10 payload + 2 ping connections. Now stop the GW receiver in one server of site2. In Site1 do a stop/start gateway-sender command - all connections will go to the only receiver in site2 (as expected).
Check it:

Cluster-2 gfsh>list gateways
GatewayReceiver Section
Member                         | Port | Sender Count | Senders Connected
------------------------------ | ---- | ------------ | -----------------
10.0.2.15(server11:8547):41004 | 5175 | 22           | 10.0.2.15(server11:8358):41003, 10.0.2.15(server12:8717):41005, 10.0.2.15(server11:8358):41003, 10.0.2.15(server11:..
10.0.2.15(server12:8908):41006 | 5457 | 0            |

Now 22 in just one receiver - 20 payload + 1 ping from each sender. Stop GW sender in one server in Site1. Connections in the GW receiver drop to half the value (also expected).
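The locator-side selection the ticket describes can be modeled as a per-server load map. The sketch below is an illustrative model (class, method, and key names are hypothetical, not Geode's actual internals): the locator picks the least-loaded receiver for each connection request, and the refresh step is exactly what the ticket says never happens on locators other than the one dealing out connections, so a standby locator keeps answering from stale counts:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model of a locator's per-receiver connection-load tracking.
public class ReceiverLoadMap {
  private final Map<String, Integer> connectionLoad = new HashMap<>();

  // This locator handed a gateway-sender connection to the given receiver.
  public void incrementLoad(String server) {
    connectionLoad.merge(server, 1, Integer::sum);
  }

  // What handling a CacheServerLoadMessage should do on *every* locator:
  // overwrite the locally tracked load with the server-reported value.
  // Per the ticket, this refresh never happens, so standby locators go stale.
  public void refreshLoad(String server, int reportedLoad) {
    connectionLoad.put(server, reportedLoad);
  }

  // Answer a ClientConnectionRequest with the least-loaded known receiver.
  public String pickLeastLoaded() {
    return connectionLoad.entrySet().stream()
        .min(Map.Entry.comparingByValue())
        .map(Map.Entry::getKey)
        .orElseThrow(() -> new IllegalStateException("no receivers known"));
  }
}
```

In this model, a locator whose map was never refreshed still believes a receiver carries 0 connections and keeps directing new senders at it, which matches the unbalanced distribution observed after a locator failover.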
[jira] [Updated] (GEODE-10055) AbstractLauncher prints info and debug with stderr instead of stdout
[ https://issues.apache.org/jira/browse/GEODE-10055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated GEODE-10055: --- Labels: needsTriage pull-request-available (was: needsTriage)
> AbstractLauncher prints info and debug with stderr instead of stdout
> --------------------------------------------------------------------
>
> Key: GEODE-10055
> URL: https://issues.apache.org/jira/browse/GEODE-10055
> Project: Geode
> Issue Type: Bug
> Components: logging
> Affects Versions: 1.12.8, 1.13.7, 1.14.3
> Reporter: Mario Kevo
> Assignee: Mario Kevo
> Priority: Major
> Labels: needsTriage, pull-request-available
>
> The problem is that locator/server launcher logs are printed to stderr in both cases, info and debug.
> {code:java}
> protected void info(final Object message, final Object... args) {
>   if (args != null && args.length > 0) {
>     System.err.printf(message.toString(), args);
>   } else {
>     System.err.print(message);
>   }
> }
> {code}
> When this output is piped into other tools, they treat it as an error because it arrives on stderr instead of stdout.
-- This message was sent by Atlassian Jira (v8.20.1#820001)
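The obvious direction for the fix is to route informational output through System.out and reserve System.err for genuine errors. A minimal sketch of the corrected helper (illustrative only, not the actual AbstractLauncher patch; the real method is protected, made public here so the sketch is self-contained):

```java
// Hypothetical corrected version of the helper quoted above: info/debug go
// to stdout so shells and log collectors classify them correctly, while
// stderr remains available for real errors.
public class LauncherOutput {
  public static void info(final Object message, final Object... args) {
    if (args != null && args.length > 0) {
      System.out.printf(message.toString(), args); // was System.err.printf
    } else {
      System.out.print(message);                   // was System.err.print
    }
  }

  public static void error(final Object message) {
    System.err.print(message); // genuine errors still belong on stderr
  }
}
```

With this split, `gfsh start locator ... 2>/dev/null` keeps its informational output, and monitoring tools no longer flag routine startup messages as errors.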
[jira] [Updated] (GEODE-10056) Gateway-receiver connection load maintained only on one locator
[ https://issues.apache.org/jira/browse/GEODE-10056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jakov Varenina updated GEODE-10056: --- Description: When a GW sender wants to create a connection to a receiver, it asks a remote locator where to connect (which server) using a CLIENT_CONNECTION_REQUEST message. The locator should check the load (actually just the connection count in each GW receiver) and respond with the least-loaded server. But servers do not track the load for their GW receiver acceptor! It is always 0. What happens then? It looks like each locator maintains a map of the load based on the connections it has dealt out, so there will be no imbalance problems until either the locator restarts or clients get their connections from some other locator in the cluster. Both are quite valid scenarios, and the net result is an imbalance in replication connections.

Start 2 clusters. Let's call site1 the sending and site2 the receiving site. The receiving site should have at least 2 locators. Both have 2 servers. No regions are needed.

Cluster-1 gfsh>list members
Member Count : 3
Name      | Id
--------- | --
locator10 | 10.0.2.15(locator10:7332:locator):41000 [Coordinator]
server11  | 10.0.2.15(server11:8358):41003
server12  | 10.0.2.15(server12:8717):41005

Cluster-2 gfsh>list members
Member Count : 4
Name      | Id
--------- | --
locator10 | 10.0.2.15(locator10:7562:locator):41001 [Coordinator]
locator11 | 10.0.2.15(locator11:8103:locator):41002
server11  | 10.0.2.15(server11:8547):41004
server12  | 10.0.2.15(server12:8908):41006

Create GW receiver in Site2 on both servers.

Cluster-2 gfsh>list gateways
GatewayReceiver Section
Member                         | Port | Sender Count | Senders Connected
------------------------------ | ---- | ------------ | -----------------
10.0.2.15(server11:8547):41004 | 5175 | 0            |
10.0.2.15(server12:8908):41006 | 5457 | 0            |

Create GW sender in Site1 on both servers. Use 10 dispatcher threads for easier observation.
Cluster-1 gfsh>list gateways
GatewaySender Section
GatewaySender Id | Member                         | Remote Cluster Id | Type     | Status                | Queued Events | Receiver Location
---------------- | ------------------------------ | ----------------- | -------- | --------------------- | ------------- | -----------------
senderTo2        | 10.0.2.15(server11:8358):41003 | 2                 | Parallel | Running and Connected | 0             | 10.0.2.15:5457
senderTo2        | 10.0.2.15(server12:8717):41005 | 2                 | Parallel | Running and Connected | 0             | 10.0.2.15:5457

Observe balance in GW receiver connections in Site2. It will be perfect.

Cluster-2 gfsh>list gateways
GatewayReceiver Section
Member                         | Port | Sender Count | Senders Connected
------------------------------ | ---- | ------------ | -----------------
10.0.2.15(server11:8547):41004 | 5175 | 12           | 10.0.2.15(server12:8717):41005, 10.0.2.15(server12:8717):41005, 10.0.2.15(server11:8358):41003, 10.0.2.15(server11:..
10.0.2.15(server12:8908):41006 | 5457 | 12           | 10.0.2.15(server12:8717):41005, 10.0.2.15(server12:8717):41005, 10.0.2.15(server12:8717):41005, 10.0.2.15(server12:..

12 connections each - 10 payload + 2 ping connections. Now stop GW receiver in one server of site2. In Site1 do a stop/start gateway-sender command - all connections will go to the only receiver in site2 (as expected). Check it:

Cluster-2 gfsh>list gateways
GatewayReceiver Section
Member                         | Port | Sender Count | Senders Connected
------------------------------ | ---- | ------------ | -----------------
10.0.2.15(server11:8547):41004 | 5175 | 22           | 10.0.2.15(server11:8358):41003, 10.0.2.15(server12:8717):41005, 10.0.2.15(server11:8358):41003, 10.0.2.15(server11:..
10.0.2.15(server12:8908):41006 | 5457 | 0            |

Now 22 in just one receiver - 20 payload + 1 ping from each sender. Stop GW sender in one server in Site1. Connection drops in GW receiver to half the value (also expected).

Cluster-2 gfsh>list gateways
GatewayReceiver Section
Member                         | Port | Sender Count | Senders Connected
------------------------------ | ---- | ------------ | -----------------
10.0.2.15(server11:8547):41004 | 5175 | 11           | 10.0.2.15(server11:8358):41003,
[jira] [Updated] (GEODE-10056) Gateway-receiver connection load maintained only on one locator
[ https://issues.apache.org/jira/browse/GEODE-10056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jakov Varenina updated GEODE-10056: --- Summary: Gateway-receiver connection load maintained only on one locator (was: Gateway-receiver load maintained only on one locator)
> Gateway-receiver connection load maintained only on one locator
> ---------------------------------------------------------------
>
> Key: GEODE-10056
> URL: https://issues.apache.org/jira/browse/GEODE-10056
> Project: Geode
> Issue Type: Bug
> Reporter: Jakov Varenina
> Assignee: Jakov Varenina
> Priority: Major
> Labels: needsTriage
>
> When a GW sender wants to create a connection to a receiver, it asks a remote
> locator where to connect to (which server) using a CLIENT_CONNECTION_REQUEST
> message. The locator should check the load (actually just the connection
> count of each GW receiver) and respond with the least-loaded server.
> But servers do not track the load for their GW receiver acceptor! It is
> always 0. What happens then?
> It looks like each locator maintains a map of the load based on the
> connections it has handed out, so there will be no balancing problems until
> either the locator restarts or clients get their connections from some other
> locator in the cluster. Both are quite valid scenarios in my opinion, and the
> net result is an imbalance in replication connections.
> How to test?
> Start 2 clusters. Let's call site1 the sending site and site2 the receiving
> site. The receiving site should have at least 2 locators. Both sites have 2
> servers. No regions are needed.
>
> Cluster-1 gfsh>list members
> Member Count : 3
> Name      | Id
> --------- | -----------------------------------------------------
> locator10 | 10.0.2.15(locator10:7332:locator):41000 [Coordinator]
> server11  | 10.0.2.15(server11:8358):41003
> server12  | 10.0.2.15(server12:8717):41005
>
> Cluster-2 gfsh>list members
> Member Count : 4
> Name      | Id
> --------- | -----------------------------------------------------
> locator10 | 10.0.2.15(locator10:7562:locator):41001 [Coordinator]
> locator11 | 10.0.2.15(locator11:8103:locator):41002
> server11  | 10.0.2.15(server11:8547):41004
> server12  | 10.0.2.15(server12:8908):41006
>
> Create a GW receiver in Site2 on both servers.
> Cluster-2 gfsh>list gateways
> GatewayReceiver Section
> Member                         | Port | Sender Count | Senders Connected
> ------------------------------ | ---- | ------------ | -----------------
> 10.0.2.15(server11:8547):41004 | 5175 | 0            |
> 10.0.2.15(server12:8908):41006 | 5457 | 0            |
>
> Create a GW sender in Site1 on both servers. Use 10 dispatcher threads for
> easier observation.
> Cluster-1 gfsh>list gateways
> GatewaySender Section
> GatewaySender Id | Member                         | Remote Cluster Id | Type     | Status                | Queued Events | Receiver Location
> ---------------- | ------------------------------ | ----------------- | -------- | --------------------- | ------------- | -----------------
> senderTo2        | 10.0.2.15(server11:8358):41003 | 2                 | Parallel | Running and Connected | 0             | 10.0.2.15:5457
> senderTo2        | 10.0.2.15(server12:8717):41005 | 2                 | Parallel | Running and Connected | 0             | 10.0.2.15:5457
>
> Observe the balance of GW receiver connections in Site2. It will be perfect.
> Cluster-2 gfsh>list gateways
> GatewayReceiver Section
> Member                         | Port | Sender Count | Senders Connected
> ------------------------------ | ---- | ------------ | -----------------
> 10.0.2.15(server11:8547):41004 | 5175 | 12           | 10.0.2.15(server12:8717):41005, 10.0.2.15(server12:8717):41005, 10.0.2.15(server11:8358):41003, 10.0.2.15(server11:..
> 10.0.2.15(server12:8908):41006 | 5457 | 12           | 10.0.2.15(server12:8717):41005, 10.0.2.15(server12:8717):41005, 10.0.2.15(server12:8717):41005, 10.0.2.15(server12:..
>
> 12 connections each - 10 payload + 2 ping connections.
> Now stop the GW receiver in one server of site2. In Site1 do a stop/start
> gateway-sender command - all connections will go to the only remaining
> receiver in site2 (as expected). Check it:
>
> Cluster-2 gfsh>list gateways
> GatewayReceiver Section
> Member                         | Port | Sender Count | Senders Connected
> ------------------------------ | ---- | ------------ | -----------------
> 10.0.2.15(server11:8547):41004 | 5175 | 22           | 10.0.2.15(server11:8358):41003, 10.0.2.15(server12:8717):41005, 10.0.2.15(serve
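The failure mode described above can be sketched as a toy model. Everything below is hypothetical and is not Geode's actual locator code; it only illustrates the idea that a locator which counts just the connections it handed out itself (while servers report a load of 0) cannot agree with a restarted or second locator, which starts counting from zero again.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of per-locator connection counting (hypothetical names).
public class ReceiverLoadSketch {

    private final Map<String, Integer> connectionCount = new HashMap<>();

    public ReceiverLoadSketch(String... receivers) {
        for (String receiver : receivers) {
            connectionCount.put(receiver, 0); // servers report 0, per the bug
        }
    }

    // Answer a CLIENT_CONNECTION_REQUEST: hand out the least-loaded receiver
    // and bump this locator's local count for it.
    public String pickReceiver() {
        String least = connectionCount.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .orElseThrow(IllegalStateException::new)
                .getKey();
        connectionCount.merge(least, 1, Integer::sum);
        return least;
    }

    public int count(String receiver) {
        return connectionCount.getOrDefault(receiver, 0);
    }

    public static void main(String[] args) {
        // A single locator balances its own hand-outs perfectly...
        ReceiverLoadSketch locator1 =
                new ReceiverLoadSketch("server11:5175", "server12:5457");
        for (int i = 0; i < 4; i++) {
            locator1.pickReceiver();
        }
        System.out.println("server11: " + locator1.count("server11:5175"));
        System.out.println("server12: " + locator1.count("server12:5457"));

        // ...but a restarted (or second) locator starts from 0 again, because
        // nothing shares the counts -- hence the observed imbalance.
        ReceiverLoadSketch locator2 =
                new ReceiverLoadSketch("server11:5175", "server12:5457");
        System.out.println("fresh locator picks: " + locator2.pickReceiver());
    }
}
```

Since the counts are local to each locator instance, the model reproduces the ticket's claim: balance holds only while all requests go through one long-lived locator.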
[jira] [Updated] (GEODE-10056) Gateway-receiver load maintained only on one locator
[ https://issues.apache.org/jira/browse/GEODE-10056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jakov Varenina updated GEODE-10056: --- Description: When a GW sender wants to create a connection to a receiver, it asks a remote locator where to connect to (which server) using a CLIENT_CONNECTION_REQUEST message. The locator should check the load (actually just the connection count of each GW receiver) and respond with the least-loaded server. But servers do not track the load for their GW receiver acceptor! It is always 0. What happens then? It looks like each locator maintains a map of the load based on the connections it has handed out, so there will be no balancing problems until either the locator restarts or clients get their connections from some other locator in the cluster. Both are quite valid scenarios in my opinion, and the net result is an imbalance in replication connections. (The "How to test?" steps in this revision repeat the steps quoted above.)
[jira] [Updated] (GEODE-10056) Gateway-receiver load maintained only on one locator
[ https://issues.apache.org/jira/browse/GEODE-10056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jakov Varenina updated GEODE-10056: --- Description:
It looks like each locator maintains a map of the load based on the connections it has handed out, so there will be no balancing problems until either the locator restarts or clients get their connections from some other locator in the cluster.
How to test?
Start 2 clusters. Let's call site1 the sending site and site2 the receiving site. The receiving site should have at least 2 locators. Both sites have 2 servers. No regions are needed.
Cluster-1 gfsh>list members
Member Count : 3
Name      | Id
--------- | -----------------------------------------------------
locator10 | 10.0.2.15(locator10:7332:locator):41000 [Coordinator]
server11  | 10.0.2.15(server11:8358):41003
server12  | 10.0.2.15(server12:8717):41005
Cluster-2 gfsh>list members
Member Count : 4
Name      | Id
--------- | -----------------------------------------------------
locator10 | 10.0.2.15(locator10:7562:locator):41001 [Coordinator]
locator11 | 10.0.2.15(locator11:8103:locator):41002
server11  | 10.0.2.15(server11:8547):41004
server12  | 10.0.2.15(server12:8908):41006
Create a GW receiver in Site2 on both servers.
Cluster-2 gfsh>list gateways
GatewayReceiver Section
Member                         | Port | Sender Count | Senders Connected
------------------------------ | ---- | ------------ | -----------------
10.0.2.15(server11:8547):41004 | 5175 | 0            |
10.0.2.15(server12:8908):41006 | 5457 | 0            |
Create a GW sender in Site1 on both servers. Use 10 dispatcher threads for easier observation.
Cluster-1 gfsh>list gateways
GatewaySender Section
GatewaySender Id | Member                         | Remote Cluster Id | Type     | Status                | Queued Events | Receiver Location
---------------- | ------------------------------ | ----------------- | -------- | --------------------- | ------------- | -----------------
senderTo2        | 10.0.2.15(server11:8358):41003 | 2                 | Parallel | Running and Connected | 0             | 10.0.2.15:5457
senderTo2        | 10.0.2.15(server12:8717):41005 | 2                 | Parallel | Running and Connected | 0             | 10.0.2.15:5457
Observe the balance of GW receiver connections in Site2. It will be perfect.
Cluster-2 gfsh>list gateways
GatewayReceiver Section
Member                         | Port | Sender Count | Senders Connected
------------------------------ | ---- | ------------ | -----------------
10.0.2.15(server11:8547):41004 | 5175 | 12           | 10.0.2.15(server12:8717):41005, 10.0.2.15(server12:8717):41005, 10.0.2.15(server11:8358):41003, 10.0.2.15(server11:..
10.0.2.15(server12:8908):41006 | 5457 | 12           | 10.0.2.15(server12:8717):41005, 10.0.2.15(server12:8717):41005, 10.0.2.15(server12:8717):41005, 10.0.2.15(server12:..
12 connections each - 10 payload + 2 ping connections.
Now stop the GW receiver in one server of site2. In Site1 do a stop/start gateway-sender command - all connections will go to the only remaining receiver in site2 (as expected). Check it:
Cluster-2 gfsh>list gateways
GatewayReceiver Section
Member                         | Port | Sender Count | Senders Connected
------------------------------ | ---- | ------------ | -----------------
10.0.2.15(server11:8547):41004 | 5175 | 22           | 10.0.2.15(server11:8358):41003, 10.0.2.15(server12:8717):41005, 10.0.2.15(server11:8358):41003, 10.0.2.15(server11:..
10.0.2.15(server12:8908):41006 | 5457 | 0            |
Now 22 in just one receiver - 20 payload + 1 ping from each sender.
Stop the GW sender in one server in Site1. Connections in the GW receiver drop to half the value (also expected).
Cluster-2 gfsh>list gateways
GatewayReceiver Section
Member                         | Port | Sender Count | Senders Connected
------------------------------ | ---- | ------------ | -----------------
10.0.2.15(server11:8547):41004 | 5175 | 11           | 10.0.2.15(server11:8358):41003, 10.0.2.15(server11:8358):41003, 10.0.2.15(server11:8358):41003, 10.0.2.15(server11:..
10.0.2.15(server12:8908):41006 | 5457 | 0            |
Now 11, as one sender from Site1 is stopped.
Start the GW receiver in the server of site2 (that was stopped before). It will not receive new connections just yet.
Start the GW sender in one server in Site1 (that was stopped before). All connections will land in the receiver started before, so the balance is there.
Cluster-2 gfsh>list gateways Gate
[jira] [Updated] (GEODE-10056) Gateway-receiver load maintained only on one locator
[ https://issues.apache.org/jira/browse/GEODE-10056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Murmann updated GEODE-10056: -- Labels: needsTriage (was: )
> Gateway-receiver load maintained only on one locator
> ----------------------------------------------------
>
> Key: GEODE-10056
> URL: https://issues.apache.org/jira/browse/GEODE-10056
> Project: Geode
> Issue Type: Bug
> Reporter: Jakov Varenina
> Priority: Major
> Labels: needsTriage
>
-- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Assigned] (GEODE-10056) Gateway-receiver load maintained only on one locator
[ https://issues.apache.org/jira/browse/GEODE-10056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jakov Varenina reassigned GEODE-10056: -- Assignee: Jakov Varenina
> Gateway-receiver load maintained only on one locator
> ----------------------------------------------------
>
> Key: GEODE-10056
> URL: https://issues.apache.org/jira/browse/GEODE-10056
> Project: Geode
> Issue Type: Bug
> Reporter: Jakov Varenina
> Assignee: Jakov Varenina
> Priority: Major
> Labels: needsTriage
>
[jira] [Created] (GEODE-10056) Gateway-receiver load maintained only on one locator
Jakov Varenina created GEODE-10056: -- Summary: Gateway-receiver load maintained only on one locator Key: GEODE-10056 URL: https://issues.apache.org/jira/browse/GEODE-10056 Project: Geode Issue Type: Bug Reporter: Jakov Varenina
[jira] [Assigned] (GEODE-10055) AbstractLauncher prints info and debug to stderr instead of stdout
[ https://issues.apache.org/jira/browse/GEODE-10055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mario Kevo reassigned GEODE-10055: -- Assignee: Mario Kevo
> AbstractLauncher prints info and debug to stderr instead of stdout
> ------------------------------------------------------------------
>
> Key: GEODE-10055
> URL: https://issues.apache.org/jira/browse/GEODE-10055
> Project: Geode
> Issue Type: Bug
> Components: logging
> Affects Versions: 1.12.8, 1.13.7, 1.14.3
> Reporter: Mario Kevo
> Assignee: Mario Kevo
> Priority: Major
> Labels: needsTriage
>
> The locator/server launcher logs are printed to stderr in both cases, for
> info and for debug:
> {code:java}
> protected void info(final Object message, final Object... args) {
>   if (args != null && args.length > 0) {
>     System.err.printf(message.toString(), args);
>   } else {
>     System.err.print(message);
>   }
> }
> {code}
> When the output is redirected to other tools, it is treated as an error
> because it arrives on stderr instead of stdout.
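The fix the ticket implies can be sketched as follows. This is a hypothetical sketch, not the actual Geode patch: info-level output is routed to System.out, and System.err is reserved for actual errors, so redirected tooling no longer flags info logs as errors.

```java
// Hypothetical sketch of the corrected launcher logging helpers.
public class LauncherLogSketch {

    protected static void info(final Object message, final Object... args) {
        if (args != null && args.length > 0) {
            System.out.printf(message.toString(), args); // was System.err.printf
        } else {
            System.out.print(message);                   // was System.err.print
        }
    }

    protected static void error(final Object message) {
        System.err.print(message); // stderr stays for real errors
    }

    public static void main(String[] args) {
        info("Locator started on port %d%n", 10334);
    }
}
```

With this split, `launcher start ... > launcher.log` captures the info output while error output still reaches the console.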
[jira] [Updated] (GEODE-10055) AbstractLauncher prints info and debug to stderr instead of stdout
[ https://issues.apache.org/jira/browse/GEODE-10055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Murmann updated GEODE-10055: -- Labels: needsTriage (was: )
> AbstractLauncher prints info and debug to stderr instead of stdout
> ------------------------------------------------------------------
>
> Key: GEODE-10055
> URL: https://issues.apache.org/jira/browse/GEODE-10055
> Project: Geode
> Issue Type: Bug
> Components: logging
> Affects Versions: 1.12.8, 1.13.7, 1.14.3
> Reporter: Mario Kevo
> Priority: Major
> Labels: needsTriage
>
> The locator/server launcher logs are printed to stderr in both cases, for
> info and for debug:
> {code:java}
> protected void info(final Object message, final Object... args) {
>   if (args != null && args.length > 0) {
>     System.err.printf(message.toString(), args);
>   } else {
>     System.err.print(message);
>   }
> }
> {code}
> When the output is redirected to other tools, it is treated as an error
> because it arrives on stderr instead of stdout.
[jira] [Created] (GEODE-10055) AbstractLauncher prints info and debug to stderr instead of stdout
Mario Kevo created GEODE-10055: -- Summary: AbstractLauncher prints info and debug to stderr instead of stdout Key: GEODE-10055 URL: https://issues.apache.org/jira/browse/GEODE-10055 Project: Geode Issue Type: Bug Components: logging Affects Versions: 1.14.3, 1.13.7, 1.12.8 Reporter: Mario Kevo
The locator/server launcher logs are printed to stderr in both cases, for info and for debug:
{code:java}
protected void info(final Object message, final Object... args) {
  if (args != null && args.length > 0) {
    System.err.printf(message.toString(), args);
  } else {
    System.err.print(message);
  }
}
{code}
When the output is redirected to other tools, it is treated as an error because it arrives on stderr instead of stdout.
[jira] [Commented] (GEODE-9268) Fix coredump whenever getFieldNames is called after a cluster restart
[ https://issues.apache.org/jira/browse/GEODE-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493100#comment-17493100 ] ASF GitHub Bot commented on GEODE-9268: --- gaussianrecurrence commented on pull request #806: URL: https://github.com/apache/geode-native/pull/806#issuecomment-1041278583 > @gaussianrecurrence Thanks for picking this back up. I have resolved a few of the comments that you replied to and don't need code changes. Let's clean up the last few and get this merged! I've created revision 1 addressing all your comments and improved some things on the testing part. Glad to hear any other feedback you might have :) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: notifications-unsubscr...@geode.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org > Fix coredump whenever getFieldNames is called after a cluster restart > - > > Key: GEODE-9268 > URL: https://issues.apache.org/jira/browse/GEODE-9268 > Project: Geode > Issue Type: Bug > Components: native client >Reporter: Mario Salazar de Torres >Assignee: Mario Salazar de Torres >Priority: Major > Labels: pull-request-available > > *WHEN* A PdxInstance is fetched from a region > *AND* The whole cluster is restarted, triggering PdxTypeRegistry cleanup. > *AND* getFieldNames is called on the PdxInstance created just before > *THEN* a coredump happens. 
> — > *Additional information:* > Callstack: > {noformat} > [ERROR 2021/05/05 12:57:12.781834 CEST main (139683590957120)] Segmentation > fault happened > 0# handler(int) at nc-pdx/main.cpp:225 > 1# 0x7F0A9F5F13C0 in /lib/x86_64-linux-gnu/libpthread.so.0 > 2# apache::geode::client::PdxType::getPdxFieldTypes() const at > cppcache/src/PdxType.hpp:181 > 3# apache::geode::client::PdxInstanceImpl::getFieldNames() at > cppcache/src/PdxInstanceImpl.cpp:1383 > 4# main at nc-pdx/main.cpp:374 > 5# __libc_start_main in /lib/x86_64-linux-gnu/libc.so.6 > 6# _start in build/pdx{noformat} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (GEODE-9980) Startup of Locator or Server should fail fast if geode.enableGlobalSerialFilter is enabled but fails configuration
[ https://issues.apache.org/jira/browse/GEODE-9980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493093#comment-17493093 ] ASF subversion and git services commented on GEODE-9980: Commit 8f3186cb74ccd2eb1ec8ba648c7a11b08d575962 in geode's branch refs/heads/support/1.14 from Kirk Lund [ https://gitbox.apache.org/repos/asf?p=geode.git;h=8f3186c ] GEODE-9817: Enable customized source set paths for ClassAnalysisRule (#7367) Adds support for customizing source set paths of ClassAnalysisRule. PROBLEM: Modules external to Geode must be structured the same as Geode source code in order to use ClassAnalysisRule and the Analyze*Serializables tests. This is necessary to better facilitate pluggability of modules that need to provide sanctioned serializable lists. SOLUTION: Add source set path customization to ClassAnalysisRule, and introduce a new layer of Analyze*Serializables test base classes that can be directly extended in order to customize source set paths in ClassAnalysisRule. Also includes improvements to some iterating of classes during analysis. [prereq for backport of GEODE-9980 and GEODE-9758] (cherry picked from commit 5d1e91932dff296632916a6ceccfb36039357acd)
> Startup of Locator or Server should fail fast if
> geode.enableGlobalSerialFilter is enabled but fails configuration
> -----------------------------------------------------------------
>
> Key: GEODE-9980
> URL: https://issues.apache.org/jira/browse/GEODE-9980
> Project: Geode
> Issue Type: Bug
> Components: serialization
> Affects Versions: 1.15.0
> Reporter: Kirk Lund
> Assignee: Kirk Lund
> Priority: Major
> Labels: GeodeOperationAPI, blocks-1.15.0, pull-request-available
>
> The following error conditions need better handling, which includes handling
> all errors consistently and causing the startup of a Locator or Server to
> fail if it is unable to honor the setting of
> {{-Dgeode.enableGlobalSerialFilter=true}} for any reason.
> Currently, if {{-Dgeode.enableGlobalSerialFilter=true}} is specified but
> Geode is unable to create a global serial filter, it will log a warning and
> continue running. A user may easily miss that log statement and believe that
> the JVM is running with a properly configured serialization filter.
> 1) The user is trying to secure the JVM very thoroughly and accidentally
> specifies both {{-Djdk.serialFilter}} and
> {{-Dgeode.enableGlobalSerialFilter}}.
> 2) The user runs some non-Geode code in the same JVM that invokes
> {{ObjectInputFilter.Config.setFilter(...)}} directly.
> 3) The user is using a version of Java 8 prior to 8u121 (the release that
> first added {{sun.misc.ObjectInputFilter}}) and specifies
> {{-Dgeode.enableGlobalSerialFilter=true}}. Also, the same behavior occurs if
> they do NOT specify enabling that property.
> 4) {{LocatorLauncher}} or {{ServerLauncher}} is started in a JVM that has
> already created at least one {{ObjectInputStream}}, which will cause
> {{ObjectInputFilter.Config.setFilter(...)}} to fail.
> 5) {{LocatorLauncher}} or {{ServerLauncher}} is started in a Java 8 JVM that
> is not based on OpenJDK (i.e. {{sun.misc.ObjectInputFilter}} does not exist).
> 6) {{LocatorLauncher}} or {{ServerLauncher}} is started in an unforeseen
> environment that causes invocation of
> {{ObjectInputFilter.Config.setFilter(...)}} via Java Reflection to throw
> {{IllegalAccessException}}.
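A fail-fast check for condition (1) above could be sketched like this. The method name, the exception choice, and the startup hook are hypothetical; this is not the actual Geode patch, and it only covers the property-conflict case, not conditions (2) through (6).

```java
// Hypothetical sketch of a fail-fast startup check (condition 1 only):
// if geode.enableGlobalSerialFilter is set but jdk.serialFilter is already
// configured, throw instead of logging a warning and continuing.
public class SerialFilterStartupCheck {

    static void failFastIfMisconfigured() {
        boolean enableGlobal = Boolean.getBoolean("geode.enableGlobalSerialFilter");
        String jdkFilter = System.getProperty("jdk.serialFilter");
        if (enableGlobal && jdkFilter != null) {
            throw new IllegalStateException(
                "geode.enableGlobalSerialFilter cannot be honored: "
                    + "jdk.serialFilter is already configured (" + jdkFilter + ")");
        }
    }

    public static void main(String[] args) {
        System.setProperty("geode.enableGlobalSerialFilter", "true");
        System.setProperty("jdk.serialFilter", "java.**;!*");
        try {
            failFastIfMisconfigured();
            System.out.println("started");
        } catch (IllegalStateException e) {
            System.out.println("startup failed fast: " + e.getMessage());
        }
    }
}
```

The point of the ticket is exactly this shift: an unhonorable security setting should abort startup rather than degrade silently to a warning.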
[jira] [Commented] (GEODE-9817) Allow analyze serializables tests to provide custom source set paths to ClassAnalysisRule
[ https://issues.apache.org/jira/browse/GEODE-9817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493092#comment-17493092 ] ASF subversion and git services commented on GEODE-9817: Commit 8f3186cb74ccd2eb1ec8ba648c7a11b08d575962 in geode's branch refs/heads/support/1.14 from Kirk Lund [ https://gitbox.apache.org/repos/asf?p=geode.git;h=8f3186c ] GEODE-9817: Enable customized source set paths for ClassAnalysisRule (#7367) Adds support for customizing source set paths of ClassAnalysisRule. PROBLEM Modules external to Geode must be structured the same as Geode source code in order to use ClassAnalysisRule and the Analyze*Serializables tests. This is necessary to better facilitate pluggability of modules that need to provide sanctioned serializable lists. SOLUTION Add source set path customization to ClassAnalysisRule, introduce a new layer of Analyze*Serializables test base classes that can be directly extended in order to customize source set paths in ClassAnalysisRule. Also includes improvements to some iterating of classes during analysis. [prereq for backport of GEODE-9980 and GEODE-9758] (cherry picked from commit 5d1e91932dff296632916a6ceccfb36039357acd) > Allow analyze serializables tests to provide custom source set paths to > ClassAnalysisRule > - > > Key: GEODE-9817 > URL: https://issues.apache.org/jira/browse/GEODE-9817 > Project: Geode > Issue Type: Wish > Components: tests >Reporter: Kirk Lund >Assignee: Kirk Lund >Priority: Major > Labels: pull-request-available > Fix For: 1.15.0 > > > In order to make SanctionedSerializablesService and the related tests to be > more pluggable by external modules, I need to make changes to allow analyze > serializables tests to provide custom source set paths to ClassAnalysisRule. -- This message was sent by Atlassian Jira (v8.20.1#820001)
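The pluggability described in the commit message can be sketched as follows. None of these class or method names are Geode's real ClassAnalysisRule API; the sketch only shows the shape of the change: a base class exposes the source-set paths it scans, and an external module with a non-Geode layout overrides them instead of mirroring Geode's directory structure.

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of source-set path customization (not Geode's real API).
public class SourceSetSketch {

    static abstract class AnalyzeSerializablesBase {
        // Default: Geode's own source layout.
        protected List<Path> sourceSetPaths() {
            return Arrays.asList(Paths.get("src", "main", "java"));
        }

        final List<Path> pathsUnderAnalysis() {
            return sourceSetPaths();
        }
    }

    static class ExternalModuleTest extends AnalyzeSerializablesBase {
        @Override
        protected List<Path> sourceSetPaths() {
            // An external module keeps its sources somewhere the default
            // Geode layout would never find.
            return Arrays.asList(Paths.get("modules", "my-module", "java"));
        }
    }

    public static String externalPaths() {
        return new ExternalModuleTest().pathsUnderAnalysis().toString();
    }

    public static void main(String[] args) {
        System.out.println(new ExternalModuleTest().pathsUnderAnalysis());
    }
}
```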
[jira] [Commented] (GEODE-9758) Provide an easy way to configure a process-wide serialization filter for use on Java 8
[ https://issues.apache.org/jira/browse/GEODE-9758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493094#comment-17493094 ] ASF subversion and git services commented on GEODE-9758: Commit 8f3186cb74ccd2eb1ec8ba648c7a11b08d575962 in geode's branch refs/heads/support/1.14 from Kirk Lund [ https://gitbox.apache.org/repos/asf?p=geode.git;h=8f3186c ] GEODE-9817: Enable customized source set paths for ClassAnalysisRule (#7367) Adds support for customizing source set paths of ClassAnalysisRule. PROBLEM: Modules external to Geode must be structured the same as Geode source code in order to use ClassAnalysisRule and the Analyze*Serializables tests. This is necessary to better facilitate pluggability of modules that need to provide sanctioned serializable lists. SOLUTION: Add source set path customization to ClassAnalysisRule, and introduce a new layer of Analyze*Serializables test base classes that can be directly extended in order to customize source set paths in ClassAnalysisRule. Also includes improvements to some iterating of classes during analysis. [prereq for backport of GEODE-9980 and GEODE-9758] (cherry picked from commit 5d1e91932dff296632916a6ceccfb36039357acd)
> Provide an easy way to configure a process-wide serialization filter for use
> on Java 8
> ---------------------------------------------------------------------------
>
> Key: GEODE-9758
> URL: https://issues.apache.org/jira/browse/GEODE-9758
> Project: Geode
> Issue Type: Improvement
> Components: configuration, serialization
> Affects Versions: 1.12.7, 1.13.0, 1.14.0
> Reporter: Jianxia Chen
> Assignee: Kirk Lund
> Priority: Major
> Labels: GeodeOperationAPI, docs, pull-request-available
>
> Provide an easy way to configure a process-wide serialization filter for use
> on Java 8. When enabled, validate-serializable-objects should be enabled and
> the process-wide serialization filter should be configured to accept only JDK
> classes and Geode classes, in addition to anything the user might specify
> with serializable-object-filter.
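For background on what such a filter does, here is a sketch using the stream-level variant, java.io.ObjectInputFilter (Java 9+). GEODE-9758 is about making an equivalent process-wide filter easy to enable on Java 8 (via sun.misc.ObjectInputFilter); the filter patterns below are illustrative only, not Geode's sanctioned accept list.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InvalidClassException;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

// Sketch: serialization filtering with java.io.ObjectInputFilter (Java 9+).
public class SerialFilterSketch {

    static byte[] serialize(Object obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    // Deserialize under a filter pattern; return the class name, or "REJECTED"
    // when the filter refuses the class.
    static String readWithFilter(byte[] payload, String pattern) throws IOException {
        ObjectInputFilter filter = ObjectInputFilter.Config.createFilter(pattern);
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(payload))) {
            ois.setObjectInputFilter(filter);
            return ois.readObject().getClass().getName();
        } catch (InvalidClassException rejected) {
            return "REJECTED";
        } catch (ClassNotFoundException e) {
            throw new IOException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = serialize(new java.util.ArrayList<String>());
        // Accept java.util classes, reject everything else.
        System.out.println(readWithFilter(payload, "java.util.*;!*"));
        // Reject java.util classes explicitly.
        System.out.println(readWithFilter(payload, "!java.util.*;*"));
    }
}
```

A process-wide filter (the subject of the ticket) is set once for the whole JVM rather than per stream, which is exactly why GEODE-9980 wants startup to fail fast when that one-shot installation cannot succeed.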